CN110111347A - Image mark extraction method, device and storage medium - Google Patents

Image mark extraction method, device and storage medium

Info

Publication number
CN110111347A
Authority
CN
China
Prior art keywords
image
value
frame
pixel
gray level
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910316438.3A
Other languages
Chinese (zh)
Other versions
CN110111347B (en)
Inventor
饶洋
彭乐立
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TCL China Star Optoelectronics Technology Co Ltd
Original Assignee
Shenzhen China Star Optoelectronics Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen China Star Optoelectronics Technology Co Ltd filed Critical Shenzhen China Star Optoelectronics Technology Co Ltd
Priority to CN201910316438.3A priority Critical patent/CN110111347B/en
Publication of CN110111347A publication Critical patent/CN110111347A/en
Application granted granted Critical
Publication of CN110111347B publication Critical patent/CN110111347B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/10 — Segmentation; Edge detection
    • G06T 7/12 — Edge-based segmentation
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/10 — Segmentation; Edge detection
    • G06T 7/13 — Edge detection
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 — Image acquisition modality
    • G06T 2207/10016 — Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

This application provides an image mark extraction method, device and storage medium. The method includes: obtaining a preset number of consecutive video frames during video playback; determining the grayscale image corresponding to each video frame to obtain the preset frames of grayscale images; determining edge feature point information from the grayscale images using a preset algorithm; and extracting the image mark from the video according to the edge feature point information of the preset frames of grayscale images. This improves the accuracy with which semi-transparent image marks are extracted from video; the method is simple and the extraction effect is good.

Description

Image mark extraction method, device and storage medium
Technical field
This application relates to the field of display technology and to the field of image processing, and in particular to an image mark extraction method, device and storage medium.
Background technique
In image recognition and image analysis, edge information describes the contour shape of an object well. Edge detection can not only extract the shape features of the object itself, but can also greatly reduce the amount of data to be processed in subsequent image analysis. Image edge detection is therefore a very important technique in image processing, and it has been widely used in fields such as target recognition, target tracking and fingerprint recognition.
However, for the edges of semi-transparent image mark regions, part of the information changes with the background content, so the traditional extraction processes of the prior art based on color information are less accurate and their extraction effect is weak, which limits their use.

In summary, the image mark extraction of the prior art suffers from a weak extraction effect for semi-transparent image regions.
Summary of the invention
This application provides an image mark extraction method, device and storage medium, which can improve the accuracy of extracting semi-transparent image marks from video; the method is simple and the extraction effect is good.
The application provides an image mark extraction method, applied to an electronic device, comprising:

during video playback, obtaining a preset number of consecutive video frames;

determining the grayscale image corresponding to each video frame, obtaining preset frames of grayscale images;

determining edge feature point information from the grayscale images using a preset algorithm; and

extracting the image mark from the video according to the edge feature point information of the preset frames of grayscale images.
The application further provides an image mark extraction device, applied to an electronic device, comprising:

an obtaining module, configured to obtain a preset number of consecutive video frames during video playback;

a determining module, configured to determine the grayscale image corresponding to each video frame, obtaining preset frames of grayscale images;

a computing module, configured to determine edge feature point information from the grayscale images using a preset algorithm; and

an extraction module, configured to extract the image mark from the video according to the edge feature point information of the preset frames of grayscale images.
An embodiment of the present application also provides a computer-readable storage medium storing a plurality of instructions, the instructions being suitable for being loaded by a processor to execute any of the image mark extraction methods described above.
The image mark extraction method, device and storage medium provided by the present application are applied to an electronic device. During video playback, a preset number of consecutive video frames are obtained; the grayscale image corresponding to each video frame is determined, obtaining preset frames of grayscale images; edge feature point information is determined from the grayscale images using a preset algorithm; and the image mark is extracted from the video according to the edge feature point information of the preset frames of grayscale images, thereby improving the accuracy of extracting semi-transparent image marks from video. The method is simple and the extraction effect is good.
Detailed description of the invention
The technical solutions in the embodiments of the present application will be described clearly and completely below in conjunction with the accompanying drawings of the embodiments. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. Based on the embodiments of the present application, all other embodiments obtained by those skilled in the art without creative effort shall fall within the protection scope of this application.
Fig. 1 is a flow diagram of the image mark extraction method provided by the embodiments of the present application.
Fig. 2 is a flow diagram of step S101 provided by the embodiments of the present application.
Fig. 3 is a flow diagram of step S103 provided by the embodiments of the present application.
Fig. 4 is a scene schematic diagram of the image mark extraction method provided by the embodiments of the present application.
Fig. 5 is a structural schematic diagram of the image mark extraction device provided by the embodiments of the present application.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below in conjunction with the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those skilled in the art without creative effort shall fall within the protection scope of this application.
An image mark extraction method is applied to an electronic device and comprises: during video playback, obtaining a preset number of consecutive video frames; determining the grayscale image corresponding to each video frame, obtaining preset frames of grayscale images; determining edge feature point information from the grayscale images using a preset algorithm; and extracting the image mark from the video according to the edge feature point information of the preset frames of grayscale images.
As shown in Fig. 1, which is a flow diagram of the image mark extraction method provided by the embodiments of the present application, the image mark extraction method is applied to an electronic device. The detailed process can be as follows:

S101. During video playback, obtain a preset number of consecutive video frames.

In the present embodiment, the preset consecutive video frames are chosen arbitrarily from the video. A frame is the smallest unit of a video image, a single still picture, equivalent to a single shot on a strip of film. Each frame is a static image; displaying frames in rapid succession creates the illusion of a moving video image. The preset consecutive video frames are shown in Fig. 4.
For example, referring to Fig. 2, above-mentioned steps S101 can specifically include following steps:
S1011. Obtain the RGB value of each pixel in each video frame.

In the present embodiment, each video frame obtained is a color image. A pixel of the video image is composed of the three components R/G/B, where R is the red component of the color image, G is the green component, and B is the blue component; each pixel is thus represented by three bytes, one for each of R/G/B. R/G/B values are commonly divided into 256 levels from 0 to 255, where 0 is darkest (pure black) and 255 is brightest (pure white).
S1012. Convert the RGB value from the RGB channel model to the HSV channel model, obtaining the corresponding HSV value.

In the present embodiment, the RGB channel model is the color mode that divides color into the three color channels red (R), green (G) and blue (B); the RGB channel model assigns each pixel in the image an intensity value in the range 0-255 for each RGB component. The HSV channel model is the mode that divides color into three channels according to the intuitive properties of color: hue (H), saturation (S) and value (V); the HSV channel model better matches the way the human eye perceives color.

The HSV channel model thus represents each pixel by an HSV value (Hue, Saturation, Value; with Value also called Brightness, the model is also known as HSB).
Further, the above step S1012 can be implemented with the following formula (1-1):

V = max / 255 (normalizing the 0-255 maximum into the 0-1 range of V)

S = (max − min) / max (S = 0 when max = 0)

H = (1/6) × (G − B) / (max − min), if max = R (adding 1 when the result is negative);
H = (1/6) × (B − R) / (max − min) + 1/3, if max = G;
H = (1/6) × (R − G) / (max − min) + 2/3, if max = B;
H = 0, if max = min (1-1)

where H is the hue of the pixel, with range 0-1; S is the saturation of the pixel, with range 0-1; V is the value (brightness) of the pixel, with range 0-1; max is the maximum of the pixel's R, G and B values, in the range 0-255; and min is the minimum of the pixel's R, G and B values, in the range 0-255.
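By way of illustration only (Python code is not part of the claimed subject matter), the standard RGB-to-HSV conversion that formula (1-1) denotes can be sketched as follows; the function name `rgb_to_hsv` and the 0-255 integer inputs are assumptions of this sketch:

```python
def rgb_to_hsv(r, g, b):
    """Formula (1-1): R, G, B in 0-255 -> (H, S, V), each in 0-1."""
    mx, mn = max(r, g, b), min(r, g, b)
    v = mx / 255.0                              # value (brightness)
    s = 0.0 if mx == 0 else (mx - mn) / mx      # saturation
    if mx == mn:                                # achromatic: hue undefined, use 0
        h = 0.0
    elif mx == r:
        h = ((g - b) / (mx - mn) / 6.0) % 1.0   # wrap negative results into 0-1
    elif mx == g:
        h = (b - r) / (mx - mn) / 6.0 + 1.0 / 3.0
    else:
        h = (r - g) / (mx - mn) / 6.0 + 2.0 / 3.0
    return h, s, v
```

The result agrees with the Python standard library's `colorsys.rgb_to_hsv` on inputs pre-scaled to 0-1.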
S1013. Determine the grayscale image of the corresponding video frame according to the HSV values of its pixels.

In the present embodiment, the grayscale image determined from the HSV values of the pixels is an image containing only luminance information and no color information, i.e. a black-and-white image in the ordinary sense: the brightness of the grayscale image varies continuously from dark to bright.

It should be pointed out that in the present embodiment H is the hue of the pixel, related to color; S is the saturation, related to the depth of color of the corresponding preset frame (when S = 0 the pixel has only gray); and V is the value, indicating the brightness of the color, though it has no direct connection with light intensity. On this theoretical basis, the embodiments of the present application eliminate the H value of each pixel in the preset video frames, thereby achieving the purpose of determining the grayscale image of the corresponding video frame.
S102. Determine the grayscale image corresponding to each video frame, obtaining the preset frames of grayscale images.

In the present embodiment, obtaining the preset frames of grayscale images requires calculating the grayscale value corresponding to each pixel from its HSV value, which then yields the preset frames of grayscale images. Step S102 can specifically include the following steps:
S1021. Calculate the corresponding target value from the saturation value of the pixel.

In the present embodiment, the formula (1-2) for the target value of the pixel is

T = 1 − S (1-2)

where T is the target value corresponding to the saturation value S of the pixel.
S1022. Multiply the target value by the value (brightness) of the corresponding pixel, obtaining the grayscale value of the pixel.

In the present embodiment, the grayscale value formula (1-3) of the pixel is

SV = T × V (1-3)

where V is the value (brightness) of the pixel and SV is the grayscale value of the pixel; T, V, S and SV all lie in the range 0-1. The S and V values of the pixel are calculated with formula (1-1) above. This calculation eliminates the H value of each pixel in the preset video frames, so the grayscale image of the corresponding video frame can be determined.
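As an illustrative sketch (not part of the claimed method), formulas (1-2) and (1-3) and their application to a whole frame can be written as follows; the function names and the frame layout (a list of rows of (H, S, V) triples) are assumptions of this sketch:

```python
def grayscale_value(s, v):
    """Formulas (1-2)/(1-3): target value T = 1 - S, grayscale value SV = T * V."""
    return (1.0 - s) * v

def frame_to_grayscale(hsv_frame):
    """Apply SV = (1 - S) * V to every (H, S, V) pixel of a frame; the hue H
    is simply discarded, as described for step S1013."""
    return [[grayscale_value(s, v) for (_h, s, v) in row] for row in hsv_frame]
```

A fully saturated pixel thus maps to SV = 0, while an achromatic pixel (S = 0) keeps its brightness V unchanged.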
S1023. Determine the grayscale image of the corresponding video frame from the grayscale values, obtaining the preset frames of grayscale images.

In the present embodiment, it should be pointed out that the grayscale value SV computed in step S1022 is the SV value of a single pixel, while the grayscale image to be determined in this step covers the whole preset video frame and includes the grayscale values of all pixels in the image. The present embodiment therefore needs to traverse the position of every pixel, thereby determining the grayscale image of each preset video frame.

For example, suppose a preset video frame is an image of X rows and Y columns of pixels, where X and Y are natural numbers. The image can be regarded approximately as an X-by-Y matrix, and the coordinate of a single pixel can be written (i, j), denoting the pixel in row i and column j, with 0 < i ≤ X and 0 < j ≤ Y. Traversal proceeds by scanning rows 1 through X, traversing columns 1 through Y within each scanned row: in row i (0 < i ≤ X), the pixels (i, 1), (i, 2), (i, 3) ... (i, Y) are visited; after pixel (i, Y), traversal moves to row i+1 and visits (i+1, 1), (i+1, 2), (i+1, 3) ... (i+1, Y), and so on until the last pixel of row X, i.e. the pixel with coordinate (X, Y). This completes the traversal of the entire preset video frame and thereby determines its grayscale image.

Similarly, traversal can also scan columns 1 through Y, traversing rows 1 through X within each scanned column: in column j (0 < j ≤ Y), the pixels (1, j), (2, j), (3, j) ... (X, j) are visited; after pixel (X, j), traversal moves to column j+1 and visits (1, j+1), (2, j+1), (3, j+1) ... (X, j+1), and so on until the last pixel of column Y, i.e. the pixel with coordinate (X, Y). This likewise completes the traversal of the entire preset video frame and determines its grayscale image.
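The two traversal orders described above can be sketched as coordinate generators; this is an illustrative sketch only, using the patent's 1-based (row, column) coordinates:

```python
def row_major(X, Y):
    """First traversal order: scan rows 1..X, columns 1..Y within each row."""
    for i in range(1, X + 1):
        for j in range(1, Y + 1):
            yield (i, j)

def column_major(X, Y):
    """Second traversal order: scan columns 1..Y, rows 1..X within each column."""
    for j in range(1, Y + 1):
        for i in range(1, X + 1):
            yield (i, j)
```

Both orders visit every pixel exactly once and end at the pixel (X, Y).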
S103. Determine edge feature point information from the grayscale images using a preset algorithm.

In the present embodiment, the preset algorithm is one of the classical edge extraction methods of image processing, which comprise first-order differential operators and second-order differential operators. First-order differential operators include the Roberts operator and the Sobel operator, among others. The Sobel operator performs edge detection by convolving the image with two direction templates, one vertical and one horizontal; because the method is simple, its computation light and its processing fast, it is commonly used in real-time image processing systems. The present embodiment uses the Sobel operator to determine edge feature point information from the grayscale images.
For example, referring to Fig. 3, above-mentioned steps S103 can specifically include following steps:
S1031. Process the grayscale image of each preset video frame with a high-pass filter, obtaining a processed image.

In the present embodiment, the Fourier transform of the grayscale image of the preset video frame is first calculated. The Fourier transform formula (1-4) is:

SV(i, j) = Σ_{k=1..X} Σ_{b=1..Y} SV(k, b) × e^(−j2π(ik/X + jb/Y)),  k = 1, 2, ..., X; b = 1, 2, ..., Y (1-4)

where X and Y are the total numbers of pixels in the horizontal and vertical directions of the preset video frame, and SV(k, b) is the grayscale value of the preset video frame at traversal coordinate (k, b), with k a positive integer from 1 to X and b a positive integer from 1 to Y; SV(i, j) is the Fourier-transformed grayscale value for the pixel in row i, column j. The embodiments of the present application take the pixel at coordinate (i, j) and carry out the following steps.

Then high-pass filtering is applied to the Fourier-transformed grayscale values, with the distance formula (1-5):

d(i, j) = (i² + j²)^(1/2) (1-5)

The transfer function formula (1-6) of the filter satisfies:

H(i, j) = 0, if d(i, j) ≤ d0;  H(i, j) = 1, if d(i, j) > d0 (1-6)

where d0 is the distance from the preset cutoff frequency to the origin, d(i, j) is the distance from point (i, j) to the origin, and H(i, j) is the transfer function of the filter as it traverses pixel (i, j); in the embodiments of the present application, the filter transfer function is the same for every pixel.

Applying the filter transfer function to the Fourier-transformed grayscale values gives the filtered Fourier grayscale values, with formula (1-7):

G(i, j) = SV(i, j) × H(i, j) (1-7)

where SV(i, j) is the Fourier-transformed grayscale value of pixel (i, j), calculated by formula (1-4) above; H(i, j) is the filter transfer function at pixel (i, j), formula (1-6); and G(i, j) is the filtered Fourier grayscale value of pixel (i, j).

Finally the filtered Fourier grayscale values G(i, j) are passed through the inverse Fourier transform, obtaining the high-pass-filtered image. The inverse Fourier transform (1-8) is:

g(k, b) = (1/(XY)) × Σ_{i=1..X} Σ_{j=1..Y} G(i, j) × e^(j2π(ik/X + jb/Y)),  k = 1, 2, ..., X; b = 1, 2, ..., Y (1-8)

where g(i, j) is the filtered image gray value of the pixel at coordinate (i, j). During filtering, the filtered Fourier grayscale values G(i, j) are traversed with i running from 1 to X and j from 1 to Y, up to the last pixel of the filtered image, i.e. the pixel at coordinate (X, Y). This yields the filtered image gray values g(i, j) and thus the high-pass-filtered image, i.e. the processed image.
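Purely as an illustrative sketch of the transform-filter-invert pipeline of step S1031 (not the claimed implementation), the chain can be written as a naive 2-D DFT, an ideal high-pass mask, and the inverse transform. Note one assumption: taking distance to the un-shifted origin treats only the corner frequency (0, 0) as low-frequency; practical implementations usually center the spectrum first:

```python
import cmath

def dft2(img):
    """Naive 2-D discrete Fourier transform of the grayscale values (cf. (1-4))."""
    X, Y = len(img), len(img[0])
    return [[sum(img[k][b] * cmath.exp(-2j * cmath.pi * (i * k / X + j * b / Y))
                 for k in range(X) for b in range(Y))
             for j in range(Y)]
            for i in range(X)]

def idft2(F):
    """Inverse transform (cf. (1-8)), keeping the real part."""
    X, Y = len(F), len(F[0])
    return [[(sum(F[i][j] * cmath.exp(2j * cmath.pi * (i * k / X + j * b / Y))
                  for i in range(X) for j in range(Y)) / (X * Y)).real
             for b in range(Y)]
            for k in range(X)]

def ideal_highpass(img, d0):
    """Zero out frequency components within distance d0 of the origin
    (cf. (1-5)-(1-7)), then transform back."""
    F = dft2(img)
    X, Y = len(img), len(img[0])
    G = [[0.0 if (i * i + j * j) ** 0.5 <= d0 else F[i][j]
          for j in range(Y)]
         for i in range(X)]
    return idft2(G)
```

On a constant image, the ideal high-pass filter removes the DC component and leaves an all-zero result, which matches the intent of suppressing slowly varying background.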
S1032. Sharpen the processed image with the Sobel operator and calculate the conversion gray values of all pixels in the grayscale image of the preset video frame.

In the present embodiment, the conversion gray values are used in the following steps to judge whether a pixel is an edge point. The Sobel operator templates, formula (1-9), are:

S_i template: [−1 0 1; −2 0 2; −1 0 1],  S_j template: [−1 −2 −1; 0 0 0; 1 2 1] (1-9)

The processed image is sharpened with the Sobel operator; the gradient values at point (i, j) are calculated by convolving these templates with the 3×3 neighborhood of the image centered on (i, j), formula (1-10):

S_i(i, j) = g ∗ (S_i template),  S_j(i, j) = g ∗ (S_j template) (1-10)

where g(i, j) is the filtered image gray value of the pixel at coordinate (i, j), and S_i and S_j are the gradient values of the image in the horizontal and vertical directions respectively. The conversion gray value S(i, j) of pixel (i, j) is calculated by formula (1-11):

S(i, j) = (S_i² + S_j²)^(1/2) (1-11)

It should be noted that steps S1031-S1032 finally obtain the conversion gray value for the single pixel at coordinate (i, j); to complete these steps, every pixel grayscale value in the preset frame must be traversed to obtain the conversion gray value of each pixel. The traversal method is described in step S1023 of the present embodiment and is not repeated here.
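The Sobel computation of formulas (1-9)-(1-11) can be sketched as follows for an interior pixel; this is an illustrative sketch with 0-based indices, not the claimed implementation:

```python
KX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal template S_i
KY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical template S_j

def sobel_magnitude(g, i, j):
    """Conversion gray value S(i, j) = sqrt(Si^2 + Sj^2) at interior pixel (i, j),
    from the 3x3 neighborhood of the filtered image g."""
    si = sum(KX[a][b] * g[i - 1 + a][j - 1 + b] for a in range(3) for b in range(3))
    sj = sum(KY[a][b] * g[i - 1 + a][j - 1 + b] for a in range(3) for b in range(3))
    return (si * si + sj * sj) ** 0.5
```

On an image with a vertical step edge, the magnitude is large on the edge column and zero in flat regions, which is what step S1033 thresholds.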
S1033. Judge whether the conversion gray value of the pixel is not less than a first preset threshold; if so, execute the following step S1034; if not, return to the above step S1032.

In the present embodiment, the first preset threshold is a manually set optimal conversion gray value, determined mainly from the brightness values at pixel edges during image edge processing. When the conversion gray value of a pixel is not less than the first preset threshold, that pixel can be set as an edge point, and in this way the edge feature point information of the entire preset frame can be determined.

S1034. Determine the image information of the pixel as edge feature point information.

In the present embodiment, the conversion gray value of the pixel is the value obtained after the sharpening of step S1032. Because sharpening gives this value strong contrast, whether the pixel belongs to the edge feature point information of the preset frame can be judged from the pixel's converted gray value.
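The thresholding of steps S1033-S1034 amounts to collecting the pixels whose conversion gray value reaches the first preset threshold; a minimal sketch (illustrative only, 0-based indices):

```python
def edge_points(S, t1):
    """Steps S1033/S1034: pixels whose conversion gray value S(i, j) is not
    less than the first preset threshold t1 become edge feature points."""
    return {(i, j)
            for i, row in enumerate(S)
            for j, value in enumerate(row)
            if value >= t1}
```

The returned coordinate set is the per-frame edge feature point information consumed by step S104.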
S104. Extract the image mark from the video according to the edge feature point information of the preset frames of grayscale images.

In the present embodiment, the edge feature point information of the preset frames of grayscale images has been calculated by the above step S103. The extraction of the image mark in this step can specifically include the following two steps:
S1041. Cumulatively multiply the edge feature point information of the same pixel across the preset frames of grayscale images.

In the present embodiment, the edge feature point information of the same pixel in the preset frames of grayscale images is cumulatively multiplied. Let the preset frame of grayscale image be frame N, where frame N is any video frame in the video playback; before frame N there are M frames of grayscale images, M ≤ N−1, namely the 1st frame, the 2nd frame, ..., the Mth frame of grayscale images. For example, when M = N−1, the M frames comprise the 1st, 2nd, ..., (N−1)th frames of grayscale images. In the present embodiment, the grayscale value SV of each pixel of frame N is multiplied, position by position, with the SV values of the same pixel in the preceding M frames, obtaining the cumulative product Q, formula (1-12):

Q = Π_{n=N−M..N} SVS_n (1-12)

where SVS_n denotes the grayscale value SV of the same pixel in the nth frame of grayscale image during the cumulative multiplication, with n a positive integer ranging from N−M to N. For example, the cumulative product can be expressed by the following formula (1-13):

Q = SVS_(N−M) × SVS_(N−M+1) × SVS_(N−M+2) × ... × SVS_N (1-13)

Specifically, suppose N is 20, i.e. the preset frame of grayscale image is the 20th frame of grayscale image in the video playback, and M is 3, so that the preceding frames used are the 17th, 18th and 19th frames. The cumulative product then multiplies the grayscale values SV of the same pixel in the 17th, 18th, 19th and 20th frames; i.e. the cumulative product Q is the product of the SV values of the same pixel across these 4 frames of images.
It should be noted that in the present embodiment this cumulative product is the cumulative product for a single same pixel across the preset frames of grayscale images; to complete step S1041, the grayscale value SV of every pixel in the preset frames must be traversed to obtain the cumulative product of the grayscale values of each pixel. The traversal method is described in step S1023 of the present embodiment and is not repeated here.
S1042. Extract, as the image mark, the figure formed by the pixels whose cumulative product is greater than a second preset threshold.

In the present embodiment, as described in the preceding steps, the saturation S and value V both range over 0-1, and so does the grayscale value SV. After the cumulative multiplication step, when the edge feature point information of a pixel is the same across frames its grayscale value varies little under cumulative multiplication, whereas when the edge point information differs across frames the cumulative product of the pixel's grayscale values SV approaches a value close to 0. Therefore the second preset threshold in the embodiments of the present application can be a value close to 0, for example 0.05. Further, the user can set the preset threshold according to the desired precision of image mark extraction. For example, if the original preset threshold is 0.05 and, during image mark extraction, the user subjectively or objectively finds the extraction effect of this mark extraction method weak, the preset threshold can be manually adjusted to a value smaller than the original one, such as 0.005, thereby strengthening the precision of the image mark extraction method.
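The steps S1041-S1042 above can be sketched as follows (an illustration only; function names and the frame layout are assumptions). Pixels whose SV values stay stable across frames, such as a static semi-transparent mark, keep a cumulative product well above the threshold, while fluctuating background pixels drive their product toward 0:

```python
def cumulative_product(frames):
    """Formula (1-12): per-pixel product of SV values across the given
    grayscale frames (frame N together with its preceding M frames)."""
    X, Y = len(frames[0]), len(frames[0][0])
    q = [[1.0] * Y for _ in range(X)]
    for frame in frames:
        for i in range(X):
            for j in range(Y):
                q[i][j] *= frame[i][j]
    return q

def mark_mask(frames, threshold=0.05):
    """Step S1042: keep as image-mark pixels those whose cumulative product
    exceeds the second preset threshold."""
    q = cumulative_product(frames)
    return [[value > threshold for value in row] for row in q]
```

With four frames where one pixel holds a steady SV of 0.9 while its neighbor fluctuates, only the steady pixel survives the 0.05 threshold.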
It can be seen from the above that the image mark extraction method provided by this embodiment is applied to an electronic device: during video playback, a preset number of consecutive video frames are obtained; the grayscale image corresponding to each video frame is determined, obtaining preset frames of grayscale images; edge feature point information is determined from the grayscale images using a preset algorithm; and the image mark is extracted from the video according to the edge feature point information of the preset frames of grayscale images. This improves the accuracy of extracting semi-transparent image marks from video; the method is simple and the extraction effect is good.
Following the method described in the above embodiments, the present embodiment gives a further description from the perspective of the image mark extraction device, which can be implemented as an independent entity.

The present embodiment provides an image mark extraction device and system.

Referring to Fig. 4, the system may include any image mark extraction device provided by the embodiments of the present invention; the image mark extraction device can specifically be integrated in an electronic device such as a server or a terminal.
The electronic device obtains a preset number of consecutive video frames during video playback; determines the grayscale image corresponding to each video frame, obtaining preset frames of grayscale images; determines edge feature point information from the grayscale images using a preset algorithm; and extracts the image mark from the video according to the edge feature point information of the preset frames of grayscale images.

Here, the preset consecutive video frames are images arbitrarily chosen from the video. The edge feature point information may include texture, shape, spatial relationships and so on. Specifically, a depth model can process the preset consecutive video frames, determine the grayscale image corresponding to each video frame to obtain the preset frames of grayscale images, determine the edge feature point information of the preset frames of grayscale images, and then extract the image mark from the edge feature point information. For example, when the user obtains preset consecutive video frames containing an image mark, the mark in the preset video frames can be extracted through the determination, analysis and extraction performed by the image mark device.
Referring to Fig. 5, which describes in detail the image mark extraction device provided by the embodiments of the present application, the device is applied to an electronic device, which may include equipment with an image display function such as a mobile phone, a tablet computer or a personal computer. The image mark extraction device may include an obtaining module 10, a determining module 20, a computing module 30 and an extraction module 40, in which:
(1) Obtaining module 10

The obtaining module 10 is configured to obtain a preset number of consecutive video frames during video playback.

In the present embodiment, the preset consecutive video frames are chosen arbitrarily from the video. A frame is the smallest unit of a video image, a single still picture, equivalent to a single shot on a strip of film. Each frame is a static image; displaying frames in rapid succession creates the illusion of a moving video image. The preset consecutive video frames are shown in Fig. 4.
For example, when obtaining the preset consecutive video frames, the obtaining module 10 is specifically configured to:

(11) obtain the RGB value of each pixel in each video frame.

In the present embodiment, each video frame obtained is a color image. A pixel of the video image is composed of the three components R/G/B, where R is the red component of the color image, G is the green component, and B is the blue component; each pixel is thus represented by three bytes, one for each of R/G/B. R/G/B values are commonly divided into 256 levels from 0 to 255, where 0 is darkest (pure black) and 255 is brightest (pure white).
(12) Convert the RGB value from the RGB channel model to the HSV channel model, obtaining the corresponding HSV value.

In the present embodiment, the RGB channel model is the color mode that divides color into the three color channels red (R), green (G) and blue (B); the RGB channel model assigns each pixel in the image an intensity value in the range 0-255 for each RGB component. The HSV channel model is the mode that divides color into three channels according to the intuitive properties of color: hue (H), saturation (S) and value (V); the HSV channel model better matches the way the human eye perceives color.

The HSV channel model thus represents each pixel by an HSV value (Hue, Saturation, Value; with Value also called Brightness, the model is also known as HSB).
Further, the conversion from the RGB channel model into the corresponding HSV channel model is specifically realized by the following formula (2-1):

max = max(R, G, B), min = min(R, G, B)

V = max / 255

S = (max − min) / max (S = 0 when max = 0)

H = (1/6) × (G − B) / (max − min), when max = R (add 1 when the result is negative)
H = (1/6) × (B − R) / (max − min) + 1/3, when max = G
H = (1/6) × (R − G) / (max − min) + 2/3, when max = B
H = 0, when max = min (2-1)

Wherein, H is the hue of the pixel, with range 0–1; S is the saturation of the pixel, with range 0–1; V is the lightness of the pixel, with range 0–1; max is the maximum of the pixel's R, G and B values, in the range 0–255; min is the minimum of the pixel's R, G and B values, in the range 0–255.
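As an illustration of formula (2-1), the conversion can be sketched in Python as follows (a minimal sketch assuming 8-bit RGB input and the normalized 0–1 output ranges stated above; the function name is hypothetical):

```python
def rgb_to_hsv(r, g, b):
    """Convert 8-bit R/G/B (0-255) to H/S/V, each in the range 0-1."""
    mx, mn = max(r, g, b), min(r, g, b)
    v = mx / 255.0                           # lightness: brightest component
    s = 0.0 if mx == 0 else (mx - mn) / mx   # saturation: relative spread
    if mx == mn:                             # gray pixel: hue undefined, use 0
        h = 0.0
    elif mx == r:
        h = ((g - b) / (mx - mn) / 6.0) % 1.0
    elif mx == g:
        h = (b - r) / (mx - mn) / 6.0 + 1.0 / 3.0
    else:                                    # mx == b
        h = (r - g) / (mx - mn) / 6.0 + 2.0 / 3.0
    return h, s, v
```

For instance, pure red (255, 0, 0) yields H = 0, S = 1, V = 1, matching the hue/saturation/lightness ranges described above.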
(13) Determine the gray level image of the corresponding frame of the video image according to the HSV values of its pixels.

In the present embodiment, the gray level image determined according to the HSV values of the pixels contains only luminance information and no color information, i.e. a black-and-white image in the ordinary sense: the brightness of the gray level image varies continuously from dark to bright.

It should be pointed out that, in the present embodiment, H is the hue of the pixel and is related to color; S is the saturation, related to the color depth of the corresponding image of the preset frames: when S = 0, the pixel has only gray scale; V is the lightness, indicating how bright the color is, but having no direct relation to light intensity. On this theoretical basis, the embodiment of the present application eliminates the H value of each pixel in the preset frames of video images, thereby achieving the purpose of determining the gray level image of the corresponding frame of the video image.
(2) Determining module 20

The determining module 20 is configured to determine the gray level image corresponding to each frame of the video images, obtaining the preset frames of gray level images.

In the present embodiment, to obtain the preset frames of gray level images, the grayscale image value corresponding to each pixel needs to be calculated from the pixel's HSV value. The determining module 20 is specifically configured to:

(21) Calculate the corresponding target value according to the saturation value of the pixel.

In the present embodiment, the target value of the pixel is given by formula (2-2):

T = 1 − S (2-2)

Wherein, T is the target value corresponding to the saturation value S of the pixel.
(22) Multiply the target value by the lightness value of the corresponding pixel to obtain the grayscale image value of the pixel.

In the present embodiment, the grayscale image value of the pixel is given by formula (2-3):

SV = T × V (2-3)

Wherein, V is the lightness of the pixel and SV is the grayscale image value of the pixel; T, V, S and SV all have the range 0–1. Further, the S and V values of the pixel are calculated by the above formula (2-1). This calculation eliminates the H value of each pixel in the preset frames of video images, so that the gray level image of the corresponding frame of the video image can be determined.
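Formulas (2-2) and (2-3) combine to SV = (1 − S) × V. A minimal Python sketch of this per-pixel computation (the function name is hypothetical):

```python
def grayscale_image_value(s, v):
    """Grayscale image value SV = (1 - S) x V, per formulas (2-2) and (2-3).

    s, v: saturation and lightness of one pixel, each in the range 0-1.
    Returns SV in the range 0-1; a fully saturated pixel (s == 1) maps to 0.
    """
    t = 1.0 - s        # formula (2-2): target value
    return t * v       # formula (2-3): grayscale image value
```

Note that H plays no part here, which is exactly how the embodiment eliminates the hue information.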
(23) Determine the gray level image of the corresponding frame of the video image according to the grayscale image values, obtaining the preset frames of gray level images.

In the present embodiment, it should be pointed out that the grayscale image value SV computed above is the SV value of a single pixel, while what the determining module 20 must determine is the gray level image of an entire frame of the video image, which comprises the grayscale image values of all pixels of the image. Therefore, the present embodiment needs to traverse the position of every pixel, and thereby determine the gray level image of each of the preset frames of video images.

For example, assume a preset frame of the video image is an image of X rows and Y columns of pixels, wherein X and Y are natural numbers; the image can be regarded approximately as an X-row, Y-column matrix, and the coordinate of a single pixel can be expressed as (i, j), where (i, j) identifies the pixel at the i-th row and j-th column, with 0 < i ≤ X and 0 < j ≤ Y. The traversal proceeds by scanning rows 1 through X, and within each scanned row traversing columns 1 through Y. Suppose the traversal reaches the i-th row (0 < i ≤ X): the row is traversed as (i, 1), (i, 2), (i, 3) … (i, Y); when the traversal reaches pixel (i, Y), it moves to row i+1 and traverses (i+1, 1), (i+1, 2), (i+1, 3) … (i+1, Y). The operation repeats in this way until the last pixel of row X, i.e. the pixel with coordinate (X, Y), is reached, completing the traversal of the entire preset frame of the video image and thereby determining its gray level image.

Similarly, the traversal may instead scan columns 1 through Y, and within each scanned column traverse rows 1 through X. Suppose the traversal reaches the j-th column (0 < j ≤ Y): the column is traversed as (1, j), (2, j), (3, j) … (X, j); when the traversal reaches pixel (X, j), it moves to column j+1 and traverses (1, j+1), (2, j+1), (3, j+1) … (X, j+1). The operation repeats in this way until the last pixel of column Y, i.e. the pixel with coordinate (X, Y), is reached, likewise completing the traversal of the entire preset frame of the video image and thereby determining its gray level image.
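The row-major traversal described above can be sketched as follows (a hypothetical illustration; hsv_pixels is assumed to be an X-row, Y-column grid of (h, s, v) tuples with components in 0–1):

```python
def gray_level_image(hsv_pixels):
    """Row-major traversal: visit (1,1)...(1,Y), then (2,1)...(2,Y), ... (X,Y).

    hsv_pixels: X-row, Y-column grid of (h, s, v) tuples, each component in 0-1.
    Returns the gray level image as a grid of SV = (1 - s) * v values.
    """
    gray = []
    for row in hsv_pixels:                       # rows 1..X
        gray_row = []
        for h, s, v in row:                      # columns 1..Y within the row
            gray_row.append((1.0 - s) * v)       # formula (2-3); H is discarded
        gray.append(gray_row)
    return gray
```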
(3) Computing module 30

The computing module 30 is configured to determine edge feature point information from the gray level images using a preset algorithm.

In the present embodiment, the preset algorithm includes classical edge extraction methods of image processing, specifically first-order differential operators and second-order differential operators. First-order differential operators include the Roberts operator and the Sobel operator, among others. The Sobel operator performs edge detection by convolving the image with two direction templates, one vertical and one horizontal; because the method is simple, computationally light and fast, it is commonly used in real-time image processing systems. What the present embodiment uses is precisely the Sobel operator to determine edge feature point information from the gray level images.
For example, the above computing module 30 may specifically be configured to:

(31) Filter the gray level image of each of the preset frames of video images with a high-pass filter to obtain a processed image.

In the present embodiment, the Fourier transform of the gray level image of the preset frame of the video image is calculated first, with the Fourier transform formula (2-4):

SV(i, j) = Σ_{k=1..X} Σ_{b=1..Y} SV(k, b) × e^(−j2π(ik/X + jb/Y)), k = 1, 2, …, X; b = 1, 2, …, Y (2-4)

Wherein, X and Y are respectively the total numbers of pixels of the preset frame of the video image in the horizontal and vertical directions; SV(k, b) is the grayscale image value of the preset frame of the video image at the traversal coordinate (k, b), wherein k is a positive integer from 1 to X and b is a positive integer from 1 to Y; SV(i, j) is the Fourier-transformed grayscale image value at the pixel of the i-th row and j-th column; the embodiment of the present application takes the pixel at coordinate (i, j) as an example.

Then high-pass filtering is performed on the Fourier-transformed grayscale image values. The distance used by the filter is given by formula (2-5):

d(i, j) = √(i² + j²) (2-5)

The transfer function of the filter satisfies formula (2-6):

H(i, j) = 0, when d(i, j) ≤ d0
H(i, j) = 1, when d(i, j) > d0 (2-6)

Wherein, d0 is the distance from the preset cutoff frequency to the origin; d(i, j) is the distance from the point (i, j) to the origin; H(i, j) is the transfer function of the filter at the traversed pixel (i, j); in the embodiment of the present application, the transfer function of the filter is the same for every pixel.

Applying the transfer function of the filter to the Fourier-transformed grayscale image values yields the filtered Fourier grayscale image values, given by formula (2-7):

G(i, j) = SV(i, j) × H(i, j) (2-7)

Wherein, SV(i, j) is the Fourier-transformed grayscale image value at the pixel with coordinate (i, j), calculated by the above formula (2-4); H(i, j) is the transfer function of the filter at the traversed pixel (i, j), given by formula (2-6); G(i, j) is the filtered Fourier grayscale image value of the pixel (i, j).

Finally, the filtered Fourier grayscale image values G(i, j) are subjected to the inverse Fourier transform to obtain the high-pass-filtered image, with the inverse Fourier transform formula (2-8):

g(k, b) = (1 / (X × Y)) × Σ_{i=1..X} Σ_{j=1..Y} G(i, j) × e^(j2π(ik/X + jb/Y)), k = 1, 2, …, X; b = 1, 2, …, Y (2-8)

Wherein, g(i, j) is the filtered image gray value of the pixel with coordinate (i, j). During filtering, the filtered Fourier grayscale image values G(i, j) must go through the traversal process, with i taking values from 1 to X and j from 1 to Y, until the coordinate (i, j) reaches the last pixel of the filtered image, i.e. the pixel with coordinate (X, Y); this yields the filtered image gray values g(i, j) and thereby the image obtained by high-pass filtering, i.e. the processed image.
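The high-pass filtering of formulas (2-4) through (2-8) can be sketched with NumPy's FFT routines (a sketch under the assumption of an ideal high-pass filter measured from the DC origin, as in formula (2-6); the wrapped-distance handling of negative frequencies is an implementation detail the patent does not specify):

```python
import numpy as np

def highpass_filter(gray, d0):
    """Ideal high-pass filter: FFT, zero frequencies within d0 of DC, inverse FFT.

    gray: 2-D float array of grayscale image values SV in 0-1.
    d0:   cutoff distance from the DC origin, formula (2-6).
    """
    F = np.fft.fft2(gray)                        # formula (2-4)
    X, Y = gray.shape
    # Distance of each frequency sample to the DC origin, wrapping so that
    # negative frequencies (stored at the high end) are measured correctly.
    fi = np.minimum(np.arange(X), X - np.arange(X))[:, None]
    fj = np.minimum(np.arange(Y), Y - np.arange(Y))[None, :]
    d = np.sqrt(fi**2 + fj**2)                   # formula (2-5)
    H = (d > d0).astype(float)                   # formula (2-6)
    G = F * H                                    # formula (2-7)
    return np.real(np.fft.ifft2(G))              # formula (2-8)
```

With d0 = 0 only the DC component is removed, i.e. the mean of the image is subtracted; larger cutoffs additionally suppress slowly varying background, leaving edges.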
(32) Sharpen the processed image with the Sobel operator, and calculate the conversion gray values of all pixels in the gray level image of the preset frame of the video image.

In the present embodiment, the conversion gray values are used in the following step to judge whether a pixel is an edge point. The Sobel operator uses the two direction templates of formula (2-9):

horizontal: [−1 0 1; −2 0 2; −1 0 1], vertical: [−1 −2 −1; 0 0 0; 1 2 1] (2-9)

The processed image is sharpened using the Sobel operator; for a point (i, j), the gradient values are calculated by the following formula (2-10):

Si = [g(i−1, j+1) + 2g(i, j+1) + g(i+1, j+1)] − [g(i−1, j−1) + 2g(i, j−1) + g(i+1, j−1)]
Sj = [g(i+1, j−1) + 2g(i+1, j) + g(i+1, j+1)] − [g(i−1, j−1) + 2g(i−1, j) + g(i−1, j+1)] (2-10)

Wherein, g(i, j) is the filtered image gray value of the pixel with coordinate (i, j), and Si and Sj are respectively the gradient values of the image in the horizontal and vertical directions. The conversion gray value S(i, j) of the pixel (i, j) is calculated by the following formula (2-11):

S(i, j) = √(Si² + Sj²) (2-11)

It should be noted that what the computing module 30 finally obtains here is the conversion gray value of the single pixel at coordinate (i, j); the computing module 30 needs to traverse the grayscale image value of every pixel in the preset frame of the image to obtain the conversion gray value of each pixel. This traversal is described under the determining module 20 of the present embodiment and is not repeated here.
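Formulas (2-9)–(2-11) can be sketched in plain Python as follows (interior pixels only; border pixels are left at 0 here, an assumption the patent does not specify):

```python
import math

def sobel_conversion_gray(g):
    """Conversion gray values S(i,j) = sqrt(Si^2 + Sj^2) via the Sobel templates.

    g: 2-D list of filtered image gray values. Border pixels are left at 0.
    """
    X, Y = len(g), len(g[0])
    S = [[0.0] * Y for _ in range(X)]
    for i in range(1, X - 1):
        for j in range(1, Y - 1):
            # Formula (2-10): horizontal and vertical gradient components.
            si = (g[i-1][j+1] + 2*g[i][j+1] + g[i+1][j+1]) \
               - (g[i-1][j-1] + 2*g[i][j-1] + g[i+1][j-1])
            sj = (g[i+1][j-1] + 2*g[i+1][j] + g[i+1][j+1]) \
               - (g[i-1][j-1] + 2*g[i-1][j] + g[i-1][j+1])
            S[i][j] = math.sqrt(si*si + sj*sj)   # formula (2-11)
    return S
```

On a vertical step edge, for example, Si is large and Sj is zero, so the conversion gray value peaks along the edge, which is what the threshold test of step (33) relies on.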
(33) Judge whether the conversion gray value of the pixel is not less than a first preset threshold.

In the present embodiment, the first preset threshold is a manually set optimal conversion gray value, determined mainly according to the brightness values of edge pixels in image edge processing. When the conversion gray value of a pixel is not less than the first preset threshold, the pixel may be set as an edge point, and the edge feature point information of the entire preset frame of the image can thereby be determined.

(34) When the conversion gray value of the pixel is not less than the first preset threshold, determine the image information of the pixel as edge feature point information.

In the present embodiment, the conversion gray value of the pixel is the value obtained after sharpening by the computing module 30; because sharpening gives this value a strong contrast, whether the pixel belongs to the edge feature point information of the preset frame of the image can be judged from the pixel's converted gray value.
(4) Extraction module 40

The extraction module 40 is configured to extract the image mark from the video according to the edge feature point information of the preset frames of gray level images.

In the present embodiment, the edge feature point information of the preset frames of gray level images is calculated by the above computing module 30, and the extraction module 40 is specifically configured to:

(41) Cumulatively multiply the edge feature point information of the same pixel across the preset frames of gray level images.

In the present embodiment, the edge feature point information of the same pixel across the preset frames of gray level images is cumulatively multiplied. Let the latest of the preset frames of gray level images be the N-th frame, wherein the N-th frame is any frame of the video images during video playback, preceded by M frames of gray level images, M ≤ N − 1, wherein the M frames of gray level images are: the (N−M)-th frame of gray level image, the (N−M+1)-th frame of gray level image … the (N−1)-th frame of gray level image. For example, when M = N − 1, the M frames of gray level images are: the 1st frame of gray level image, the 2nd frame of gray level image … the (N−1)-th frame of gray level image. That is, in the present embodiment, the grayscale image values SV at the same pixel position in the N-th frame and the preceding M frames of images are multiplied together, obtaining the cumulative product Q by formula (2-12):

Q = ∏_{n=N−M..N} SVS_n (2-12)

Wherein, SVS_n denotes the grayscale image value SV of the same pixel in the n-th frame of gray level image when cumulatively multiplying up to the N-th frame, wherein n takes positive integer values from N−M to N. Expanded, the cumulative product can be expressed by the following formula (2-13):

Q = SVS_(N−M) × SVS_(N−M+1) × SVS_(N−M+2) … SVS_N (2-13)

Specifically, assume N is 20, i.e. the preset frames of gray level images end at the 20th frame of gray level image in the video playback, and M is 3, so that the M frames of gray level images are the 17th, 18th and 19th frames. The cumulative product is then obtained by cumulatively multiplying the grayscale image values SV of the images from the 17th frame to the 20th frame, i.e. the grayscale image values SV of the same pixel in the 17th, 18th, 19th and 20th frames are cumulatively multiplied, and the cumulative product Q is the product of the SV values of the same pixel across these 4 frames of images.
It should be noted that, in the present embodiment, the cumulative product is the cumulative product at a single pixel position of the preset frames of gray level images; the extraction module 40 needs to traverse the grayscale image value SV of every pixel in the preset frames of images to obtain the cumulative product of the grayscale image values of each pixel. This traversal is described under the determining module 20 of the present embodiment and is not repeated here.

(42) Extract, as the image mark, the figure formed by the pixels whose cumulative product is greater than a second preset threshold.

In the present embodiment, as described in the preceding steps, the saturation value S and the lightness V both have the range 0–1, and so does the grayscale image value SV. Because of the cumulative multiplication, when the edge feature point information of a pixel is the same across frames, the cumulative product of the pixel's grayscale image values varies little during the cumulative multiplication; when the edge feature point information differs across frames, the cumulative product of the pixel's grayscale image values SV approaches a value close to 0. Therefore, the second preset threshold in the embodiment of the present application may be a value approaching 0, for example 0.05. Further, the user may set the preset threshold according to the desired precision of image mark extraction. For example, if the original preset threshold is 0.05 and the user subjectively or objectively finds the extraction effect of this mark extraction method weak during image mark extraction, the preset threshold may be manually adjusted to a value smaller than the original one, for example 0.005, thereby strengthening the precision of the image mark extraction method.
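Steps (41) and (42) can be sketched as follows (plain Python; the 0.05 default is the example threshold mentioned above, and the function name is hypothetical):

```python
def extract_mark_mask(gray_frames, threshold=0.05):
    """Cumulatively multiply the grayscale image values SV of the same pixel
    across frames (formula (2-12)) and keep pixels whose product exceeds the
    second preset threshold -- these pixels form the image mark.

    gray_frames: list of 2-D grids of SV values in 0-1, one grid per frame.
    Returns a boolean mask of the same shape marking the extracted pixels.
    """
    X, Y = len(gray_frames[0]), len(gray_frames[0][0])
    mask = [[False] * Y for _ in range(X)]
    for i in range(X):
        for j in range(Y):
            q = 1.0
            for frame in gray_frames:      # SVS_(N-M) ... SVS_N
                q *= frame[i][j]           # cumulative product Q
            mask[i][j] = q > threshold     # stable pixels: the static mark
    return mask
```

Pixels belonging to a static mark keep similar SV values from frame to frame, so Q stays well above the threshold; moving scene content drives Q toward 0 and is discarded.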
In specific implementation, each of the above units may be realized as an independent entity, or combined arbitrarily and realized as the same entity or several entities; for the specific implementation of each of the above units, reference may be made to the foregoing method embodiments, which are not repeated here.

Those skilled in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be completed by instructions, or by instructions controlling related hardware; the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor. To this end, an embodiment of the present application provides a storage medium in which a plurality of instructions are stored; the instructions can be loaded by a processor to execute the steps in any image mark extraction method provided by the embodiments of the present application.

The storage medium may include: a read-only memory (Read Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disc, or the like.

Since the instructions stored in the storage medium can execute the steps in any image mark extraction method provided by the embodiments of the present application, the beneficial effects achievable by any image mark extraction method provided by the embodiments of the present application can be realized; for details, see the foregoing embodiments, which are not repeated here.

For the specific implementation of each of the above operations, reference may be made to the foregoing embodiments, which are not repeated here.

In conclusion, although the present application has been disclosed above with preferred embodiments, the preferred embodiments are not intended to limit the present application; those skilled in the art may make various changes and modifications without departing from the spirit and scope of the present application, and therefore the protection scope of the present application is subject to the scope defined by the claims.

Claims (10)

1. An image mark extraction method, characterized in that the method comprises:
during video playback, obtaining a preset number of consecutive frames of video images;
determining the gray level image corresponding to each frame of the video images, obtaining preset frames of gray level images;
determining edge feature point information from the gray level images using a preset algorithm;
extracting an image mark from the video according to the edge feature point information of the preset frames of gray level images.
2. The image mark extraction method according to claim 1, characterized in that the obtaining a preset number of consecutive frames of video images specifically comprises:
obtaining the RGB value of each pixel in each frame of the video images;
converting the RGB values from an RGB channel model into an HSV channel model to obtain corresponding HSV values;
determining the gray level image of the corresponding frame of the video image according to the HSV values of the pixels.
3. The image mark extraction method according to claim 2, characterized in that the HSV value comprises a hue value, a saturation value and a lightness value, and the determining the gray level image of the corresponding frame of the video image according to the HSV values of the pixels specifically comprises:
calculating a corresponding target value according to the saturation value of the pixel;
multiplying the target value by the lightness value of the corresponding pixel to obtain the grayscale image value of the pixel;
determining the gray level image of the corresponding frame of the video image according to the grayscale image values, obtaining the preset frames of gray level images.
4. The image mark extraction method according to claim 1, characterized in that the determining edge feature point information from the gray level images using a preset algorithm specifically comprises:
filtering the gray level image of each of the preset frames of video images with a high-pass filter to obtain a processed image;
sharpening the processed image with a Sobel operator, and calculating the conversion gray values of all pixels in the gray level image of the preset frame of the video image;
judging whether the conversion gray value of the pixel is not less than a first preset threshold, and if so, determining the image information of the pixel as edge feature point information.
5. The image mark extraction method according to claim 1, characterized in that the extracting an image mark from the video according to the edge feature point information of the preset frames of gray level images specifically comprises:
cumulatively multiplying the edge feature point information of the same pixel across the preset frames of gray level images;
extracting, as the image mark, the figure formed by the pixels whose cumulative product is greater than a second preset threshold.
6. An image mark extraction apparatus, characterized in that the apparatus comprises:
an obtaining module, configured to obtain a preset number of consecutive frames of video images during video playback;
a determining module, configured to determine the gray level image corresponding to each frame of the video images, obtaining preset frames of gray level images;
a computing module, configured to determine edge feature point information from the gray level images using a preset algorithm;
an extraction module, configured to extract an image mark from the video according to the edge feature point information of the preset frames of gray level images.
7. The image mark extraction apparatus according to claim 6, characterized in that the obtaining module is specifically configured to:
obtain the RGB value of each pixel in each frame of the video images;
convert the RGB values from an RGB channel model into an HSV channel model to obtain corresponding HSV values;
determine the gray level image of the corresponding frame of the video image according to the HSV values of the pixels.
8. The image mark extraction apparatus according to claim 6, characterized in that the HSV value comprises a hue value, a saturation value and a lightness value, and the determining module is specifically configured to:
calculate a corresponding target value according to the saturation value of the pixel;
multiply the target value by the lightness value of the corresponding pixel to obtain the grayscale image value of the pixel;
determine the gray level image of the corresponding frame of the video image according to the grayscale image values, obtaining the preset frames of gray level images.
9. The image mark extraction apparatus according to claim 6, characterized in that the extraction module is specifically configured to:
cumulatively multiply the edge feature point information of the same pixel across the preset frames of gray level images;
extract, as the image mark, the figure formed by the pixels whose cumulative product is greater than a preset threshold.
10. A computer-readable storage medium, characterized in that a plurality of instructions are stored in the storage medium, the instructions being suitable for being loaded by a processor to perform the image mark extraction method according to any one of claims 1 to 5.
CN201910316438.3A 2019-04-19 2019-04-19 Image sign extraction method, device and storage medium Active CN110111347B (en)

Publications (2)

Publication Number Publication Date
CN110111347A true CN110111347A (en) 2019-08-09
CN110111347B CN110111347B (en) 2021-04-27
