CN107527015B - Human eye video positioning method and system based on skin color detection - Google Patents
- Publication number: CN107527015B (application CN201710600050.7A)
- Authority
- CN
- China
- Prior art keywords: block, skin color, human eye, note, judging
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/255—Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
Abstract
The embodiment of the invention provides a human eye video positioning method and system based on skin color detection, in the technical field of image processing. On the one hand, the method builds an eye positioning technique on skin color detection; on the other hand, it determines the eye positions of related image frames in the video from video compression-domain information, improving the timeliness of eye positioning in video.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a human eye video positioning method and system based on skin color detection.
Background
With the rapid development of multimedia and computer network technology, video is becoming one of the mainstream carriers of information. Whether for face-based video retrieval or online video beautification, an accurate and fast eye positioning technique yields twice the result with half the effort. At present, mainstream eye positioning techniques for still images are computationally heavy, which restricts online use of the algorithms and the efficiency of secondary development. Moreover, when such eye positioning techniques are applied to video, the temporal correlation of the video is not exploited: the image method is simply repeated frame by frame, which further reduces implementation efficiency.
Disclosure of Invention
The embodiment of the invention aims to provide a human eye video positioning method based on skin color detection, to solve the prior-art problems that, when eye positioning is applied to video, the temporal correlation of the video is not used, image processing is merely repeated frame by frame, and implementation efficiency is therefore low.
The embodiment of the invention is realized in such a way that a human eye video positioning method based on skin color detection comprises the following steps:
step 0: let t = 1, where t denotes the frame sequence number;
step 1: decoding a current video frame to obtain a decoded image;
step 2: setting a corresponding skin color identifier for each block in the current frame;
step 3: if the skin color identifiers of all blocks of the current frame are 0, go to Step 6; otherwise, go to Step 4;
step 4: searching a pending area of human eyes in the current frame, and setting a corresponding judgment mode;
step 5: positioning and marking human eyes according to a judging mode;
step 6: if the currently searched video has a next frame, let t = t + 1, set that next frame as the current frame, and go to Step 7; otherwise, end;
step 7: if there is no sbk_{t-1}(i, j) = 1, go to Step 8; otherwise, go to Step 10;
where sbk_{t-1}(i, j) denotes the eye identification parameter of block bk_{t-1}(i, j); bk_{t-1}(i, j) denotes the decoded block in row i, column j of pic_{t-1}; pic_{t-1} denotes the (t-1)-th frame of the video;
step 8: if pic_t is an intra-predicted frame, let tp_t = bkh × bkw; otherwise, compute tp_t = sum(sign(bk_t(i, j) | condition 2) | 1 ≤ i ≤ bkh and 1 ≤ j ≤ bkw);
where pic_t denotes the t-th frame of the video, also called the current frame; tp_t is the scene switching parameter; bkw and bkh denote the number of block columns and block rows of a frame after it is partitioned into blocks; sum(variable) denotes summation over the variable; condition 2 means that bk_t(i, j) is an intra-prediction block or contains at least one intra-prediction sub-block; bk_t(i, j) denotes the decoded block in row i, column j of pic_t;
step 9: if tp_t = 0, set all sbk_t(i, j) = 0 and go to Step 6; otherwise, if tp_t ≥ 0.9 × bkh × bkw, go to Step 1; otherwise, go to Step 10; where sbk_t(i, j) denotes the eye identification parameter of block bk_t(i, j);
step 10: if bk_t(i, j) is an intra-prediction block, decode the block and assign it to the skin color decision region; otherwise, assign it to the non-skin-color decision region;
step 11: setting a corresponding skin color identifier for each block in the skin color judging area;
step 12: for each block of the non-skin-color decision region, set the identifiers of the current block from the parameters of its reference block; then go to Step 4.
Another objective of an embodiment of the present invention is to provide an eye video positioning system based on skin color detection. The system comprises:
a frame sequence number initialization module, configured to set t = 1, where t denotes the frame sequence number;
the decoding module is used for decoding the current video frame and acquiring a decoded image;
the block skin color identifier setting module of the current frame is used for setting a corresponding skin color identifier for each block in the current frame;
Specifically: judge whether each block in the current frame is a skin color block using any published block-based skin color decision method; if bk_t(i, j) is judged to be a skin color block, set its skin color identifier to 1, i.e. note_t(i, j) = 1; otherwise set note_t(i, j) = 0;
where bk_t(i, j) denotes the decoded block in row i, column j of pic_t; bkw and bkh denote the number of block columns and block rows of the image; note_t(i, j) denotes the skin color identifier of the block in row i, column j of the current frame pic_t;
the skin color identifier judging module, configured to check whether the skin color identifiers of all blocks of the current frame are 0; if so, to enter the next-frame judgment processing module; otherwise, to enter the eye pending-region decision-mode setting device;
the eye pending-region decision-mode setting device, configured to search the current frame for the eye pending region and to set the corresponding decision mode;
the human eye positioning and marking device is used for positioning and marking human eyes according to the judging mode;
the next-frame judgment processing module, configured to judge whether the currently searched video has a next frame; if so, to let t = t + 1, set that next frame as the current frame, and enter the eye identification parameter judgment module; otherwise, to end;
the eye identification parameter judgment module, configured to enter the intra-predicted-frame judgment processing module if there is no sbk_{t-1}(i, j) = 1, and otherwise to enter the skin color / non-skin-color decision region dividing module;
where sbk_{t-1}(i, j) denotes the eye identification parameter of block bk_{t-1}(i, j); bk_{t-1}(i, j) denotes the decoded block in row i, column j of pic_{t-1}; pic_{t-1} denotes the (t-1)-th frame of the video;
the intra-predicted-frame judgment processing module, configured to let tp_t = bkh × bkw if pic_t is an intra-predicted frame, and otherwise to compute tp_t = sum(sign(bk_t(i, j) | condition 2) | 1 ≤ i ≤ bkh and 1 ≤ j ≤ bkw);
where condition 2 means that bk_t(i, j) is an intra-prediction block or contains at least one intra-prediction sub-block; tp_t is the scene switching parameter; pic_t denotes the t-th frame of the video, also called the current frame;
the scene switching parameter judgment processing module, configured to set all sbk_t(i, j) = 0 and enter the next-frame judgment processing module if tp_t = 0; otherwise, to enter the decoding module if tp_t ≥ 0.9 × bkh × bkw; and otherwise to enter the skin color / non-skin-color decision region dividing module;
the skin color / non-skin-color decision region dividing module, configured to decode block bk_t(i, j) and assign it to the skin color decision region if it is an intra-prediction block, and otherwise to assign it to the non-skin-color decision region;
the skin color identifier setting module is used for setting a corresponding skin color identifier for each block in the skin color judging area;
Specifically: judge whether each block in the skin color decision region is a skin color block using any published block-based skin color decision method; if bk_t(i, j) is judged to be a skin color block, set its skin color identifier to 1, i.e. note_t(i, j) = 1; otherwise set note_t(i, j) = 0;
the non-skin-color identifier setting module, configured to set the identifiers of each block of the non-skin-color decision region according to the parameters of its reference block, and then to enter the eye pending-region decision-mode setting device;
Specifically: if spbk_t(i, j) = 1, set sbk_t(i, j) = 1; otherwise set sbk_t(i, j) = 0; if snote_t(i, j) = 1, set note_t(i, j) = 1; otherwise set note_t(i, j) = 0;
where snote_t(i, j) denotes the skin color identification parameter of the reference block of bk_t(i, j); spbk_t(i, j) denotes the eye identification parameter of the reference block of bk_t(i, j).
Advantages of the invention
The invention provides a human eye video positioning method based on skin color detection. On the one hand, the method builds an eye positioning technique on skin color detection; on the other hand, it determines the eye positions of related image frames in the video from video compression-domain information, improving the timeliness of eye positioning in video.
Drawings
FIG. 1 is a flow chart of a method for locating human eye video based on skin color detection according to a preferred embodiment of the present invention;
FIG. 2 is a flowchart of the detailed method of Step4 in FIG. 1;
FIG. 3 is a flowchart of a detailed method of the side decision mode in Step5 in FIG. 1;
FIG. 4 is a block diagram of a human eye video positioning system based on skin tone detection in accordance with a preferred embodiment of the present invention;
FIG. 5 is a structural view of the eye pending-region decision-mode setting device in FIG. 4;
FIG. 6 is a structural view of the side judgment device in the eye positioning and marking device of FIG. 4.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and examples, and for convenience of description, only parts related to the examples of the present invention are shown. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the invention provides a human eye video positioning method and system based on skin color detection. On the one hand, the method builds an eye positioning technique on skin color detection; on the other hand, it determines the eye positions of related image frames in the video from video compression-domain information, improving the timeliness of eye positioning in video.
Example one
FIG. 1 is a flow chart of a method for locating human eye video based on skin color detection according to a preferred embodiment of the present invention; the method comprises the following steps:
Step 0: let t = 1, where pic_t denotes the t-th frame of the video (also called the current frame) and t denotes the frame sequence number.
Step 1: decode the current video frame to obtain a decoded image.
Step 2: setting a corresponding skin color identifier for each block in the current frame;
Specifically: judge whether each block in the current frame is a skin color block using any published block-based skin color decision method; if bk_t(i, j) is judged to be a skin color block, set its skin color identifier to 1, i.e. note_t(i, j) = 1; otherwise set note_t(i, j) = 0.
where bk_t(i, j) denotes the decoded block in row i, column j of pic_t (block sizes are, e.g., 16×16 in standards such as H.264 and up to 64×64 in HEVC; when a block is further divided, the smaller blocks are called sub-blocks); bkw and bkh denote the number of block columns and block rows after the frame is partitioned into blocks; note_t(i, j) denotes the skin color identifier of the block in row i, column j of the current frame pic_t.
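The block-wise skin color decision of Step 2 can be sketched as follows. The patent leaves the concrete skin color test to any published block-based method; the sketch below assumes a simple YCbCr chroma range (Cb in [77, 127], Cr in [133, 173]) and a 16×16 block size, so the thresholds and the function name `skin_flags` are illustrative assumptions, not part of the claimed method:

```python
import numpy as np

def skin_flags(cb, cr, bk=16, cb_rng=(77, 127), cr_rng=(133, 173)):
    """Step 2 sketch: set note_t(i, j) = 1 for every bk x bk block whose
    mean chroma falls inside an assumed YCbCr skin range, else 0."""
    bkh, bkw = cb.shape[0] // bk, cb.shape[1] // bk
    note = np.zeros((bkh, bkw), dtype=np.uint8)
    for i in range(bkh):
        for j in range(bkw):
            mcb = cb[i * bk:(i + 1) * bk, j * bk:(j + 1) * bk].mean()
            mcr = cr[i * bk:(i + 1) * bk, j * bk:(j + 1) * bk].mean()
            if cb_rng[0] <= mcb <= cb_rng[1] and cr_rng[0] <= mcr <= cr_rng[1]:
                note[i, j] = 1
    return note
```

Any other block-based skin test can be substituted without changing the rest of the method, since later steps only consume the 0/1 map note_t.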
Step 3: if the skin color identifiers of all the blocks of the current frame are 0, then Step6 is entered; otherwise, Step4 is entered.
Step 4: searching a pending area of human eyes in the current frame, and setting a corresponding judgment mode;
FIG. 2 is a flowchart of the detailed method of Step4 in FIG. 1; the method comprises the following steps:
Step 41: first search for a block satisfying the condition: note_t(i, j) = 0 and note_t(i-1, j) = 1 and note_t(i, j-1) = 1; if such a block exists, denote it sbk_t(is, js), called the eye start decision block, and go to Step 42; otherwise, go to Step 6.
where is and js are the row and column numbers of block sbk_t(is, js); note_t(i-1, j) denotes the skin color identifier of the block in row i-1, column j of the current frame pic_t; note_t(i, j-1) denotes the skin color identifier of the block in row i, column j-1 of the current frame pic_t;
Step 42: then search the row containing the eye start decision block for a block satisfying the condition: note_t(i, j) = 0 and note_t(i-1, j) = 1 and note_t(i, j+1) = 1; if such a block exists, denote it dbk_t(id, jd), called the eye stop decision block, and go to Step 43; otherwise, go to Step 44;
where id and jd are the row and column numbers of block dbk_t(id, jd); note_t(i, j+1) denotes the skin color identifier of the block in row i, column j+1 of the current frame pic_t;
Step 43: first fuse the pending regions, i.e. merge the adjacent non-skin-color blocks of the eye start decision block into the first eye pending region and merge the adjacent non-skin-color blocks of the eye stop decision block into the second eye pending region, and set the decision mode to the front decision mode; then go to Step 5;
Step 44: first fuse the pending region, i.e. merge the adjacent non-skin-color blocks of the eye start decision block into the first eye pending region, and set the decision mode to the side decision mode; then go to Step 5;
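Steps 41-44 above can be sketched as a scan over the note_t map. The function below is a minimal illustration under the assumption that note_t is a 0/1 numpy array; the name `find_pending_region` and the string return values for the decision mode are invented for the sketch, and the region fusion of Steps 43-44 is omitted:

```python
import numpy as np

def find_pending_region(note):
    """Scan the note_t map for the eye start decision block (Step 41) and,
    on its row, the eye stop decision block (Step 42); choose the front or
    side decision mode accordingly (Steps 43-44)."""
    bkh, bkw = note.shape
    start = None
    for i in range(1, bkh):
        for j in range(1, bkw):
            # Step 41: non-skin block whose upper and left neighbours are skin
            if note[i, j] == 0 and note[i - 1, j] == 1 and note[i, j - 1] == 1:
                start = (i, j)
                break
        if start is not None:
            break
    if start is None:
        return None, None, None              # no pending region: go to Step 6
    i = start[0]
    for j in range(start[1], bkw - 1):
        # Step 42: non-skin block whose upper and right neighbours are skin
        if note[i, j] == 0 and note[i - 1, j] == 1 and note[i, j + 1] == 1:
            return start, (i, j), "front"    # Step 43: front decision mode
    return start, None, "side"               # Step 44: side decision mode
```

The intuition is that an eye region is a hole of non-skin blocks surrounded by skin blocks; finding both a start and a stop block on one row suggests a frontal face with two such holes.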
Step 5: perform eye positioning and marking according to the decision mode.
Side judgment mode:
FIG. 3 is a flowchart of a detailed method of the side decision mode in Step 5 in FIG. 1; the method comprises the following steps:
Step C1: compute the luminance value distribution of the first eye pending region:
p(k) = sum(sign(y_t(m, n) = k | y_t(m, n) ∈ first eye pending region)).
where p(k) is the count of luminance value k; sum(variable) denotes summation over the variable; y_t(m, n) denotes the luminance value at row m, column n of pic_t;
Step C2: find the maximum and the second maximum of the luminance value distribution of the first eye pending region, together with the corresponding luminance values:
perk1(k) = max(p(k)), k_max1 = arg(k | perk1(k)),
perk2(k) = max(p(k) | p(k) ≠ perk1(k)), k_max2 = arg(k | perk2(k)).
where perk1(k) and k_max1 denote the maximum of the luminance value distribution and the luminance value at which it is attained; perk2(k) and k_max2 denote the second maximum of the distribution and the luminance value at which it is attained; k_max1 = arg(k | perk1(k)) means that perk1(k) is obtained first and the corresponding value of k is then assigned to k_max1, and likewise for k_max2; max(variable | condition) denotes the maximum of a variable satisfying the condition, and max(variable) denotes the maximum of the variable.
Step C3: if abs(k_max1 - k_max2) > Thres, the first eye pending region is judged to be an eye and all blocks in the region are marked as eye; otherwise, the first region is marked as non-eye.
That is, sbk_t(i, j) = sign(bk_t(i, j) | eye identification condition),
where the eye identification condition is: abs(k_max1 - k_max2) > Thres and bk_t(i, j) ∈ first eye pending region; sbk_t(i, j) denotes the eye identification parameter of block bk_t(i, j); abs(variable) denotes the absolute value of a variable; Thres is the decision threshold.
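Steps C1-C3 amount to a luminance histogram test: an eye region mixes dark pupil and iris pixels with bright sclera pixels, so its two most frequent luminance values lie far apart. The sketch below assumes 8-bit luminance and an illustrative threshold Thres = 40 (the patent does not fix a value); it also picks the second maximum by excluding the most frequent bin, a slight simplification of the perk2 definition above:

```python
import numpy as np

def side_decision(y_region, thres=40):
    """Steps C1-C3 sketch: histogram the region's 8-bit luminance values,
    take the two most frequent values, and declare 'eye' when they are far
    apart (dark pupil vs. bright sclera)."""
    p = np.bincount(y_region.ravel().astype(np.int64), minlength=256)  # Step C1
    kmax1 = int(np.argmax(p))        # Step C2: most frequent luminance
    p2 = p.copy()
    p2[kmax1] = -1
    kmax2 = int(np.argmax(p2))       # second most frequent luminance
    return abs(kmax1 - kmax2) > thres    # Step C3
```

When the test passes, every block of the first eye pending region receives sbk_t(i, j) = 1; otherwise the region is marked non-eye.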
Front decision mode: perform one side decision on the first eye pending region and one on the second eye pending region, and mark the corresponding results.
Step 6: if the currently searched video has a next frame, let t = t + 1, set that next frame as the current frame, and go to Step 7; otherwise, end.
Step 7: if there is no sbk_{t-1}(i, j) = 1, go to Step 8; otherwise, go to Step 10.
where sbk_{t-1}(i, j) denotes the eye identification parameter of block bk_{t-1}(i, j); bk_{t-1}(i, j) denotes the decoded block in row i, column j of pic_{t-1}; pic_{t-1} denotes the (t-1)-th frame of the video;
step 8: if pictFor intra-predicted frames, let tptBkh × bkw; otherwise, calculate tpt=sum(sign(bkt(i, j) | Condition 2) |1 ≦ i ≦ bkh and 1 ≦ j ≦ bkw).
Wherein, condition 2 represents: bkt(i, j) is an intra prediction block or at least comprises one intra prediction sub-block; tptA scene change parameter.
Step 9: if tp_t = 0, set all sbk_t(i, j) = 0 and go to Step 6; otherwise, if tp_t ≥ 0.9 × bkh × bkw, go to Step 1; otherwise, go to Step 10.
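Steps 8-9 can be sketched as one decision function over the compression-domain statistics. Here `block_has_intra(i, j)` stands in for the condition-2 test (whether block (i, j) is intra-coded or contains an intra sub-block) and is an assumed callback, not part of the patent; the returned strings merely name the branch taken:

```python
def scene_switch(frame_is_intra, block_has_intra, bkh, bkw, ratio=0.9):
    """Steps 8-9 sketch: compute the scene switching parameter tp_t from
    the number of intra-coded blocks, then pick the next branch."""
    if frame_is_intra:
        tp = bkh * bkw                       # intra frame: treat as full refresh
    else:
        tp = sum(1 for i in range(1, bkh + 1)
                   for j in range(1, bkw + 1)
                   if block_has_intra(i, j))
    if tp == 0:
        return "clear flags, next frame"     # Step 6 branch: nothing changed
    if tp >= ratio * bkh * bkw:
        return "re-run full detection"       # Step 1 branch: scene switch
    return "partial update"                  # Step 10 branch: local refresh
```

The idea is that many intra blocks signal a scene change (redo full detection), no intra blocks signal a static scene (reuse previous results), and a few intra blocks call for a partial update.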
Step 10: if bk_t(i, j) is an intra-prediction block, decode the block and assign it to the skin color decision region; otherwise, assign it to the non-skin-color decision region.
Step 11: setting a corresponding skin color identifier for each block in the skin color judging area;
the method specifically comprises the following steps: judging each skin color in the skin color judging area by using the skin color judging method which is disclosed in the industry and takes blocks as unitsWhether a block is a skin tone block, if bkt(i, j) if the skin color block is determined, setting the skin color identifier of the block to be 1, namely notet(i, j) ═ 1; otherwise, note is sett(i,j)=0。
Step 12: for each block of the non-skin-color decision region, set the identifiers of the current block from the parameters of its reference block; then go to Step 4.
Specifically: if spbk_t(i, j) = 1, set sbk_t(i, j) = 1; otherwise set sbk_t(i, j) = 0; if snote_t(i, j) = 1, set note_t(i, j) = 1; otherwise set note_t(i, j) = 0.
where snote_t(i, j) denotes the skin color identification parameter of the reference block of bk_t(i, j); spbk_t(i, j) denotes the eye identification parameter of the reference block of bk_t(i, j).
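The reference-block inheritance of Step 12 is a direct copy of two flags, which is what makes the non-skin-color decision region cheap: no pixel decoding is needed for those blocks. A minimal sketch (the function name is assumed):

```python
def inherit_flags(spbk, snote):
    """Step 12 sketch: a block of the non-skin-color decision region copies
    the eye flag sbk_t and skin flag note_t of its reference block (0 or 1)."""
    sbk = 1 if spbk == 1 else 0
    note = 1 if snote == 1 else 0
    return sbk, note
```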
Example two
FIG. 4 is a block diagram of a human eye video positioning system based on skin tone detection in accordance with a preferred embodiment of the present invention; the system comprises:
a frame sequence number initialization module, configured to set t = 1, where pic_t denotes the t-th frame of the video (also called the current frame) and t denotes the frame sequence number;
the decoding module is used for decoding the current video frame and acquiring a decoded image;
the block skin color identifier setting module of the current frame is used for setting a corresponding skin color identifier for each block in the current frame;
Specifically: judge whether each block in the current frame is a skin color block using any published block-based skin color decision method; if bk_t(i, j) is judged to be a skin color block, set its skin color identifier to 1, i.e. note_t(i, j) = 1; otherwise set note_t(i, j) = 0.
where bk_t(i, j) denotes the decoded block in row i, column j of pic_t (block sizes are, e.g., 16×16 in standards such as H.264 and up to 64×64 in HEVC; when a block is further divided, the smaller blocks are called sub-blocks); bkw and bkh denote the number of block columns and block rows after the frame is partitioned into blocks; note_t(i, j) denotes the skin color identifier of the block in row i, column j of the current frame pic_t.
The skin color identifier judging module is configured to check whether the skin color identifiers of all blocks of the current frame are 0; if so, to enter the next-frame judgment processing module; otherwise, to enter the eye pending-region decision-mode setting device.
The eye pending-region decision-mode setting device is configured to search the current frame for the eye pending region and to set the corresponding decision mode;
and the human eye positioning and marking device is used for positioning and marking human eyes according to the judging mode.
The next-frame judgment processing module is configured to judge whether the currently searched video has a next frame; if so, to let t = t + 1, set that next frame as the current frame, and enter the eye identification parameter judgment module; otherwise, to end.
The eye identification parameter judgment module is configured to enter the intra-predicted-frame judgment processing module if there is no sbk_{t-1}(i, j) = 1, and otherwise to enter the skin color / non-skin-color decision region dividing module.
where sbk_{t-1}(i, j) denotes the eye identification parameter of block bk_{t-1}(i, j); bk_{t-1}(i, j) denotes the decoded block in row i, column j of pic_{t-1}; pic_{t-1} denotes the (t-1)-th frame of the video;
an intra-frame prediction frame judgment processing module for judging if pictFor intra-predicted frames, let tptBkh × bkw; otherwise, calculate tpt=sum(sign(bkt(i, j) | Condition 2) |1 ≦ i ≦ bkh and 1 ≦ j ≦ bkw).
Wherein, condition 2 represents: bkt(i, j) is an intra prediction block or at least comprises one intra prediction sub-block; tptA scene change parameter.
The scene switching parameter judgment processing module is configured to set all sbk_t(i, j) = 0 and enter the next-frame judgment processing module if tp_t = 0; otherwise, to enter the decoding module if tp_t ≥ 0.9 × bkh × bkw; and otherwise to enter the skin color / non-skin-color decision region dividing module.
The skin color / non-skin-color decision region dividing module is configured to decode block bk_t(i, j) and assign it to the skin color decision region if it is an intra-prediction block, and otherwise to assign it to the non-skin-color decision region.
The skin color identifier setting module is used for setting a corresponding skin color identifier for each block in the skin color judging area;
Specifically: judge whether each block in the skin color decision region is a skin color block using any published block-based skin color decision method; if bk_t(i, j) is judged to be a skin color block, set its skin color identifier to 1, i.e. note_t(i, j) = 1; otherwise set note_t(i, j) = 0.
The non-skin-color identifier setting module is configured to set the identifiers of each block of the non-skin-color decision region according to the parameters of its reference block, and then to enter the eye pending-region decision-mode setting device.
Specifically: if spbk_t(i, j) = 1, set sbk_t(i, j) = 1; otherwise set sbk_t(i, j) = 0; if snote_t(i, j) = 1, set note_t(i, j) = 1; otherwise set note_t(i, j) = 0.
where snote_t(i, j) denotes the skin color identification parameter of the reference block of bk_t(i, j); spbk_t(i, j) denotes the eye identification parameter of the reference block of bk_t(i, j).
Further, FIG. 5 is a structural view of the eye pending-region decision-mode setting device of FIG. 4; the device comprises:
the human eye starting decision block searching and judging module is used for searching whether the conditions are met: note (r) notet(i, j) ═ 0 and notet(i-1, j) ═ 1 and notetIf a block is 1 (i, j-1), the block is first designated as sbkt(is, js) called a human eye initiation decision block, and then entering a human eye termination decision block search judgment module; if not, entering the next frame judgment processing module.
Wherein is and js are respectively the block sbktThe row and column numbers of (is, js); note (r) notet(i-1, j) represents the current frame pictLine i-1, block j; note (r) notet(i, j-1) represents the current frame pictThe skin tone identifier of the ith row of (1) block j;
a human eye termination decision block search judging module for searching whether a line where the human eye termination decision block exists satisfies a condition: note (r) notet(i, j) ═ 0 and notet(i-1, j) ═ 1 and notetIf a block is 1 (i, j +1), the block is first described as dbkt(id, jd), called the human eye suspension decision block, and then entering a front determination mode setting module, if not, entering a side determination mode setting module;
wherein id and jd are blocks dbk respectivelytThe row and column numbers of (id, jd); note (r) notet(i, j +1) ═ 1 denotes the current frame pictThe skin tone identifier of the ith row of (1) th block;
the front decision mode setting module, configured to first fuse the pending regions, i.e. merge the adjacent non-skin-color blocks of the eye start decision block into the first eye pending region and merge the adjacent non-skin-color blocks of the eye stop decision block into the second eye pending region, and then to set the decision mode to the front decision mode; then enter the eye positioning and marking device;
the side decision mode setting module, configured to first fuse the pending region, i.e. merge the adjacent non-skin-color blocks of the eye start decision block into the first eye pending region, and then to set the decision mode to the side decision mode; then enter the eye positioning and marking device;
further, the human eye positioning and marking device comprises a side judgment device and a front judgment device;
FIG. 6 is a structural view of the side judgment device in the eye positioning and marking device of FIG. 4. The side judgment device comprises:
a module for calculating the brightness value distribution of the first region to be determined for human eyes
Luminance value distribution p (k) sum (sign (y)t(m,n)=k|yt(m, n) ∈ human eye defines a first region)).
Wherein p (k) identifies the distribution of luminance values k; sum (variable) denotes summing the variables; y ist(m, n) denotes pictThe luminance value of the mth row and the nth column;
a luminance value acquisition module, configured to find the maximum and the second maximum of the luminance value distribution of the first eye pending region, together with the corresponding luminance values:
perk1(k) = max(p(k)), k_max1 = arg(k | perk1(k)),
perk2(k) = max(p(k) | p(k) ≠ perk1(k)), k_max2 = arg(k | perk2(k)).
where perk1(k) and k_max1 denote the maximum of the luminance value distribution and the luminance value at which it is attained; perk2(k) and k_max2 denote the second maximum of the distribution and the luminance value at which it is attained; k_max1 = arg(k | perk1(k)) means that perk1(k) is obtained first and the corresponding value of k is then assigned to k_max1, and likewise for k_max2; max(variable | condition) denotes the maximum of a variable satisfying the condition, and max(variable) denotes the maximum of the variable.
a first eye identification module, configured to judge that the first eye pending region is an eye and mark all blocks in the region as eye if abs(k_max1 - k_max2) > Thres, and otherwise to mark the first region as non-eye.
That is, sbk_t(i, j) = sign(bk_t(i, j) | eye identification condition),
where the eye identification condition is: abs(k_max1 - k_max2) > Thres and bk_t(i, j) ∈ first eye pending region;
sbk_t(i, j) denotes the eye identification parameter of block bk_t(i, j); abs(variable) denotes the absolute value of a variable.
The front judging device is used for respectively carrying out primary side judgment on the first region to be determined by human eyes and the second region to be determined by human eyes and marking corresponding results.
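As a rough illustration, the side-judgment test described above (a region is called an eye when its two dominant luminance peaks are far apart) can be sketched in Python; `THRES` stands in for the unspecified threshold Thres, and the function name is ours, not the patent's:

```python
# Minimal sketch of the side-judgment test, under assumptions:
# luma values are integers, and THRES = 30 is an arbitrary choice
# (the patent leaves Thres unspecified).
from collections import Counter

THRES = 30  # hypothetical value of the patent's Thres

def side_judgment(region_luma):
    """Return True if the pending region is judged to be a human eye.

    region_luma: iterable of luminance values y_t(m, n) inside the
    first pending human-eye region.
    """
    p = Counter(region_luma)              # p(k): histogram of luma values
    if len(p) < 2:
        return False                      # no second peak to compare
    # perk1/k_max1 and perk2/k_max2: the two largest counts and the
    # luma values at which they occur.
    (k_max1, _), (k_max2, _) = p.most_common(2)
    return abs(k_max1 - k_max2) > THRES   # eye iff the peaks are far apart

# Example: a dark pupil (~40) against bright sclera (~200).
print(side_judgment([40] * 50 + [200] * 30 + [120] * 5))  # -> True
```

The intuition behind the test: an eye region mixes a dark pupil/iris with a bright sclera, so its two luminance peaks are widely separated, while a flat non-eye patch has peaks close together.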
It will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be implemented by program instructions controlling the relevant hardware, and the program may be stored in a computer-readable storage medium, such as a ROM, a RAM, a magnetic disk, or an optical disk.
The above description covers only preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalent substitutions, and improvements made within the spirit and principle of the present invention are intended to fall within the scope of the present invention.
Claims (7)
1. A human eye video positioning method based on skin color detection is characterized by comprising the following steps:
step 0: let t equal to 1, t represents a frame sequence number;
step 1: decoding a current video frame to obtain a decoded image;
step 2: setting a corresponding skin color identifier for each block in the current frame;
step 3: judging whether the skin color identifiers of all the blocks of the current frame are 0; if so, entering Step 6; otherwise, entering Step 4;
step 4: searching a pending area of human eyes in the current frame, and setting a corresponding judgment mode;
step 5: positioning and marking human eyes according to a judging mode;
step 6: if the next frame of the current search video exists, making t equal to t +1, setting the next frame of the current search video as the current frame of the current search video, and then entering Step 7; otherwise, ending;
step 7: if there is no sbk_{t-1}(i,j) = 1, entering Step 8; otherwise, entering Step 10;
wherein sbk_{t-1}(i,j) denotes the human-eye identification parameter of block bk_{t-1}(i,j); bk_{t-1}(i,j) denotes the decoding block at row i, column j of pic_{t-1}; pic_{t-1} denotes the (t-1)-th frame of the video;
step 8: if pic_t is an intra-predicted frame, letting tp_t = bkh × bkw; otherwise, calculating tp_t = sum(sign(bk_t(i,j) | condition 2) | 1 ≤ i ≤ bkh and 1 ≤ j ≤ bkw);
wherein pic_t denotes the t-th frame of the video, also called the current frame; tp_t is the scene switching parameter; bkw and bkh respectively denote the number of columns and rows of blocks after a frame of image is divided into blocks; sum(variable) denotes summing the variable; condition 2 denotes: bk_t(i,j) is an intra-prediction block or contains at least one intra-prediction sub-block; bk_t(i,j) denotes the decoding block at row i, column j of pic_t;
step 9: if tp_t = 0, first setting all sbk_t(i,j) = 0, then entering Step 6; otherwise, if tp_t ≥ 0.9 × bkh × bkw, entering Step 1; otherwise, entering Step 10; wherein sbk_t(i,j) denotes the human-eye identification parameter of block bk_t(i,j);
step 10: if bk_t(i,j) is an intra-prediction block, decoding the block and then delimiting it into the skin color judgment area; otherwise, delimiting it into the non-skin color judgment area;
step 11: setting a corresponding skin color identifier for each block in the skin color judging area;
step 12: for a block of the non-skin color judgment area, identifying the current block according to the parameters of the reference block; then entering Step 4;
the positioning and marking of human eyes according to the judging mode are specifically as follows:
side judgment mode:
step C1: calculating the luminance value distribution of the first pending human-eye region:
p(k) = sum(sign(y_t(m,n) = k | y_t(m,n) ∈ first pending human-eye region));
wherein p(k) denotes the distribution of the luminance value k; sum(variable) denotes summing the variable; y_t(m,n) denotes the luminance value at row m, column n of pic_t;
step C2: finding the maximum and the second maximum of the luminance value distribution of the first pending human-eye region, and the corresponding luminance values:
perk1(k) = max(p(k)), k_max1 = arg(k | perk1(k)),
perk2(k) = max(p(k) | p(k) ≠ perk1(k)), k_max2 = arg(k | perk2(k));
wherein perk1(k) and k_max1 respectively denote the maximum of the luminance value distribution and the luminance value at which it is attained; perk2(k) and k_max2 respectively denote the second maximum of the luminance value distribution and the luminance value at which it is attained; k_max1 = arg(k | perk1(k)) means that perk1(k) is obtained first, and the value of k for which p(k) = perk1(k) is then assigned to k_max1; similarly, k_max2 = arg(k | perk2(k)) means that perk2(k) is obtained first, and the value of k for which p(k) = perk2(k) is then assigned to k_max2; max(variable | condition) denotes the maximum of the variable over the values satisfying the condition, and max(variable) denotes the maximum of the variable;
step C3: if abs(k_max1 - k_max2) > Thres, judging the first pending human-eye region to be a human eye and marking all blocks in the region as human eye; otherwise, marking the first pending region as non-human-eye;
that is, sbk_t(i,j) = sign(bk_t(i,j) | human-eye identification condition),
wherein the human-eye identification condition is: abs(k_max1 - k_max2) > Thres and bk_t(i,j) ∈ first pending human-eye region; sbk_t(i,j) denotes the human-eye identification parameter of block bk_t(i,j); abs(variable) denotes the absolute value of the variable;
front judgment mode: performing the side judgment once each on the first pending human-eye region and the second pending human-eye region, and marking the corresponding results.
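Stripped of codec detail, the scene-switch logic of steps 8-9 reduces to a three-way decision on the scene switching parameter tp_t; a hedged Python sketch (the function name and return labels are ours, not the patent's):

```python
# Sketch of steps 8-9: decide how to treat the current frame from the
# number of intra-coded blocks. The mapping of outcomes to labels is an
# illustrative assumption; the patent expresses them as jumps to steps.

def scene_switch_action(is_intra_frame, intra_block_count, bkh, bkw):
    """Classify the frame: reuse marks, fully re-detect, or partially update."""
    # Step 8: tp_t = bkh*bkw for an intra-predicted frame, otherwise the
    # count of blocks that are (or contain) intra-prediction blocks.
    tp = bkh * bkw if is_intra_frame else intra_block_count
    # Step 9:
    if tp == 0:
        return "clear_marks_next_frame"   # set all sbk_t(i,j) = 0, go to step 6
    if tp >= 0.9 * bkh * bkw:
        return "full_redetection"         # likely scene change: restart at step 1
    return "partial_update"               # step 10: re-examine intra blocks only
```

The 0.9 · bkh · bkw threshold comes straight from step 9: when almost every block is intra-coded, the frame is treated as a scene change and detection restarts from scratch.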
2. The method for human eye video localization based on skin color detection as claimed in claim 1,
said setting a corresponding skin tone identifier for each block in the current frame,
the method specifically comprises: using a block-based skin color judgment method to judge whether each block in the current frame is a skin color block; that is, if bk_t(i,j) is judged to be a skin color block, setting the skin color identifier of the block to 1, i.e. note_t(i,j) = 1; otherwise, setting note_t(i,j) = 0;
wherein note_t(i,j) denotes the skin color identifier of the block at row i, column j of the current frame pic_t.
3. The method for human eye video localization based on skin color detection as claimed in claim 1,
the method for searching the undetermined area of the human eyes in the current frame and setting the corresponding judgment mode specifically comprises the following steps:
step 41: first searching for a block satisfying the condition: note_t(i,j) = 0 and note_t(i-1,j) = 1 and note_t(i,j-1) = 1; if such a block exists, first denoting it as sbk_t(is,js), called the human eye start decision block, and then entering Step 42; if not, entering Step 6;
wherein is and js are respectively the row and column numbers of block sbk_t(is,js); note_t(i-1,j) denotes the skin color identifier of the block at row i-1, column j of the current frame pic_t; note_t(i,j-1) denotes the skin color identifier of the block at row i, column j-1 of the current frame pic_t;
step 42: then searching, in the row of the human eye start decision block, for a block satisfying the condition: note_t(i,j) = 0 and note_t(i-1,j) = 1 and note_t(i,j+1) = 1; if such a block exists, first denoting it as dbk_t(id,jd), called the human eye stop decision block, and then entering Step 43; if not, entering Step 44;
wherein id and jd are respectively the row and column numbers of block dbk_t(id,jd); note_t(i,j+1) = 1 denotes that the skin color identifier of the block at row i, column j+1 of the current frame pic_t is 1;
step 43: first performing fusion of the pending regions, namely merging the adjacent non-skin-color blocks of the human eye start decision block into the first pending human-eye region and merging the adjacent non-skin-color blocks of the human eye stop decision block into the second pending human-eye region, and setting the judging mode to the front judgment mode; then entering Step 5;
step 44: first performing fusion of the pending regions, namely merging the adjacent non-skin-color blocks of the human eye start decision block into the first pending human-eye region, and then setting the judging mode to the side judgment mode; then entering Step 5.
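The start/stop search of steps 41-42 scans the skin-color identifier map for a non-skin block whose upper and left neighbours (start) or upper and right neighbours (stop) are skin blocks. A small illustrative sketch; `note_map` indexing and the function names are our assumptions, not the patent's:

```python
# Sketch of the pending-region search conditions of steps 41-42.
# note_map: list of lists of 0/1 skin-color identifiers, indexed [i][j].

def find_start_block(note_map):
    """First block with note(i,j)=0, note(i-1,j)=1, note(i,j-1)=1, or None."""
    for i in range(1, len(note_map)):
        for j in range(1, len(note_map[i])):
            if (note_map[i][j] == 0
                    and note_map[i - 1][j] == 1
                    and note_map[i][j - 1] == 1):
                return (i, j)
    return None

def find_stop_block(note_map, row):
    """On the start block's row: note(i,j)=0, note(i-1,j)=1, note(i,j+1)=1."""
    i = row
    for j in range(len(note_map[i]) - 1):
        if (note_map[i][j] == 0
                and note_map[i - 1][j] == 1
                and note_map[i][j + 1] == 1):
            return (i, j)
    return None
```

The idea is that an eye shows up as a small non-skin hole surrounded by skin-color blocks; finding a start block alone triggers the side judgment mode, while finding both a start and a stop block triggers the front judgment mode.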
4. The method for human eye video localization based on skin color detection as claimed in claim 1,
the setting of the corresponding skin color identifier for each block in the skin color judgment area specifically comprises:
using a block-based skin color judgment method to judge whether each block in the skin color judgment area is a skin color block; if bk_t(i,j) is judged to be a skin color block, setting the skin color identifier of the block to 1, i.e. note_t(i,j) = 1; otherwise, setting note_t(i,j) = 0;
wherein note_t(i,j) denotes the skin color identifier of the block at row i, column j of the current frame pic_t.
5. The method for human eye video localization based on skin color detection as claimed in claim 1,
the identifying of the current block according to the parameters of the reference block, for blocks of the non-skin color judgment area, specifically comprises:
if spbk_t(i,j) = 1, setting sbk_t(i,j) = 1; otherwise, setting sbk_t(i,j) = 0;
if snote_t(i,j) = 1, setting note_t(i,j) = 1; otherwise, setting note_t(i,j) = 0;
wherein snote_t(i,j) denotes the skin color identification parameter of the reference block of bk_t(i,j); spbk_t(i,j) denotes the human-eye identification parameter of the reference block of bk_t(i,j); note_t(i,j) denotes the skin color identifier of the block at row i, column j of the current frame pic_t.
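The reference-block rule above is just a two-flag copy from the reference block to the current block; a minimal sketch with illustrative names:

```python
# Sketch of claim 5's copy-from-reference rule. Parameter and function
# names are ours; spbk/snote are the reference block's eye and skin flags.

def propagate_from_reference(spbk, snote):
    """Return (sbk, note) for the current block, copied from its reference."""
    sbk = 1 if spbk == 1 else 0
    note = 1 if snote == 1 else 0
    return sbk, note
```

This is what lets inter-predicted blocks inherit eye/skin labels without being decoded, which is the source of the method's speed on non-intra frames.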
6. A human eye video positioning system based on skin color detection, the system comprising:
a frame serial number initialization module, configured to make t equal to 1, where t represents a frame serial number;
the decoding module is used for decoding the current video frame and acquiring a decoded image;
the block skin color identifier setting module of the current frame is used for setting a corresponding skin color identifier for each block in the current frame;
the method specifically comprises: using a block-based skin color judgment method to judge whether each block in the current frame is a skin color block; that is, if bk_t(i,j) is judged to be a skin color block, setting the skin color identifier of the block to 1, i.e. note_t(i,j) = 1; otherwise, setting note_t(i,j) = 0;
wherein bk_t(i,j) denotes the decoding block at row i, column j of pic_t; bkw and bkh respectively denote the number of columns and rows of blocks after a frame of image is divided into blocks; note_t(i,j) denotes the skin color identifier of the block at row i, column j of the current frame pic_t;
the skin color identifier judging module is used for judging whether the skin color identifiers of all the blocks of the current frame are 0; if so, entering the next frame judgment processing module; otherwise, entering the pending human-eye region judging mode setting device;
the pending human-eye region judging mode setting device is used for searching for a pending human-eye region in the current frame and setting a corresponding judging mode;
the human eye positioning and marking device is used for positioning and marking human eyes according to the judging mode;
the next frame judgment processing module is used for judging whether a next frame of the current search video exists or not, if so, making t equal to t +1, setting the next frame of the current search video as the current frame of the current search video, and then entering the human eye identification parameter judgment module; otherwise, ending;
the human eye identification parameter judgment module is used for judging: if there is no sbk_{t-1}(i,j) = 1, entering the intra-predicted frame judgment processing module; otherwise, entering the skin color and non-skin color judgment area dividing module;
wherein sbk_{t-1}(i,j) denotes the human-eye identification parameter of block bk_{t-1}(i,j); bk_{t-1}(i,j) denotes the decoding block at row i, column j of pic_{t-1}; pic_{t-1} denotes the (t-1)-th frame of the video;
the intra-predicted frame judgment processing module is used for judging: if pic_t is an intra-predicted frame, letting tp_t = bkh × bkw; otherwise, calculating tp_t = sum(sign(bk_t(i,j) | condition 2) | 1 ≤ i ≤ bkh and 1 ≤ j ≤ bkw);
wherein sum(variable) denotes summing the variable; condition 2 denotes: bk_t(i,j) is an intra-prediction block or contains at least one intra-prediction sub-block; tp_t is the scene switching parameter; pic_t denotes the t-th frame of the video, also called the current frame;
the scene switching parameter judgment processing module is used for judging: if tp_t = 0, first setting all sbk_t(i,j) = 0 and then entering the next frame judgment processing module; otherwise, if tp_t ≥ 0.9 × bkh × bkw, entering the decoding module; otherwise, entering the skin color and non-skin color judgment area dividing module;
the skin color and non-skin color judgment area dividing module is used for judging: if bk_t(i,j) is an intra-prediction block, decoding the block and then delimiting it into the skin color judgment area; otherwise, delimiting it into the non-skin color judgment area;
the skin color identifier setting module is used for setting a corresponding skin color identifier for each block in the skin color judging area;
the method specifically comprises: using a block-based skin color judgment method to judge whether each block in the skin color judgment area is a skin color block; if bk_t(i,j) is judged to be a skin color block, setting the skin color identifier of the block to 1, i.e. note_t(i,j) = 1; otherwise, setting note_t(i,j) = 0;
A non-skin color identifier setting module, configured to identify a current block for a block of a non-skin color determination region according to a parameter of a reference block; then entering a judging mode setting device of a pending area of human eyes;
the method specifically comprises: if spbk_t(i,j) = 1, setting sbk_t(i,j) = 1; otherwise, setting sbk_t(i,j) = 0; if snote_t(i,j) = 1, setting note_t(i,j) = 1; otherwise, setting note_t(i,j) = 0; wherein snote_t(i,j) denotes the skin color identification parameter of the reference block of bk_t(i,j); spbk_t(i,j) denotes the human-eye identification parameter of the reference block of bk_t(i,j);
the human eye positioning and marking device comprises a side judgment device and a front judgment device;
the side judgment device includes:
a luminance value distribution calculation module for the first pending human-eye region, configured to calculate the luminance value distribution p(k) = sum(sign(y_t(m,n) = k | y_t(m,n) ∈ first pending human-eye region));
wherein p(k) denotes the distribution of the luminance value k; sum(variable) denotes summing the variable; y_t(m,n) denotes the luminance value at row m, column n of pic_t;
a luminance value acquisition module, configured to find the maximum and the second maximum of the luminance value distribution of the first pending human-eye region, and the corresponding luminance values:
perk1(k) = max(p(k)), k_max1 = arg(k | perk1(k)),
perk2(k) = max(p(k) | p(k) ≠ perk1(k)), k_max2 = arg(k | perk2(k));
wherein perk1(k) and k_max1 respectively denote the maximum of the luminance value distribution and the luminance value at which it is attained; perk2(k) and k_max2 respectively denote the second maximum of the luminance value distribution and the luminance value at which it is attained; k_max1 = arg(k | perk1(k)) means that perk1(k) is obtained first, and the value of k for which p(k) = perk1(k) is then assigned to k_max1; similarly, k_max2 = arg(k | perk2(k)) means that perk2(k) is obtained first, and the value of k for which p(k) = perk2(k) is then assigned to k_max2; max(variable | condition) denotes the maximum of the variable over the values satisfying the condition, and max(variable) denotes the maximum of the variable;
a first human-eye identification module, configured to judge: if abs(k_max1 - k_max2) > Thres, the first pending human-eye region is judged to be a human eye and all blocks in the region are marked as human eye; otherwise, the first pending region is marked as non-human-eye;
that is, sbk_t(i,j) = sign(bk_t(i,j) | human-eye identification condition),
wherein the human-eye identification condition is: abs(k_max1 - k_max2) > Thres and bk_t(i,j) ∈ first pending human-eye region; sbk_t(i,j) denotes the human-eye identification parameter of block bk_t(i,j); abs(variable) denotes the absolute value of the variable;
the front judgment device is used for performing the side judgment once each on the first pending human-eye region and the second pending human-eye region, and marking the corresponding results.
7. The human eye video location system based on skin color detection of claim 6,
the human eye undetermined area determination mode setting device comprises:
the human eye start decision block search judging module is used for searching for a block satisfying the condition: note_t(i,j) = 0 and note_t(i-1,j) = 1 and note_t(i,j-1) = 1; if such a block exists, first denoting it as sbk_t(is,js), called the human eye start decision block, and then entering the human eye stop decision block search judging module; if not, entering the next frame judgment processing module;
wherein is and js are respectively the row and column numbers of block sbk_t(is,js); note_t(i-1,j) denotes the skin color identifier of the block at row i-1, column j of the current frame pic_t; note_t(i,j-1) denotes the skin color identifier of the block at row i, column j-1 of the current frame pic_t;
the human eye stop decision block search judging module is used for searching, in the row of the human eye start decision block, for a block satisfying the condition: note_t(i,j) = 0 and note_t(i-1,j) = 1 and note_t(i,j+1) = 1; if such a block exists, first denoting it as dbk_t(id,jd), called the human eye stop decision block, and then entering the front judgment mode setting module; if not, entering the side judgment mode setting module; wherein id and jd are respectively the row and column numbers of block dbk_t(id,jd); note_t(i,j+1) = 1 denotes that the skin color identifier of the block at row i, column j+1 of the current frame pic_t is 1;
the front judgment mode setting module is used for first performing fusion of the pending regions, namely merging the adjacent non-skin-color blocks of the human eye start decision block into the first pending human-eye region and merging the adjacent non-skin-color blocks of the human eye stop decision block into the second pending human-eye region, and then setting the judging mode to the front judgment mode; then entering the human eye positioning and marking device;
the side judgment mode setting module is used for first performing fusion of the pending regions, namely merging the adjacent non-skin-color blocks of the human eye start decision block into the first pending human-eye region, and then setting the judging mode to the side judgment mode; then entering the human eye positioning and marking device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710600050.7A CN107527015B (en) | 2017-07-21 | 2017-07-21 | Human eye video positioning method and system based on skin color detection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107527015A CN107527015A (en) | 2017-12-29 |
CN107527015B true CN107527015B (en) | 2020-08-04 |
Family
ID=60748365
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710600050.7A Active CN107527015B (en) | 2017-07-21 | 2017-07-21 | Human eye video positioning method and system based on skin color detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107527015B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108765503B (en) * | 2018-05-21 | 2020-11-13 | 深圳市梦网科技发展有限公司 | Skin color detection method, device and terminal |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102799868A (en) * | 2012-07-10 | 2012-11-28 | 吉林禹硕动漫游戏科技股份有限公司 | Method for identifying key facial expressions of human faces |
CN105787427A (en) * | 2016-01-08 | 2016-07-20 | 上海交通大学 | Lip area positioning method |
CN105844252A (en) * | 2016-04-01 | 2016-08-10 | 南昌大学 | Face key part fatigue detection method |
CN106611043A (en) * | 2016-11-16 | 2017-05-03 | 深圳百科信息技术有限公司 | Video searching method and system |
CN106682094A (en) * | 2016-12-01 | 2017-05-17 | 深圳百科信息技术有限公司 | Human face video retrieval method and system |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101046677B1 (en) * | 2011-03-15 | 2011-07-06 | 동국대학교 산학협력단 | Methods for tracking position of eyes and medical head lamp using thereof |
- 2017-07-21 CN CN201710600050.7A patent/CN107527015B/en active Active
Non-Patent Citations (1)
Title |
---|
"基于肤色的人脸检测和性别识别的研究";姚锡钢;《中国优秀博硕士学位论文全文数据库(硕士)信息科技辑》;20061215(第12期);第1-3页和第6-9页 * |
Also Published As
Publication number | Publication date |
---|---|
CN107527015A (en) | 2017-12-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4725690B2 (en) | Video identifier extraction device | |
Huang et al. | Highly accurate moving object detection in variable bit rate video-based traffic monitoring systems | |
CN107481222B (en) | Rapid eye and lip video positioning method and system based on skin color detection | |
WO2017107188A1 (en) | Method and apparatus for rapidly recognizing video classification | |
CN107371022B (en) | Inter-frame coding unit rapid dividing method applied to HEVC medical image lossless coding | |
CN106682094B (en) | Face video retrieval method and system | |
Chao et al. | A novel rate control framework for SIFT/SURF feature preservation in H. 264/AVC video compression | |
CN107506691B (en) | Lip positioning method and system based on skin color detection | |
CN111160295A (en) | Video pedestrian re-identification method based on region guidance and space-time attention | |
EP3438883A1 (en) | Method of processing moving picture and apparatus thereof | |
Soh et al. | Reduction of video compression artifacts based on deep temporal networks | |
CN107563278B (en) | Rapid eye and lip positioning method and system based on skin color detection | |
US20120237126A1 (en) | Apparatus and method for determining characteristic of motion picture | |
CN111008608A (en) | Night vehicle detection method based on deep learning | |
CN107516067B (en) | Human eye positioning method and system based on skin color detection | |
CN109492545B (en) | Scene and compressed information-based facial feature positioning method and system | |
CN106611043B (en) | Video searching method and system | |
CN107527015B (en) | Human eye video positioning method and system based on skin color detection | |
CN106664404A (en) | Block segmentation mode processing method in video coding and relevant apparatus | |
CN107423704B (en) | Lip video positioning method and system based on skin color detection | |
CN105992012B (en) | Error concealment method and device | |
CN107509074B (en) | Self-adaptive 3D video compression coding and decoding method based on compressed sensing | |
CN115239551A (en) | Video enhancement method and device | |
CN106572312B (en) | Panoramic video self-adaptive illumination compensation method and system | |
JP2011008508A (en) | Significant information extraction method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information |
Address after: 518057 Guangdong city of Shenzhen province Nanshan District Guangdong streets high in the four Longtaili Technology Building Room 325 No. 30
Applicant after: Shenzhen mengwang video Co., Ltd
Address before: 518057 Guangdong city of Shenzhen province Nanshan District Guangdong streets high in the four Longtaili Technology Building Room 325 No. 30
Applicant before: SHENZHEN MONTNETS ENCYCLOPEDIA INFORMATION TECHNOLOGY Co.,Ltd. |
|
GR01 | Patent grant | ||