CN103049746A - Method for detecting fighting behavior based on face identification - Google Patents

Method for detecting fighting behavior based on face identification

Info

Publication number
CN103049746A
CN103049746A (application CN201210587739.8A)
Authority
CN
China
Prior art keywords
human body
image
body contour
contour outline
profile
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012105877398A
Other languages
Chinese (zh)
Other versions
CN103049746B (en)
Inventor
刘忠轩
杨宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Letter frame technology (Beijing) Co., Ltd.
Original Assignee
XINZHENG ELECTRONIC TECHNOLOGY (BEIJING) Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by XINZHENG ELECTRONIC TECHNOLOGY (BEIJING) Co Ltd filed Critical XINZHENG ELECTRONIC TECHNOLOGY (BEIJING) Co Ltd
Priority to CN201210587739.8A priority Critical patent/CN103049746B/en
Publication of CN103049746A publication Critical patent/CN103049746A/en
Application granted granted Critical
Publication of CN103049746B publication Critical patent/CN103049746B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a method for detecting fighting behavior based on face identification. The method comprises the following steps: detecting a human body contour in each frame image; when the distance between any two human body contours in the image is determined to be less than a threshold, detecting the facial skin color of each person in the image respectively; and if a bruised or purple color appears in one person's facial skin and its area is not less than a threshold, determining that fighting behavior exists. The fighting process is determined from the distance between any two human body contours and the facial skin color of the persons, without requiring visual monitoring by a user, which reduces the cases in which fighting is not noticed in time.

Description

Method for detecting fighting behavior based on face recognition
Technical field
The present invention relates to the field of security, and in particular to a method for detecting fighting behavior based on face recognition.
Background technology
At present, in the security field, events in a monitored region are detected by cameras, for example detecting moving persons or animals in the region.
Existing detection techniques can only capture image content; they cannot further analyze the content of the images. When a fight between persons appears in an image, a user can only determine this by observing with his own eyes; if it is not seen at the time, the behavior is overlooked.
Summary of the invention
The present invention aims to provide a method for detecting fighting behavior based on face recognition, so as to solve the above problem that fighting phenomena appearing in images are overlooked.
In an embodiment of the present invention, a method for detecting fighting behavior based on face recognition is provided, comprising:
detecting the human body contour in each frame image;
when the distance between any two human body contours in the image is determined to be less than a first threshold, detecting the facial skin color of each person in the image respectively;
if a bruised (blue-green) or purple color appears in one person's facial skin and its area is not less than a second threshold, determining that fighting behavior occurs.
The method of the present invention determines a fighting process from the distance between any two human body contours and the facial skin color of the persons, without requiring the user to watch with his own eyes, which reduces the cases in which fighting is not discovered in time.
Description of drawings
The accompanying drawings described herein are provided for further understanding of the present invention and constitute a part of this application. The illustrative embodiments of the present invention and their description are used to explain the present invention and do not constitute an improper limitation of it. In the drawings:
Fig. 1 is a flowchart of an embodiment;
Fig. 2 is a flowchart of the detection process in an embodiment;
Fig. 3 shows the background image in an embodiment;
Fig. 4 shows the current image in an embodiment;
Fig. 5 shows the difference image in an embodiment;
Fig. 6 is a schematic diagram of selecting boundary points in an embodiment;
Fig. 7 is a schematic diagram of the contour obtained in an embodiment;
Fig. 8 is a schematic diagram of the human body contour separated by the classifier of an embodiment from the surrounding object images;
Fig. 9 is a schematic diagram of the erosion process in an embodiment;
Fig. 10 is a schematic diagram of the dilation process in an embodiment.
Embodiment
The present invention is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Referring to Fig. 1, the steps in an embodiment comprise:
S11: detect the human body contour in each frame image;
S12: when the distance between any two human body contours in the image is determined to be less than a first threshold, detect the facial skin color of each person in the image respectively;
The first threshold can be set according to a number of pixels. It can also be determined proportionally, for example as 1.3-1.7 times the width of the minimum rectangle framing a human body contour.
S13: if a bruised (blue-green) or purple color appears in one person's facial skin and its area is not less than a second threshold, determine that fighting behavior occurs.
The method of the embodiment determines a fighting process from the distance between any two human body contours and the facial skin color of the persons, without requiring the user to watch with his own eyes, which reduces the cases in which fighting is not discovered in time.
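As an illustration of the S12 proximity test, the following Python sketch compares the centers of the bounding rectangles of detected contours. The center-to-center distance measure and the 1.5× ratio are our assumptions for illustration; the embodiment only states that the threshold may be 1.3-1.7 times the minimum rectangle width.

```python
# Sketch of the S12 proximity test. The patent leaves the exact distance
# measure open; here we assume center-to-center distance between bounding
# rectangles and a threshold of 1.5x the smaller rectangle's width.
import itertools
import math

def rect_center(rect):
    """rect = (x, y, width, height); return the center point."""
    x, y, w, h = rect
    return (x + w / 2.0, y + h / 2.0)

def any_pair_close(rects, ratio=1.5):
    """True if any two body rectangles are closer than ratio * min width."""
    for a, b in itertools.combinations(rects, 2):
        (ax, ay), (bx, by) = rect_center(a), rect_center(b)
        dist = math.hypot(ax - bx, ay - by)
        if dist < ratio * min(a[2], b[2]):
            return True
    return False
```

When this returns True, the embodiment proceeds to the facial skin-color check of S13.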
Preferably, referring to Fig. 2, the detection process in an embodiment comprises:
S21: binarize the absolute difference between the current frame image and the background image to obtain a difference image;
Take the color image of the previous frame, shown in Fig. 3, as the background image. Starting from the second color frame, shown in Fig. 4, subtract the background image from the current image, take the absolute value, and binarize it to obtain the difference image d(i, j) shown in Fig. 5. Each frame image is differenced against the background image on the three color channels. For each pixel, if the maximum of the difference results on the three channels is greater than a certain threshold, the value of that point in the grayscale image is assigned 255; otherwise it is assigned 0.
d(i, j) = 255 if max over c ∈ {R, G, B} of |I_c(i, j) − B_c(i, j)| > T, otherwise d(i, j) = 0, where I is the current frame, B is the background image and T is the threshold.
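The per-pixel rule above can be sketched in pure Python. Real implementations would use OpenCV or numpy; here images are nested lists of (R, G, B) tuples, and the threshold value 30 is an arbitrary assumption.

```python
# Sketch of S21: per-pixel binarization of the difference between the
# current frame and the background on three color channels.
def diff_binarize(frame, background, thresh=30):
    """Return a binary image: 255 where the maximum channel difference
    exceeds thresh, 0 elsewhere."""
    out = []
    for row_f, row_b in zip(frame, background):
        out_row = []
        for pix_f, pix_b in zip(row_f, row_b):
            max_diff = max(abs(f - b) for f, b in zip(pix_f, pix_b))
            out_row.append(255 if max_diff > thresh else 0)
        out.append(out_row)
    return out
```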
S22: scan the pixels in the difference image line by line; if a scanned pixel is a white pixel, traverse the contour of the closed region formed by a plurality of white pixels according to the gray values of neighboring pixels;
A connectivity-based contour-following algorithm can be adopted to extract the pedestrian contours in the whole image sequence. Contours are stored in the form of point sequences.
A point on a contour line has a certain jump in gray value relative to its neighboring points, so these points can be extracted by comparing gray values. Referring to Fig. 6, for simplicity all pixels on the image border are removed; each extracted pixel A is compared with the 8 points around it, and when the 8 surrounding reference points are not all identical to it, that point is a boundary point.
The contour-following algorithm first selects a starting point s ∈ S, then follows the boundary clockwise or counterclockwise using connectivity until it returns to the starting point.
For known pixels p, q ∈ S, if there exists a path from p to q and all pixels on the path are contained in S, then p is said to be connected to q. The pixels obtained in the difference image are shown in Fig. 7.
Connectivity is an equivalence relation. For any three pixels p, q and r belonging to S, the following properties hold:
1) pixel p is connected to p itself (reflexivity);
2) if p is connected to q, then q is connected to p (symmetry);
3) if p is connected to q and q is connected to r, then p is connected to r (transitivity).
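The boundary-point criterion of Fig. 6 can be sketched as follows, in pure Python on a 0/255 nested-list image; border pixels are skipped, as in the embodiment.

```python
# Sketch of the boundary-point test of S22: a white pixel is a contour
# point when at least one of its 8 neighbors differs from it.
def boundary_points(img):
    """img: list of rows of 0/255 values; return list of (i, j) points."""
    h, w = len(img), len(img[0])
    points = []
    for i in range(1, h - 1):           # skip the image border
        for j in range(1, w - 1):
            if img[i][j] != 255:
                continue
            neighbors = [img[i + di][j + dj]
                         for di in (-1, 0, 1) for dj in (-1, 0, 1)
                         if (di, dj) != (0, 0)]
            if any(n != img[i][j] for n in neighbors):
                points.append((i, j))
    return points
```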
S23: determine the minimum bounding rectangle containing the boundary pixels of the contour of the closed region;
For the point sequence of a found contour, compute the minimum and maximum values of all its points in the horizontal and vertical directions, X_min, Y_min, X_max, Y_max. The upper-left corner of the bounding rectangle is then (X_min, Y_min), with width = X_max − X_min + 1 and height = Y_max − Y_min + 1.
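A minimal sketch of the S23 computation, following the formulas above:

```python
# Sketch of S23: minimum bounding rectangle of a contour point sequence.
# Width and height include both endpoints, matching the +1 in the text.
def bounding_rect(points):
    """points: list of (x, y); return (x_min, y_min, width, height)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    return (x_min, y_min, x_max - x_min + 1, y_max - y_min + 1)
```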
S24: use a trained classifier to identify the human body contour within the minimum bounding rectangle.
The bounding-rectangle region is subjected to human contour detection by an SVM (support vector machine) classifier based on HOG (histogram of oriented gradients) features.
The classifier trains a separating plane which, as shown in Fig. 8, can distinguish human body images from non-human images in the input picture.
The process of human detection using the support vector machine method is as follows:
1) Training: choose a suitable kernel function k(x_i, x_j).
2) Minimize ||w|| subject to the condition y_i(w·x_i − b) ≥ 1 − ξ_i.
3) Store only the nonzero α_i and the corresponding x_i (these are the support vectors).
4) Scale the image to different sizes by a certain ratio, and at each scale scan the image with a window of size 64×128; then classify the image under each window.
5) Classification: for a pattern X, use the support vectors x_i and the corresponding weights α_i to compute the decision function
f(X) = sign( Σ_i α_i y_i k(x_i, X) − b )
The sign of this function determines whether the region is a human body.
6) The pattern X here is the input candidate human region.
7) The detection strategy for the region to be detected is to classify each window of size 64×128 from top to bottom and from left to right.
8) Shrink the image and classify again, until the region to be detected is smaller than 64×128.
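Steps 4), 7) and 8) amount to a multi-scale sliding-window scan, sketched below. The classifier is abstracted as a callable; the stride of 8 pixels and the shrink factor of 0.8 are our assumptions, since the patent only fixes the 64×128 window size.

```python
# Sketch of the multi-scale sliding-window scan: slide a 64x128 window
# over the image, shrink the image, and repeat until it is too small.
# A real system would plug in the HOG+SVM decision function as `classify`.
def sliding_window_detect(width, height, classify,
                          win_w=64, win_h=128, stride=8, shrink=0.8):
    """Return (scale, x, y) for every window the classifier accepts."""
    detections = []
    scale = 1.0
    w, h = width, height
    while w >= win_w and h >= win_h:     # stop once region < 64x128
        for y in range(0, h - win_h + 1, stride):       # top to bottom
            for x in range(0, w - win_w + 1, stride):   # left to right
                if classify(scale, x, y):
                    detections.append((scale, x, y))
        scale *= shrink
        w, h = int(width * scale), int(height * scale)
    return detections
```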
Preferably, the above embodiment further comprises: performing morphological operations on the difference image of Fig. 5 and carrying out the subsequent operations on the result.
First perform a morphological opening on the difference image to remove isolated points, noise, burrs and thin bridges; then repair the broken human regions by a morphological closing; then output the binary image for subsequent processing.
The general concept of erosion, referring to Fig. 9, is defined as follows. The erosion of X by B is written:
E = X ⊖ B = { x | (B)_x ⊆ X }
Dilation can be regarded as the dual operation of erosion; referring to Fig. 10, it is defined as follows. Reflect the structuring element B about the origin and translate it by a to obtain (B̌)_a; if the intersection of (B̌)_a with X is not empty, record the origin a of this (B̌)_a. The set of all points a satisfying this condition is called the dilation of X by B:
D = X ⊕ B = { a | (B̌)_a ∩ X ≠ ∅ }
Erosion and dilation are not mutually inverse operations, so they can be used in cascade. Erosion followed by dilation is called opening.
The morphological opening is used to eliminate small objects, separate objects at thin connections, and smooth the boundaries of larger objects without obviously changing their area.
The opening of X by B is written:
OPEN(X) = X ∘ B = (X ⊖ B) ⊕ B
Dilation followed by erosion is called closing. It is used to fill small holes in objects, connect adjacent objects, and smooth their boundaries without obviously changing their area. The closing of X by B is written:
CLOSE(X) = X • B = (X ⊕ B) ⊖ B
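The four operations can be sketched in pure Python on 0/255 nested-list images with a 3×3 square structuring element; the element shape is our assumption, since the embodiment does not specify one. Pixels outside the image are treated as 0.

```python
# Sketch of binary erosion/dilation with a 3x3 square structuring
# element, and the opening and closing built from them.
def _get(img, i, j):
    h, w = len(img), len(img[0])
    return img[i][j] if 0 <= i < h and 0 <= j < w else 0

def erode(img):
    # X eroded by B: keep a pixel only if B, centered there, fits in X
    h, w = len(img), len(img[0])
    return [[255 if all(_get(img, i + di, j + dj) == 255
                        for di in (-1, 0, 1) for dj in (-1, 0, 1)) else 0
             for j in range(w)] for i in range(h)]

def dilate(img):
    # X dilated by B: keep a pixel if B, centered there, touches X
    h, w = len(img), len(img[0])
    return [[255 if any(_get(img, i + di, j + dj) == 255
                        for di in (-1, 0, 1) for dj in (-1, 0, 1)) else 0
             for j in range(w)] for i in range(h)]

def opening(img):
    return dilate(erode(img))   # removes isolated points and burrs

def closing(img):
    return erode(dilate(img))   # fills small holes, connects regions
```

For example, opening wipes out an isolated white pixel, as the embodiment uses it for noise removal, while closing leaves it intact.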
Preferably, in the above embodiment, the process of detecting each person's facial skin color comprises:
using an adaboost cascade classifier based on haar features to detect the faces of the persons in the image, obtaining a rectangular area for each face;
projecting the pixels of each rectangular area into HSV space; wherein the rectangular areas comprise:
(Mx − 0.2*Mwidth, My, 0.2*Mwidth, Mheight) and (Mx + Mwidth, My, 0.2*Mwidth, Mheight);
where Mx, My are the upper-left corner coordinates of each face rectangle, Mwidth is the width of the rectangle and Mheight is its height;
counting the pixels in each area that belong to the predetermined color categories.
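A small helper for the two rectangles listed above. The patent does not name these regions; geometrically they are the strips of width 0.2*Mwidth immediately to the left and right of the detected face rectangle, and the function name is ours.

```python
# Sketch of the two check regions derived from a detected face rectangle
# (Mx, My, Mwidth, Mheight), each as (x, y, width, height).
def side_rects(mx, my, mwidth, mheight):
    left = (mx - 0.2 * mwidth, my, 0.2 * mwidth, mheight)
    right = (mx + mwidth, my, 0.2 * mwidth, mheight)
    return left, right
```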
Preferably, in an embodiment, the process of determining that fighting behavior occurs comprises:
determining the number num of pixels in each rectangular area whose value in the H direction lies between 340-360, between 0-20 or between 160-200; if for any area num/(2*0.2*Mwidth*Mheight) > thr1, determining that fighting behavior occurs. The hue of a pixel can be divided into 360 equal parts to distinguish different colors.
thr1 is the second threshold, with a value range of 0.28-0.40;
or, determining that fighting behavior occurs if the weighted sum of the num values of the rectangular areas > thr1;
further comprising: triggering an alarm. The inventors found, after a large number of tests, that if thr1 is below 0.28 there are many false alarms, while if thr1 is greater than 0.4 fights are easily missed; the effect is therefore optimal in the range 0.28-0.40.
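The counting rule can be sketched as follows. Hue values are assumed to be on the 0-360 scale described above; representing each strip as a flat list of hue values is our simplification.

```python
# Sketch of the decision rule: count pixels whose hue falls in the
# bands 340-360, 0-20 or 160-200, then compare the ratio against thr1.
# The two strips each have area 0.2*Mwidth*Mheight, hence the factor 2.
def is_fighting(hues_left, hues_right, mwidth, mheight, thr1=0.34):
    def in_band(h):
        return 340 <= h <= 360 or 0 <= h <= 20 or 160 <= h <= 200
    num = sum(1 for h in hues_left + hues_right if in_band(h))
    return num / (2 * 0.2 * mwidth * mheight) > thr1
```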
Preferably, in the above embodiment, because there is a time difference between two adjacent frames, a person who newly enters the shooting area may appear in the later frame. To track every person in the image, a linked list can be created for each human body contour identified in the image, used to store its track in each frame image, such as a record of positions.
Preferably, in an embodiment, after detecting each human body contour in each frame image, the method further comprises: comparing it with the nearest human body contour in the adjacent previous frame image to determine whether they are the same human body contour;
if so, updating the motion track of this human body contour;
if not, creating a corresponding motion track for this human body contour. The center position of each human body contour in the image can be used as its motion track point.
Preferably, the process of determining whether they are the same human body contour comprises:
if the overlap area S_cross of the two human body contours in the difference image satisfies S_cross ≥ min(S_pre, S_temp) × R, considering them the same human body contour;
where S_cross = Width_cross × Height_cross,
Width_cross = min(right_pre, right_temp) − max(left_pre, left_temp),
Height_cross = min(Bottom_pre, Bottom_temp) − max(Top_pre, Top_temp);
S_pre and S_temp are the areas of the previous-frame and current-frame contours;
Width_cross is the length of the intersection projected onto the horizontal direction;
Height_cross is the length of the intersection projected onto the vertical direction;
right_pre is the value of the right boundary of the previous-frame contour;
right_temp is the value of the right boundary of the current-frame contour;
left_pre is the value of the left boundary of the previous-frame contour;
left_temp is the value of the left boundary of the current-frame contour;
Bottom_pre is the value of the lower boundary of the previous-frame contour;
Bottom_temp is the value of the lower boundary of the current-frame contour;
Top_pre is the value of the upper boundary of the previous-frame contour;
Top_temp is the value of the upper boundary of the current-frame contour;
R = 0.4, where R is the overlap ratio.
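The same-body test can be sketched directly from the formulas above, with boxes given as (left, top, right, bottom); the non-intersection guard is our addition.

```python
# Sketch of the same-body test: overlap area of the previous-frame and
# current-frame bounding boxes, compared with R times the smaller area.
def is_same_body(prev_box, cur_box, r=0.4):
    lp, tp, rp, bp = prev_box
    lc, tc, rc, bc = cur_box
    width_cross = min(rp, rc) - max(lp, lc)
    height_cross = min(bp, bc) - max(tp, tc)
    if width_cross <= 0 or height_cross <= 0:
        return False                      # boxes do not intersect
    s_cross = width_cross * height_cross
    s_prev = (rp - lp) * (bp - tp)
    s_cur = (rc - lc) * (bc - tc)
    return s_cross >= min(s_prev, s_cur) * r
```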
Preferably, the process of updating the motion track of this human body contour comprises:
storing the position coordinates of this human body contour's image in the current frame together with its position coordinates in the adjacent previous frame image;
the process of creating a corresponding motion track for this human body contour comprises:
assigning an ID to this human body contour and recording the position coordinates of its image in the current frame.
Preferably, in the above embodiment, by tracking each human body contour, the number of persons appearing in the images is recorded, and after a fight occurs, the fighting persons can be marked with a conspicuous color in the current image or in historical images, for example by framing them with colored rectangles. Since the positions of the human body contours in the images have been recorded, the positions of the fighting persons can also be traced in earlier historical images, which facilitates subsequent search.
Obviously, those skilled in the art should understand that the above modules or steps of the present invention can be implemented by a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network formed by multiple computing devices. Optionally, they can be implemented by program code executable by a computing device, so that they can be stored in a storage device and executed by a computing device; or they can be made into individual integrated circuit modules, or multiple modules or steps among them can be made into a single integrated circuit module. Thus the present invention is not limited to any specific combination of hardware and software.
The above are only preferred embodiments of the present invention and are not intended to limit it; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (9)

1. A method for detecting fighting behavior based on face recognition, characterized by comprising:
detecting the human body contour in each frame image;
when the distance between any two human body contours in the image is determined to be less than a first threshold, detecting the facial skin color of each person in the image respectively;
if a bruised (blue-green) or purple color appears in one person's facial skin and its area is not less than a second threshold, determining that fighting behavior occurs.
2. The method according to claim 1, characterized in that the detection process comprises:
binarizing the absolute value of the subtraction of the current frame image and the previous frame image to obtain a difference image;
scanning the pixels in the difference image line by line; if a scanned pixel is a white pixel, traversing the contour of the closed region formed by a plurality of white pixels according to the gray values of neighboring pixels;
determining the minimum bounding rectangle containing the boundary pixels of the contour of the closed region;
using a trained classifier to identify the human body contour within the minimum bounding rectangle.
3. The method according to claim 2, characterized by further comprising: performing morphological operations on the difference image and carrying out the subsequent operations on the result.
4. The method according to claim 2, characterized in that identifying the human body contour in the minimum bounding rectangle comprises:
performing human contour detection on the bounding-rectangle region with an SVM (support vector machine) classifier based on HOG (histogram of oriented gradients) features.
5. The method according to claim 4, characterized in that the process of detecting the facial skin color of each person in the image comprises:
using an adaboost cascade classifier based on haar features to detect the faces of the persons in the image, obtaining a rectangular area for each face;
projecting the pixels of each rectangular area into HSV space; wherein the rectangular areas comprise:
(Mx − 0.2*Mwidth, My, 0.2*Mwidth, Mheight) and (Mx + Mwidth, My, 0.2*Mwidth, Mheight);
where Mx, My are the upper-left corner coordinates of each face rectangle, Mwidth is the width of the rectangle and Mheight is its height;
counting the pixels in each area that belong to the predetermined color categories.
6. The method according to claim 5, characterized in that the process of determining that fighting behavior occurs comprises:
determining the number num of pixels in each rectangular area whose value in the H direction lies between 340-360, between 0-20 or between 160-200; if for any area num/(2*0.2*Mwidth*Mheight) > thr1, determining that fighting behavior occurs;
thr1 being the second threshold, with a value range of 0.28-0.40;
or, determining that fighting behavior occurs if the weighted sum of the num values of the rectangular areas > thr1;
further comprising: triggering an alarm.
7. The method according to claim 1 or 5, characterized in that after detecting each human body contour in each frame image, the method further comprises:
comparing it with the nearest human body contour in the adjacent previous frame image to determine whether they are the same human body contour;
if so, updating the motion track of this human body contour;
if not, creating a corresponding motion track for this human body contour.
8. The method according to claim 7, characterized in that the process of determining whether they are the same human body contour comprises:
if the overlap area S_cross of the two human body contours satisfies S_cross ≥ min(S_pre, S_temp) × R, considering them the same human body contour;
where S_cross = Width_cross × Height_cross,
Width_cross = min(right_pre, right_temp) − max(left_pre, left_temp),
Height_cross = min(Bottom_pre, Bottom_temp) − max(Top_pre, Top_temp);
Width_cross is the length of the intersection projected onto the horizontal direction;
Height_cross is the length of the intersection projected onto the vertical direction;
right_pre is the value of the right boundary of the previous-frame contour;
right_temp is the value of the right boundary of the current-frame contour;
left_pre is the value of the left boundary of the previous-frame contour;
left_temp is the value of the left boundary of the current-frame contour;
Bottom_pre is the value of the lower boundary of the previous-frame contour;
Bottom_temp is the value of the lower boundary of the current-frame contour;
Top_pre is the value of the upper boundary of the previous-frame contour;
Top_temp is the value of the upper boundary of the current-frame contour;
R = 0.4, where R is the overlap ratio.
9. The method according to claim 7, characterized in that the process of updating the motion track of this human body contour comprises:
storing the position coordinates of this human body contour's image in the current frame together with its position coordinates in the adjacent previous frame image;
the process of creating a corresponding motion track for this human body contour comprising:
assigning an ID to this human body contour and recording the position coordinates of its image in the current frame.
CN201210587739.8A 2012-12-30 2012-12-30 Method for detecting fighting behavior based on face recognition Expired - Fee Related CN103049746B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210587739.8A CN103049746B (en) 2012-12-30 2012-12-30 Method for detecting fighting behavior based on face recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210587739.8A CN103049746B (en) 2012-12-30 2012-12-30 Method for detecting fighting behavior based on face recognition

Publications (2)

Publication Number Publication Date
CN103049746A true CN103049746A (en) 2013-04-17
CN103049746B CN103049746B (en) 2015-07-29

Family

ID=48062378

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210587739.8A Expired - Fee Related CN103049746B (en) 2012-12-30 2012-12-30 Method for detecting fighting behavior based on face recognition

Country Status (1)

Country Link
CN (1) CN103049746B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108345891A (en) * 2017-01-23 2018-07-31 北京京东尚科信息技术有限公司 Books contour extraction method and device
CN110166521A (en) * 2019-03-28 2019-08-23 浙江绚飞信息科技有限公司 Safety of student monitoring and managing method and system, storage medium based on intelligent school badge
CN111783777A (en) * 2020-07-07 2020-10-16 北京字节跳动网络技术有限公司 Image processing method, image processing device, electronic equipment and computer readable medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101180880A (en) * 2005-02-15 2008-05-14 实物视频影像公司 Video surveillance system employing video primitives
CN101324920A (en) * 2007-06-15 2008-12-17 上海银晨智能识别科技有限公司 Method for searching human face remarkable characteristic and human face comparison method
CN101464952A (en) * 2007-12-19 2009-06-24 中国科学院自动化研究所 Abnormal behavior identification method based on contour
CN101557506A (en) * 2009-05-19 2009-10-14 浙江工业大学 Intelligent detecting device for violent behavior in elevator car based on computer vision
WO2009143279A1 (en) * 2008-05-20 2009-11-26 Ooyala, Inc. Automatic tracking of people and bodies in video
CN101652784A (en) * 2007-03-02 2010-02-17 宝洁公司 Method and apparatus for simulation of facial skin aging and de-aging

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101180880A (en) * 2005-02-15 2008-05-14 实物视频影像公司 Video surveillance system employing video primitives
CN101652784A (en) * 2007-03-02 2010-02-17 宝洁公司 Method and apparatus for simulation of facial skin aging and de-aging
CN101324920A (en) * 2007-06-15 2008-12-17 上海银晨智能识别科技有限公司 Method for searching human face remarkable characteristic and human face comparison method
CN101464952A (en) * 2007-12-19 2009-06-24 中国科学院自动化研究所 Abnormal behavior identification method based on contour
WO2009143279A1 (en) * 2008-05-20 2009-11-26 Ooyala, Inc. Automatic tracking of people and bodies in video
CN101557506A (en) * 2009-05-19 2009-10-14 浙江工业大学 Intelligent detecting device for violent behavior in elevator car based on computer vision

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108345891A (en) * 2017-01-23 2018-07-31 北京京东尚科信息技术有限公司 Books contour extraction method and device
CN110166521A (en) * 2019-03-28 2019-08-23 浙江绚飞信息科技有限公司 Safety of student monitoring and managing method and system, storage medium based on intelligent school badge
CN111783777A (en) * 2020-07-07 2020-10-16 北京字节跳动网络技术有限公司 Image processing method, image processing device, electronic equipment and computer readable medium
CN111783777B (en) * 2020-07-07 2023-11-24 抖音视界有限公司 Image processing method, apparatus, electronic device, and computer readable medium

Also Published As

Publication number Publication date
CN103049746B (en) 2015-07-29

Similar Documents

Publication Publication Date Title
CN103400110B (en) Abnormal face detecting method before ATM cash dispenser
CN103020611A (en) Method for detecting fighting behaviors
CN102081918B (en) Video image display control method and video image display device
Zhang et al. Fast and robust occluded face detection in ATM surveillance
CN102289660B (en) Method for detecting illegal driving behavior based on hand gesture tracking
CN107368778A (en) Method for catching, device and the storage device of human face expression
CN103077375B (en) Detect the method for behavior of fighting
CN101383001A (en) Quick and precise front human face discriminating method
CN103679118A (en) Human face in-vivo detection method and system
WO2018218839A1 (en) Living body recognition method and system
CN103077373B (en) The method detecting behavior of fighting is pushed and shoved based on upper limbs
CN103093274A (en) Pedestrian counting method based on video
Radu et al. A robust sclera segmentation algorithm
CN106599785A (en) Method and device for building human body 3D feature identity information database
Rezaei et al. 3D cascade of classifiers for open and closed eye detection in driver distraction monitoring
CN105893963A (en) Method for screening out optimal easily-recognizable frame of single pedestrian target in video
Hebbale et al. Real time COVID-19 facemask detection using deep learning
CN103049746B (en) Method for detecting fighting behavior based on face recognition
CN106407904B (en) A kind of method and device in determining fringe region
CN103049748A (en) Behavior-monitoring method and behavior-monitoring system
KR101985869B1 (en) A livestock theft surveillance apparatus using morphological feature-based model and method thereof
Hernández et al. People counting with re-identification using depth cameras
CN108491798A (en) Face identification method based on individualized feature
CN103077374A (en) Method for detecting fighting behavior based on up limb height
Savakis et al. Low vision assistance using face detection and tracking on android smartphones

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C56 Change in the name or address of the patentee
CP03 Change of name, title or address

Address after: 100085 A, block 06A, 28 information road, Beijing, Haidian District

Patentee after: Frame robot technology (Beijing) Co., Ltd.

Address before: 100085, Beijing, Haidian District on the road to information on the 1st 28 building A-6

Patentee before: Xinzheng Electronic Technology (Beijing) Co., Ltd.

TR01 Transfer of patent right

Effective date of registration: 20180625

Address after: 100096 Haidian District, Beijing, west 2 new apartment building, three floor commercial room 337.

Patentee after: Letter frame technology (Beijing) Co., Ltd.

Address before: 100085 06A, block A, block 28, Shang Di Road, Haidian District, Beijing.

Patentee before: Frame robot technology (Beijing) Co., Ltd.

TR01 Transfer of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150729

Termination date: 20191230

CF01 Termination of patent right due to non-payment of annual fee