CN101600095A - Video monitoring method and video monitoring system - Google Patents

Video monitoring method and video monitoring system Download PDF

Info

Publication number
CN101600095A
Authority
CN
China
Prior art keywords
video data
data
video
pixel
predetermined region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 200910040774
Other languages
Chinese (zh)
Other versions
CN101600095B (en)
Inventor
谢佳亮
张丛喆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GUANGZHOU JIAQI INTELLIGENT TECHNOLOGY CO LTD
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN 200910040774 priority Critical patent/CN101600095B/en
Publication of CN101600095A publication Critical patent/CN101600095A/en
Application granted granted Critical
Publication of CN101600095B publication Critical patent/CN101600095B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a video monitoring method and a video monitoring system. The video monitoring method comprises the steps of: capturing first video data by a first camera and second video data by a second camera; when the first video data and the second video data overlap, extracting first overlap data and second overlap data; converting the first overlap data into a first grayscale map and the second overlap data into a second grayscale map; for the pixels in a first predetermined region of the first grayscale map and the pixels in a second predetermined region of the second grayscale map, computing, by pixel-by-pixel comparison using a block matching function, the position information at which the pixels of the first predetermined region and the pixels of the second predetermined region match; and stitching the first video data and the second video data according to the position information. The invention makes the video data output by the first camera and the second camera comparatively complete and clear, with no overlapping video data.

Description

Video monitoring method and video monitoring system
Technical field
The present invention relates to the field of computer technology, and in particular to a video monitoring method and a video monitoring system.
Background technology
In the prior art, a region is monitored by installing a single camera for video surveillance. When a large area is to be monitored, multiple cameras must be installed, and the video data output by the multiple cameras may contain overlapping portions. The overlapping portions appear blurred, which impairs the completeness and clarity of the output video data and makes it inconvenient to watch.
Summary of the invention
The invention provides a video monitoring method and a video monitoring system that improve the completeness and clarity of the video data output by multiple cameras.
The technical scheme of the present invention is a video monitoring method comprising the steps of:
Step 1: capturing first video data by a first camera and second video data by a second camera;
Step 2: when the first video data and the second video data overlap, extracting first overlap data of the first video data in the overlap region and second overlap data of the second video data in the overlap region;
Step 3: converting the first overlap data into a first grayscale map and the second overlap data into a second grayscale map;
Step 4: for the pixels in a first predetermined region of the first grayscale map and the pixels in a second predetermined region of the second grayscale map, computing, by pixel-by-pixel comparison using a block matching function, the position information at which the pixels of the first predetermined region and the pixels of the second predetermined region match;
Step 5: calculating from the position information the vertical and horizontal offsets of the first grayscale map relative to the second grayscale map, and stitching the first video data and the second video data according to the vertical and horizontal offsets.
The present invention also discloses a video monitoring system, comprising:
a first camera for capturing first video data;
a second camera for capturing second video data;
an extraction module for extracting, when the first video data and the second video data overlap, first overlap data of the first video data in the overlap region and second overlap data of the second video data in the overlap region;
a conversion module for converting the first overlap data into a first grayscale map and the second overlap data into a second grayscale map;
a computation module for computing, for the pixels in a first predetermined region of the first grayscale map and the pixels in a second predetermined region of the second grayscale map, by pixel-by-pixel comparison using a block matching function, the position information at which the pixels of the first predetermined region and the pixels of the second predetermined region match;
a stitching module for calculating from the position information the vertical and horizontal offsets of the first grayscale map relative to the second grayscale map, and stitching the first video data and the second video data according to the vertical and horizontal offsets.
With the video monitoring method and video monitoring system of the present invention, when the first video data and the second video data overlap, the vertical and horizontal offsets of the first video data relative to the second video data are calculated and the two are stitched accordingly, so that the video data output by the first camera and the second camera is comparatively complete and clear, with no overlapping video data, which is convenient for the user to view. Moreover, since multiple cameras cover a wide range, large-scale video monitoring can be realized.
Description of drawings
Fig. 1 is a flowchart of the video monitoring method of the present invention in one embodiment;
Fig. 2 is a flowchart of the video monitoring method of the present invention in another embodiment;
Fig. 3 is a structural block diagram of the video monitoring system of the present invention in one embodiment;
Fig. 4 is a structural block diagram of the video monitoring system of the present invention in another embodiment.
Embodiment
With the video monitoring method and video monitoring system of the present invention, when the first video data and the second video data overlap, the vertical and horizontal offsets of the first video data relative to the second video data are calculated and the two are stitched accordingly, so that the video data output by the first camera and the second camera is comparatively complete and clear, with no overlapping video data, which is convenient for the user to view. Moreover, since multiple cameras cover a wide range, large-scale video monitoring can be realized.
Specific embodiments of the invention are elaborated below in conjunction with the accompanying drawings.
As shown in Fig. 1, the video monitoring method of the present invention comprises the steps of:
S101: capturing first video data by a first camera and second video data by a second camera. The mounting angles of the first camera and the second camera can be chosen as required; installing two cameras enlarges the range of video monitoring.
S102: when the first video data and the second video data overlap, extracting first overlap data of the first video data in the overlap region and second overlap data of the second video data in the overlap region. In a preferred embodiment, a further step may precede this one: detecting, from the mounting angles of the first camera and the second camera, whether the video data they output overlaps; other methods of detecting the overlap may also be used. Once the overlap angle of the first and second video data has been determined, the first overlap data and the second overlap data in the overlap region can be extracted according to that overlap angle.
S103: converting the first overlap data into a first grayscale map and the second overlap data into a second grayscale map.
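The patent does not specify how the overlap data is converted to a grayscale map. As a minimal sketch under one common convention (ITU-R BT.601 luma weights; the helper name `to_grayscale` is illustrative only):

```python
import numpy as np

def to_grayscale(frame):
    """Convert an H x W x 3 RGB frame to a grayscale map using
    ITU-R BT.601 luma weights (one common convention; the patent
    does not name a specific formula)."""
    weights = np.array([0.299, 0.587, 0.114])
    return frame[..., :3].astype(float) @ weights

# A tiny synthetic 2 x 2 "overlap region".
overlap = np.array([[[255, 0, 0], [0, 255, 0]],
                    [[0, 0, 255], [255, 255, 255]]])
gray = to_grayscale(overlap)  # shape (2, 2)
```

Any other luminance formula would serve equally well here, since the grayscale map is only used for comparing pixel differences.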
S104: for the pixels in a first predetermined region of the first grayscale map and the pixels in a second predetermined region of the second grayscale map, computing, by pixel-by-pixel comparison using a block matching function, the position information at which the pixels of the first predetermined region and the pixels of the second predetermined region match.
The first predetermined region may be part of the data of the first grayscale map or the whole of it, and likewise the second predetermined region may be part or the whole of the second grayscale map. The matching position information computed with the block matching function is most accurate when the two predetermined regions are the entire first and second grayscale maps. In a preferred embodiment the block matching function is the mean of the absolute pixel differences, although other block matching functions may of course be used. Concretely, the pixel-by-pixel comparison takes each pixel of the first predetermined region and each pixel of the second predetermined region, computes the mean of the absolute pixel differences between them, takes the minimum of these means, and records the pixel positions in the first and second predetermined regions at which that minimum is attained.
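The comparison just described can be sketched as follows, assuming the predetermined regions are held as grayscale NumPy arrays and using a small illustrative vertical search range (the function names are hypothetical, not from the patent):

```python
import numpy as np

def mad(block_a, block_b):
    """Block matching function D: mean of the absolute pixel differences."""
    return np.mean(np.abs(block_a.astype(float) - block_b.astype(float)))

def best_vertical_match(region1, region2, max_shift=2):
    """Slide region2 vertically against region1, pixel by pixel,
    and return the shift whose D value is smallest."""
    h = min(region1.shape[0], region2.shape[0]) - max_shift
    best_d, best_dy = None, 0
    for dy in range(-max_shift, max_shift + 1):
        if dy < 0:
            d = mad(region1[-dy:-dy + h], region2[:h])
        else:
            d = mad(region1[:h], region2[dy:dy + h])
        if best_d is None or d < best_d:
            best_d, best_dy = d, dy
    return best_dy

# region2 is region1 with its first two rows cropped, so the best
# match is found two rows down in region1.
region1 = np.arange(80).reshape(10, 8)
region2 = region1[2:, :]
shift = best_vertical_match(region1, region2)
```

The exhaustive search mirrors the "pixel-by-pixel comparison" of the method: every candidate alignment is scored with D, and the minimum identifies the matching position.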
S105: calculating from the position information the vertical and horizontal offsets of the first grayscale map relative to the second grayscale map, and stitching the first video data and the second video data according to the vertical and horizontal offsets. The position information yields the vertical and horizontal offsets of the first grayscale map relative to the second, so the first and second video data can be stitched according to those offsets.
It should be noted that in practical applications video data may also be captured by more than two cameras, designed according to actual needs, which gives the cameras a wider monitoring range; only the first camera and the second camera are shown in the above embodiment.
To prevent jitter or distortion in the first or second video data, in a preferred embodiment the first or second video data is data that has undergone video correction. As shown in Fig. 2, a video correction step S1011 may be included between steps S101 and S102 to perform video correction on the first or second video data. The detailed process is as follows:
Step 1: selecting the data of the first or second video data at a predetermined instant as the reference image and obtaining the corner points of the reference image; these corner points are the feature points of the reference image;
Step 2: obtaining, by region matching, the match points of the data of the first or second video data after the predetermined instant relative to the corner points;
Step 3: calculating, according to an affine model, the motion offset of the match points with respect to the corner points, and performing motion compensation on the first or second video data according to the motion offset.
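A minimal sketch of Step 3 under a four-parameter affine model (the preferred embodiment below names such a model; the parameterization x' = a·x − b·y + tx, y' = b·x + a·y + ty is one common four-parameter form, and the helper name `fit_affine4` plus the sample points are assumptions for illustration):

```python
import numpy as np

def fit_affine4(corners, matches):
    """Least-squares fit of a four-parameter affine model
        x' = a*x - b*y + tx,   y' = b*x + a*y + ty
    (one common four-parameter form: rotation/zoom plus translation)
    mapping reference-image corner points to their match points."""
    rows, rhs = [], []
    for (x, y), (xp, yp) in zip(corners, matches):
        rows.append([x, -y, 1, 0]); rhs.append(xp)
        rows.append([y, x, 0, 1]); rhs.append(yp)
    params, *_ = np.linalg.lstsq(np.array(rows, float),
                                 np.array(rhs, float), rcond=None)
    return params  # a, b, tx, ty

# Corner points that all moved by (+3, -2): a pure translation.
corners = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
matches = [(3.0, -2.0), (13.0, -2.0), (3.0, 8.0)]
a, b, tx, ty = fit_affine4(corners, matches)
```

Motion compensation would then warp the frame by the inverse of the fitted motion, cancelling the estimated jitter.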
In a preferred embodiment, the affine model is a four-parameter affine model.
In a preferred embodiment, step S1011 may further comprise the step of: detecting the distinguishing features of the data of the first or second video data after the predetermined instant relative to the reference image and, when the distinguishing features satisfy a predetermined condition, updating the reference image to the data after the predetermined instant. The distinguishing features may be features such as lighting or offset; the predetermined condition is preset, such as a preset lighting or offset value, and the reference image needs to be updated only when the lighting, offset, or similar feature reaches the preset value. Constantly updating the reference image improves the effect of the video correction and prevents slight jitter in the video data.
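The predetermined condition is left open by the patent; one hypothetical update rule, triggering on a mean-brightness (lighting) drift past a preset threshold, might be sketched as:

```python
import numpy as np

def should_update_reference(ref, frame, light_threshold=30.0):
    """Hypothetical trigger: refresh the reference image when the mean
    brightness of the new frame drifts past a preset threshold. The
    patent only says lighting or offset features may be used."""
    return abs(float(frame.mean()) - float(ref.mean())) > light_threshold

ref = np.full((4, 4), 100.0)
bright = np.full((4, 4), 150.0)   # lighting changed a lot: update
steady = np.full((4, 4), 110.0)   # small change: keep the reference
```

An offset-based trigger could be substituted in the same place, e.g. comparing the fitted motion offset against a preset magnitude.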
The method of stitching the first video data and the second video data is elaborated below with a specific embodiment:
Since the first and second video data must be stitched in both the horizontal and vertical directions, a pixel-by-pixel comparison is adopted. Let the first video data be I and the second video data be J, with sizes wi x hi and wj x hj respectively, where hi = hj; wi and wj denote the widths of the first and second video data, and hi and hj their heights.
1. Take the region of size min(wi/3, wj/3) x hi at the right edge of I and at the left edge of J as the overlap regions, and convert the first and second video data in the overlap regions into grayscale maps I_r and J_l respectively.
2. Let w be the compare width and h the compare height:

    for h = (-Δh, +Δh)
        for w = (1, min(wi/3, wj/3))
            if h < 0
                compute D[I_r(1~w, -h~h_i), J_l(1~w, 1~h+h_j)];
            else
                compute D[I_r(1~w, 1~h_i-h), J_l(1~w, h~h_j)];

Take the coordinate information of the pixel corresponding to the minimum value of D, calculate from it the vertical and horizontal offsets of the first video data relative to the second video data, and stitch the first and second video data according to those offsets.
Here D(I, J) is the block matching function, defined as the mean of the absolute pixel differences. I(x_1~x_2, y_1~y_2) denotes the sub-image of image I with coordinate range x ∈ [x_1, x_2], y ∈ [y_1, y_2]. Video data coordinates start from 1. Δh is the predetermined search range in the vertical direction; h/20 may be taken in an implementation.
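Assuming I and J are already grayscale NumPy arrays of equal height, the exhaustive search of the embodiment above (overlap strips up to a third of each image wide, vertical shifts in (−Δh, +Δh), D as the mean absolute difference) can be sketched as follows; Δh is set to a small demo value rather than h/20, and the function name `find_offsets` is illustrative:

```python
import numpy as np

def find_offsets(gray_i, gray_j, dh=2):
    """Evaluate D over every compare width w and vertical shift h,
    returning the (w, h) pair with the smallest D value:
    w is the horizontal overlap, h the vertical offset."""
    hi, wi = gray_i.shape
    hj, wj = gray_j.shape
    best = (np.inf, 0, 0)  # (D, w, h)
    for w in range(1, min(wi // 3, wj // 3) + 1):
        right = gray_i[:, wi - w:].astype(float)  # right strip of I
        left = gray_j[:, :w].astype(float)        # left strip of J
        for h in range(-dh, dh + 1):
            if h < 0:
                d = np.mean(np.abs(right[-h:, :] - left[:hj + h, :]))
            else:
                d = np.mean(np.abs(right[:hi - h, :] - left[h:, :]))
            if d < best[0]:
                best = (d, w, h)
    return best[1], best[2]

# Two views cut from one scene: columns 0-11 and columns 8-19,
# i.e. a horizontal overlap of 4 columns and no vertical shift.
scene = np.arange(200).reshape(10, 20)
view_i, view_j = scene[:, :12], scene[:, 8:]
overlap_w, shift_h = find_offsets(view_i, view_j)
```

Stitching then amounts to concatenating view_i with view_j minus its first overlap_w columns, after shifting view_j vertically by shift_h rows.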
The present invention also discloses a video monitoring system which, as shown in Fig. 3, comprises a first camera, a second camera, an extraction module, a conversion module, a computation module and a stitching module; the outputs of the first and second cameras are each connected to the input of the extraction module, and the output of the extraction module is connected to the stitching module through the conversion module and the computation module in turn;
the first camera is used to capture first video data;
the second camera is used to capture second video data; the mounting angles of the two cameras can be chosen as required, and installing two cameras enlarges the range of video monitoring;
the extraction module is used to extract, when the first video data and the second video data overlap, first overlap data of the first video data in the overlap region and second overlap data of the second video data in the overlap region; in a preferred embodiment the extraction module is also used to detect, from the mounting angles of the first and second cameras, whether the first and second video data overlap, and other detection methods may also be used; once the overlap angle has been detected, the first and second overlap data in the overlap region can be extracted according to that overlap angle;
the conversion module is used to convert the first overlap data into a first grayscale map and the second overlap data into a second grayscale map;
the computation module is used to compute, for the pixels in a first predetermined region of the first grayscale map and the pixels in a second predetermined region of the second grayscale map, by pixel-by-pixel comparison using a block matching function, the position information at which the pixels of the first predetermined region and the pixels of the second predetermined region match; in a preferred embodiment the block matching function is the mean of the absolute pixel differences, though other block matching functions may be used; concretely, the mean of the absolute differences between the pixels of the first predetermined region and the pixels of the second predetermined region is computed, and the pixel positions in the first and second predetermined regions corresponding to the minimum mean are taken as the matching position information;
the stitching module calculates from the position information the vertical and horizontal offsets of the first grayscale map relative to the second grayscale map and stitches the first and second video data according to those offsets.
In a preferred embodiment, as shown in Fig. 4, the video monitoring system of the present invention further comprises a video correction module connected between the outputs of the first and second cameras and the input of the extraction module, which is used to:
select the data of the first or second video data at a predetermined instant as the reference image and obtain the corner points of the reference image;
obtain, by region matching, the match points of the data of the first or second video data after the predetermined instant relative to the corner points;
calculate, according to an affine model, the motion offset of the match points with respect to the corner points, and perform motion compensation on the first or second video data according to the motion offset.
In this way video correction can be performed on the first or second video data, preventing jitter or distortion from appearing in it.
In a preferred embodiment, the affine model is a four-parameter affine model.
In a preferred embodiment, the video monitoring system of the present invention further comprises a detection module connected to the video correction module, used to detect the distinguishing features of the data of the first or second video data after the predetermined instant relative to the reference image and, when the distinguishing features satisfy a predetermined condition, to update the reference image to the data after the predetermined instant. The distinguishing features may be features such as lighting or offset; the predetermined condition is a preset value of lighting, offset or the like, and the reference image needs to be updated only when such a feature reaches the preset value. Constantly updating the reference image improves the effect of the video correction and prevents slight jitter in the video data.
The above-described embodiments of the present invention do not limit the scope of protection of the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the claims of the present invention.

Claims (10)

1. A video monitoring method, characterized by comprising the steps of:
Step 1: capturing first video data by a first camera and second video data by a second camera;
Step 2: when the first video data and the second video data overlap, extracting first overlap data of the first video data in the overlap region and second overlap data of the second video data in the overlap region;
Step 3: converting the first overlap data into a first grayscale map and the second overlap data into a second grayscale map;
Step 4: for the pixels in a first predetermined region of the first grayscale map and the pixels in a second predetermined region of the second grayscale map, computing, by pixel-by-pixel comparison using a block matching function, the position information at which the pixels of the first predetermined region and the pixels of the second predetermined region match;
Step 5: calculating from the position information the vertical and horizontal offsets of the first grayscale map relative to the second grayscale map, and stitching the first video data and the second video data according to the vertical and horizontal offsets.
2. The video monitoring method according to claim 1, characterized in that the first video data or the second video data is data that has undergone video correction, the video correction process comprising the steps of:
selecting the data of the first or second video data at a predetermined instant as the reference image and obtaining the corner points of the reference image;
obtaining, by region matching, the match points of the data of the first or second video data after the predetermined instant relative to the corner points;
calculating, according to an affine model, the motion offset of the match points with respect to the corner points, and performing motion compensation on the first or second video data according to the motion offset.
3. The video monitoring method according to claim 1, characterized in that step 2 is preceded by the step of: detecting, from the mounting angles of the first camera and the second camera, whether the video data output by the first camera and the second camera overlaps.
4. The video monitoring method according to claim 1, 2 or 3, characterized in that computing with the block matching function the position information at which the pixels of the first predetermined region and the pixels of the second predetermined region match is specifically: computing the mean of the absolute differences between the pixels of the first predetermined region and the pixels of the second predetermined region, and taking the pixel positions in the first and second predetermined regions corresponding to the minimum mean as the matching position information.
5. The video monitoring method according to claim 2, characterized in that between step 1 and step 2 it further comprises the step of: detecting the distinguishing features of the data of the first or second video data after the predetermined instant relative to the reference image and, when the distinguishing features satisfy a predetermined condition, updating the reference image to the data after the predetermined instant.
6. The video monitoring method according to claim 2 or 5, characterized in that the affine model is a four-parameter affine model.
7. A video monitoring system, characterized by comprising:
a first camera for capturing first video data;
a second camera for capturing second video data;
an extraction module for extracting, when the first video data and the second video data overlap, first overlap data of the first video data in the overlap region and second overlap data of the second video data in the overlap region;
a conversion module for converting the first overlap data into a first grayscale map and the second overlap data into a second grayscale map;
a computation module for computing, for the pixels in a first predetermined region of the first grayscale map and the pixels in a second predetermined region of the second grayscale map, by pixel-by-pixel comparison using a block matching function, the position information at which the pixels of the first predetermined region and the pixels of the second predetermined region match;
a stitching module for calculating from the position information the vertical and horizontal offsets of the first grayscale map relative to the second grayscale map and stitching the first and second video data according to the vertical and horizontal offsets.
8. The video monitoring system according to claim 7, characterized by further comprising a video correction module connected between the outputs of the first and second cameras and the input of the extraction module, which is used to:
select the data of the first or second video data at a predetermined instant as the reference image and obtain the corner points of the reference image;
obtain, by region matching, the match points of the data of the first or second video data after the predetermined instant relative to the corner points;
calculate, according to an affine model, the motion offset of the match points with respect to the corner points, and perform motion compensation on the first or second video data according to the motion offset.
9. The video monitoring system according to claim 8, characterized by further comprising a detection module connected to the video correction module, used to detect the distinguishing features of the data of the first or second video data after the predetermined instant relative to the reference image and, when the distinguishing features satisfy a predetermined condition, to update the reference image to the data after the predetermined instant.
10. The video monitoring system according to claim 8 or 9, characterized in that the affine model is a four-parameter affine model.
CN 200910040774 2009-07-02 2009-07-02 Video monitoring method and video monitoring system Expired - Fee Related CN101600095B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200910040774 CN101600095B (en) 2009-07-02 2009-07-02 Video monitoring method and video monitoring system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 200910040774 CN101600095B (en) 2009-07-02 2009-07-02 Video monitoring method and video monitoring system

Publications (2)

Publication Number Publication Date
CN101600095A true CN101600095A (en) 2009-12-09
CN101600095B CN101600095B (en) 2012-12-19

Family

ID=41421304

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200910040774 Expired - Fee Related CN101600095B (en) 2009-07-02 2009-07-02 Video monitoring method and video monitoring system

Country Status (1)

Country Link
CN (1) CN101600095B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102801966A (en) * 2012-08-29 2012-11-28 上海天跃科技股份有限公司 Camera coverage zone overlapping algorithm and monitoring system
CN104754292A (en) * 2013-12-31 2015-07-01 浙江大华技术股份有限公司 Method and device for determining parameters for compensation treatment of video signal
CN116437127A (en) * 2023-06-13 2023-07-14 典基网络科技(上海)有限公司 Video cartoon optimizing method based on user data sharing

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100338631C (en) * 2003-07-03 2007-09-19 马堃 On-site panoramic imagery method of digital imaging device
KR100725053B1 (en) * 2006-06-22 2007-06-08 삼성전자주식회사 Apparatus and method for panorama photographing in portable terminal

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102801966A (en) * 2012-08-29 2012-11-28 上海天跃科技股份有限公司 Camera coverage zone overlapping algorithm and monitoring system
CN102801966B (en) * 2012-08-29 2015-10-28 上海天跃科技股份有限公司 Camera coverage area overlap algorithm and monitoring system
CN104754292A (en) * 2013-12-31 2015-07-01 浙江大华技术股份有限公司 Method and device for determining parameters for compensation treatment of video signal
CN104754292B (en) * 2013-12-31 2017-12-19 浙江大华技术股份有限公司 Vision signal compensates determination method for parameter and device used in processing
CN116437127A (en) * 2023-06-13 2023-07-14 典基网络科技(上海)有限公司 Video cartoon optimizing method based on user data sharing
CN116437127B (en) * 2023-06-13 2023-08-11 典基网络科技(上海)有限公司 Video cartoon optimizing method based on user data sharing

Also Published As

Publication number Publication date
CN101600095B (en) 2012-12-19

Similar Documents

Publication Publication Date Title
CN103108108B (en) Image stabilizing method and image stabilizing device
EP3076654A1 (en) Camera calibration device
CN102270344A (en) Moving object detection apparatus and moving object detection method
CN113252053B (en) High-precision map generation method and device and electronic equipment
CN103577062A (en) Display adjustment method and computer program product thereof
JP6975003B2 (en) Peripheral monitoring device and its calibration method
US20150262343A1 (en) Image processing device and image processing method
CN101600095B (en) Video monitoring method and video monitoring system
CN104135614A (en) Camera displacement compensation method and device
CN102141407B (en) Map building system and method and recording media
CN104376573A (en) Image spot detecting method and system
US20100145618A1 (en) Vehicle collision management systems and methods
CN112084453A (en) Three-dimensional virtual display system, method, computer equipment, terminal and storage medium
CN102469275A (en) Method and device for bad pixel compensation
CN110876053A (en) Image processing device, driving support system, and recording medium
CN110740315B (en) Camera correction method and device, electronic equipment and storage medium
US20130241803A1 (en) Screen sharing apparatus, screen sharing method and screen sharing program
US20120008871A1 (en) Image output device and method for outputting image using the same
CN103176349B (en) Lens detection device and method
CN103185546A (en) Width measuring method and system
CN101526848B (en) Coordinate judging system and method
CN103809818A (en) Method for determining whether a lens device has deviated, and optical touch system thereof
CN110794397B (en) Target detection method and system based on camera and radar
CN103000161B (en) Image display method and device, and intelligent handheld terminal
CN104794680A (en) Multi-camera image mosaicking method and multi-camera image mosaicking device based on same satellite platform

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: GUANGZHOU JIAQI INTELLIGENT TECHNOLOGY CO.,LTD.

Free format text: FORMER OWNER: XIE JIALIANG

Effective date: 20140925

Free format text: FORMER OWNER: ZHANG CONGZHE

Effective date: 20140925

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 510620 GUANGZHOU, GUANGDONG PROVINCE TO: 510670 GUANGZHOU, GUANGDONG PROVINCE

TR01 Transfer of patent right

Effective date of registration: 20140925

Address after: 18/F, Building A5, Headquarters Economic Zone, 243 Science Avenue, Science City, Luogang District, Guangzhou, Guangdong 510670

Patentee after: Guangzhou Jiaqi Intelligent Technology Co.,Ltd.

Address before: Room C2203, 182 Science Avenue, Science City, Guangzhou, Guangdong 510620

Patentee before: Xie Jialiang

Patentee before: Zhang Congzhe

ASS Succession or assignment of patent right

Owner name: XIE JIALIANG

Free format text: FORMER OWNER: GUANGZHOU JIAQI INTELLIGENT TECHNOLOGY CO.,LTD.

Effective date: 20141023

Owner name: ZHANG CONGZHE

Effective date: 20141023

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20141023

Address after: 18/F, Building A5, Headquarters Economic Zone, 243 Science Avenue, Science City, Luogang District, Guangzhou, Guangdong 510670

Patentee after: Xie Jialiang

Patentee after: Zhang Congzhe

Address before: 18/F, Building A5, Headquarters Economic Zone, 243 Science Avenue, Science City, Luogang District, Guangzhou, Guangdong 510670

Patentee before: Guangzhou Jiaqi Intelligent Technology Co.,Ltd.

ASS Succession or assignment of patent right

Owner name: GUANGZHOU JIAQI INTELLIGENT TECHNOLOGY CO.,LTD.

Free format text: FORMER OWNER: XIE JIALIANG

Effective date: 20150226

Free format text: FORMER OWNER: ZHANG CONGZHE

Effective date: 20150226

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20150226

Address after: 18/F, Building A5, Headquarters Economic Zone, 243 Science Avenue, Science City, Luogang District, Guangzhou, Guangdong 510670

Patentee after: Guangzhou Jiaqi Intelligent Technology Co.,Ltd.

Address before: 18/F, Building A5, Headquarters Economic Zone, 243 Science Avenue, Science City, Luogang District, Guangzhou, Guangdong 510670

Patentee before: Xie Jialiang

Patentee before: Zhang Congzhe

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20121219

Termination date: 20210702

CF01 Termination of patent right due to non-payment of annual fee