CN107563359A - Face recognition heat analysis generation method for dense crowds - Google Patents

Face recognition heat analysis generation method for dense crowds

Info

Publication number
CN107563359A
CN107563359A (application CN201710912520.3A)
Authority
CN
China
Prior art keywords
image
face
human body
axis
factor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710912520.3A
Other languages
Chinese (zh)
Other versions
CN107563359B (en)
Inventor
杨晓凡
刘玉蓉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xinjiang Rongkun Technology Co.,Ltd.
Original Assignee
Chongqing City Intellectual Property Road Science And Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing City Intellectual Property Road Science And Technology Co Ltd
Priority to CN201710912520.3A
Publication of CN107563359A
Application granted
Publication of CN107563359B
Legal status: Active
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention proposes a face recognition heat analysis generation method for dense crowds, comprising: S1, acquiring human-body feature images and facial feature images from the dense crowd through an image acquisition module, and establishing an initial screening judgment model, so as to extract the attributes of persons entering the crowded region.

Description

Face recognition heat analysis generation method for dense crowds
Technical field
The present invention relates to the field of big data analysis, and in particular to a face recognition heat analysis generation method for dense crowds.
Background art
In today's society people move about frequently, and crowded areas such as shopping malls, stations and airports are equipped with a large number of video surveillance devices. However, these devices only perform simple image acquisition in the crowded area and do not subsequently classify or discriminate the images. Because crowd flows in social life are complex, the persons and places of a crowded region need to be planned rationally and managed and configured accordingly, so that the catering, connecting transport and entrances/exits of the crowded region can be configured reasonably. After a large amount of image feature information has been obtained, the prior art cannot classify it, or classifies it inaccurately, so that no data-sample reference can be provided when the crowded region is later partitioned. This is the technical problem that those skilled in the art urgently need to solve.
Content of the invention
The present invention aims at least to solve the technical problems existing in the prior art, and in particular innovatively proposes a face recognition heat analysis generation method for dense crowds.
To achieve the above object of the present invention, the invention provides a face recognition heat analysis generation method for dense crowds, comprising:
S1, acquiring human-body feature images and facial feature images from the dense crowd through an image acquisition module, and establishing an initial screening judgment model, so as to extract the attributes of persons entering the crowded region.
In the described face recognition heat analysis generation method for dense crowds, preferably, said S1 comprises:
S1-1, it is assumed that every person entering the crowded region is a new user; service staff and persons who enter and leave frequently are not considered in this model, because once enough samples have been collected their number is negligible. When a person leaves the crowded region, the authentication of that person is deemed finished. The human-body feature images and facial feature images in the image are judged from the image information obtained by the image acquisition module; an image-data information coordinate [x, y] is set for image acquisition, the coordinate [x, y] is taken as the base point of the image, and scanning weights are set separately around the origin point [x, y]:
where p is the person-count acquisition factor of the image, a square-root operation is carried out on the four orientations of the [x, y] coordinate, n is a positive integer, nvalid is the decision threshold for the number of validly acquired individuals, and h(i, j) is the number of human-body feature images i and facial feature images j obtained in one orientation,
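The scanning-weight expression itself is published only as an image and is not reproduced in this text, but the counting quantity h(i, j) it relies on — the number of body and face detections obtained in each of the four orientations around the base point [x, y] — can be sketched as follows. All function and variable names here are illustrative assumptions, not taken from the patent.

```python
def quadrant_counts(detections, base):
    """Tally detection centre points in the four orientations (quadrants)
    around the base point [x, y]; each count plays the role of h(i, j)
    for one orientation."""
    x, y = base
    counts = {"NE": 0, "NW": 0, "SE": 0, "SW": 0}
    for cx, cy in detections:
        if cx >= x and cy >= y:
            counts["NE"] += 1
        elif cx < x and cy >= y:
            counts["NW"] += 1
        elif cx >= x:  # here cy < y
            counts["SE"] += 1
        else:
            counts["SW"] += 1
    return counts
```

For example, three detections spread over three quadrants of a base point at the origin yield a count of one in each of those quadrants.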
S1-2, let the human-body feature image weight vector acquired in one orientation be bi=A(c-w)×(cw), where A is the occurrence probability value of the basic human-body feature c and the hand-held article feature w, and cw is the definition value of the basic human-body feature and the hand-held article feature occurring together; the facial feature image weight vector acquired in one orientation is fj=B×(CT), where B is the probability value of acquiring a facial feature, C is the facial expression feature set, and T is the statistical coefficient of the unit region in which the face is successfully recognised; where C={smile,openmouth,downhead,uphead,weeping,halfface}
S1-3, to ensure the stability of the acquired information, multi-region samples are selected and calculated according to the vector values of bi and fj, and the image is then preliminarily screened through the preliminary screening formula, where λ4 is the calculation parameter of the j-th facial expression set integrating the i-th human-body feature image in the image, β4 is the matching parameter of the j-th facial expression set integrating the i-th human-body feature image in the image, Li,j is the total number of appearances of persons in the image, Qi,j is the conditional probability value of the crowded region in the image during preliminary screening, σ2(i, j) is the judgment extreme-value parameter of the crowd density of the crowded region, and Pi,j is the historical person-count value of the crowded region;
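The preliminary screening formula of S1-3 is likewise published only as an equation image. A minimal, hypothetical sketch in its spirit — weighting the appearance count Li,j by the conditional probability Qi,j, normalising by the historical count Pi,j, and comparing against a density threshold standing in for the extreme-value parameter σ2(i, j) — might look like this; the exact combination in the patent may differ.

```python
def preliminary_screen(regions, density_threshold):
    """Hypothetical preliminary screen: for each region, weight the total
    appearance count L by the conditional probability Q, normalise by the
    historical person count P, and keep regions whose score exceeds the
    density threshold."""
    kept = []
    for r in regions:
        score = r["L"] * r["Q"] / max(r["P"], 1)
        if score > density_threshold:
            kept.append(r["id"])
    return kept
```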
S1-4, after the above preliminary screening judgment, the image features are classified and judged, and the image data of the different facial expression feature sets C are subjected to model judgment; the histogram of the valid human-body feature images is extracted, texture information is constructed, and each attribute value in the facial expression feature set is obtained:
Smile attribute value Csmile=∑j λj·δxj·δyj, where δxj and δyj are the X-axis and Y-axis smile feature factors respectively;
Open-mouth attribute value Copenmouth=∑j λj·τxj·τyj, where τxj and τyj are the X-axis and Y-axis open-mouth feature factors respectively;
Head-down attribute value Cdownhead=∑j λj·βxj·βyj, where βxj and βyj are the X-axis and Y-axis head-down feature factors respectively;
Head-up attribute value Cuphead=∑j λj·εxj·εyj, where εxj and εyj are the X-axis and Y-axis head-up feature factors respectively;
Weeping attribute value Cweeping, formed in the same manner from the X-axis weeping feature factor and the Y-axis weeping feature factor;
Half-face attribute value Chalfface=∑j λj·μxj·μyj, where μxj and μyj are the X-axis and Y-axis half-face feature factors respectively;
The preliminary screening is repeated until the resulting repetition rate rises, after which steps S1-1 to S1-3 are ended.
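The six attribute values of step S1-4 share one algebraic shape: a sum over j of a coefficient times an X-axis feature factor times a Y-axis feature factor. A small helper capturing that shared form (the function and parameter names are assumptions, not from the patent):

```python
def expression_attribute(coeffs, x_factors, y_factors):
    """Attribute value of the shared form C = sum_j coeff_j * xf_j * yf_j,
    as used for the smile, open-mouth, head-down, head-up, weeping and
    half-face attribute values in step S1-4."""
    return sum(c * xf * yf for c, xf, yf in zip(coeffs, x_factors, y_factors))
```

For instance, coefficients [1, 2] with X-axis factors [3, 4] and Y-axis factors [5, 6] give 1·3·5 + 2·4·6 = 63.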
In summary, by adopting the above technical solution, the beneficial effects of the invention are as follows:
After acquiring the images, the present invention classifies the persons entering and leaving the crowded region according to their facial information, body shape and clothing differences, so that the corresponding auxiliary facilities of the crowded region can be improved. Classification through this classifier model consumes few system resources and saves time overhead, thereby providing a reasonable configuration scheme for the crowded region and facilitating crowd evacuation and personnel re-assignment.
Additional aspects and advantages of the invention will be set forth in part in the following description, and in part will become apparent from the description or be learned through practice of the invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the invention will become apparent and readily understood from the following description of the embodiments in conjunction with the accompanying drawings, in which:
Fig. 1 is a general schematic diagram of the present invention.
Embodiment
Embodiments of the invention are described in detail below; examples of the embodiments are shown in the drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended only to explain the present invention, and are not to be construed as limiting the invention.
In the description of the invention, it should be understood that terms indicating orientation or positional relationships, such as "longitudinal", "transverse", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner" and "outer", are based on the orientations or positional relationships shown in the drawings, are used only for convenience and simplicity of description, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation; they are therefore not to be construed as limiting the invention.
In the description of the invention, unless otherwise specified and limited, it should be noted that the terms "mounted", "connected" and "coupled" are to be understood broadly; for example, a connection may be mechanical or electrical, may be an internal connection between two elements, may be direct, or may be indirect through an intermediary. For a person of ordinary skill in the art, the specific meaning of the above terms can be understood according to the specific situation.
As shown in Fig. 1, the method of the invention comprises the following steps:
S1, acquiring human-body feature images and facial feature images from the dense crowd through an image acquisition module, and establishing an initial screening judgment model, so as to extract the attributes of persons entering the crowded region;
S2, after the human-body feature images and facial feature images have been judged according to the initial screening judgment model, matching and collecting again the persons leaving the crowded region, and distinguishing through a classifier the regions that the corresponding dense crowd has reached or the corresponding nodes it has left, so as to push the result to a terminal.
S1-1, it is assumed that every person entering the crowded region is a new user; service staff and persons who enter and leave frequently are not considered in this model, because once enough samples have been collected their number is negligible. When a person leaves the crowded region, the authentication of that person is deemed finished. The human-body feature images and facial feature images in the image are judged from the image information obtained by the image acquisition module; an image-data information coordinate [x, y] is set for image acquisition, the coordinate [x, y] is taken as the base point of the image, and scanning weights are set separately around the origin point [x, y]:
where p is the person-count acquisition factor of the image, a square-root operation is carried out on the four orientations of the [x, y] coordinate, n is a positive integer, nvalid is the decision threshold for the number of validly acquired individuals, and h(i, j) is the number of human-body feature images i and facial feature images j obtained in one orientation,
S1-2, let the human-body feature image weight vector acquired in one orientation be bi=A(c-w)×(cw), where A is the occurrence probability value of the basic human-body feature c and the hand-held article feature w, and cw is the definition value of the basic human-body feature and the hand-held article feature occurring together; the facial feature image weight vector acquired in one orientation is fj=B×(CT), where B is the probability value of acquiring a facial feature, C is the facial expression feature set, and T is the statistical coefficient of the unit region in which the face is successfully recognised; where C={smile,openmouth,downhead,uphead,weeping,halfface}
S1-3, to ensure the stability of the acquired information, multi-region samples are selected and calculated according to the vector values of bi and fj, and the image is then preliminarily screened through the preliminary screening formula, where λ4 is the calculation parameter of the j-th facial expression set integrating the i-th human-body feature image in the image, β4 is the matching parameter of the j-th facial expression set integrating the i-th human-body feature image in the image, Li,j is the total number of appearances of persons in the image, Qi,j is the conditional probability value of the crowded region in the image during preliminary screening, σ2(i, j) is the judgment extreme-value parameter of the crowd density of the crowded region, and Pi,j is the historical person-count value of the crowded region;
S1-4, after the above preliminary screening judgment, the image features are classified and judged, and the image data of the different facial expression feature sets C are subjected to model judgment; the histogram of the valid human-body feature images is extracted, texture information is constructed, and each attribute value in the facial expression feature set is obtained:
Smile attribute value Csmile=∑j λj·δxj·δyj, where δxj and δyj are the X-axis and Y-axis smile feature factors respectively;
Open-mouth attribute value Copenmouth=∑j λj·τxj·τyj, where τxj and τyj are the X-axis and Y-axis open-mouth feature factors respectively;
Head-down attribute value Cdownhead=∑j λj·βxj·βyj, where βxj and βyj are the X-axis and Y-axis head-down feature factors respectively;
Head-up attribute value Cuphead=∑j λj·εxj·εyj, where εxj and εyj are the X-axis and Y-axis head-up feature factors respectively;
Weeping attribute value Cweeping, formed in the same manner from the X-axis weeping feature factor and the Y-axis weeping feature factor;
Half-face attribute value Chalfface=∑j λj·μxj·μyj, where μxj and μyj are the X-axis and Y-axis half-face feature factors respectively;
The preliminary screening is repeated until the resulting repetition rate rises, after which steps S1-1 to S1-3 are ended;
S2-1, the image data of the whole crowded region is divided to form the frame-pair sequence (M1,M2),(M2,M3),...,(Mn-1,Mn); the hand-held object boundary of the human-body feature image is located, starting from the head portion of the initial frame of the video image; the entry/exit boundary of a certain human-body feature image is located, the corresponding position of the crowded region where the human-body feature image appears is searched from the tail of the video image, and the position where the human-body feature image appears, the dwell time, and whether the person is shopping or holding an article are judged;
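The frame-pair sequence (M1, M2), (M2, M3), ..., (Mn-1, Mn) of step S2-1 is simply the sequence of consecutive overlapping pairs, which can be formed as follows (names illustrative):

```python
def frame_pairs(frames):
    """Form the overlapping frame-pair sequence
    (M1, M2), (M2, M3), ..., (Mn-1, Mn) described in step S2-1."""
    return list(zip(frames, frames[1:]))
```

For n frames this yields n-1 pairs, each sharing one frame with its neighbour so that frame-to-frame change can be judged in step S2-2.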
S2-2, the frame pairs are compared and captured, and the degree of change of a human-body feature image and facial feature image between the preceding and following video frames is judged,
where |Ei,jLn+Ei,jMn| is the similarity between the query feature Ln to be matched and the frame image Mn, E represents the number of matched images in the crowded region, S represents the interference set affecting the human-body feature images and facial feature images, s and t are positive integers with different values, whose minimum value is 1 and whose maximum value is the number of human-body feature images and facial feature images matched in the matching feature map; ωi,j is the weight of the total number of relevance matches of the facial expression feature set C, Ki is the penalty factor for erroneous matching of human-body feature images in the crowded region, and z and d represent respectively the acquisition set of the human-body feature image and the acquisition set of the next frame of the human-body feature image,
The degree of change is matched, as information, against the position of the crowded region where the corresponding image acquisition module is located, and the positive-correlation conditional function of the crowded-region position and the degree of change is obtained,
where Y(x, y) and Z(x, y) represent respectively the missing interaction relationship between the human-body feature image and the facial feature image at coordinate point (x, y), ηi and σj represent respectively the human-body feature image judgment threshold and the facial feature image judgment threshold, both positive numbers in the open interval (0, 1), and rx,y represents the similarity judgment factor of the human-body feature image and the facial feature image at coordinate (x, y),
S2-3, according to the defined association relationship between the human-body feature image and the facial feature image of each individual, the relevance and the query-data relevance are ranked, and non-dominated individual sets of different relevance grades are produced according to the association relationship; the relevance grades are ordered from small to large according to the number of non-dominated individuals in the human-body feature image and facial feature image grades. If no relevant image matching any feature of the human-body feature image and facial feature image is obtained at the exit of each crowded region, step S2-1 is performed; if the corresponding crowded-region position obtains a relevant image and a feature mark is made at the corresponding position, step S2-4 is performed;
S2-4, a crowded-region log is set, the attribute information of the crowded region is extracted according to the user's request, and similarity calculation is carried out: the query similarity is calculated using the human-body feature image similarity and using the facial feature image similarity, until the log similarity and the query similarity converge; the default human-body feature image and facial feature image relevance and the user-defined relevance are balanced by the matching weight α, giving the weighing result value D[i, j]=maxFi,j+(1-α)·P(i, j)+α·P(i, j, rx,y)+minFi,j, where maxFi,j is the maximum of the degree of change of the human-body feature image and facial feature image, minFi,j is the minimum of the degree of change of the human-body feature image and facial feature image, P(i, j) is the initial judgment decision value of the crowded region, P(i, j, rx,y) is the result judgment decision value of the crowded region, and rx,y represents the similarity judgment factor of the human-body feature image and the facial feature image at coordinate (x, y); the initial judgment decision value performs the initial judgment of the crowded region according to historical feature image data, and the result judgment decision value is the judgment decision value optimised after the judgment of S2-1 to S2-4.
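The weighing result value of S2-4 can be computed directly from its four terms, reading the published formula as D[i, j] = maxFi,j + (1-α)·P(i, j) + α·P(i, j, rx,y) + minFi,j; the "+" after maxFi,j is an assumption, since the published rendering fuses the first two terms. Names are illustrative.

```python
def relevance_weight(max_f, min_f, p_initial, p_result, alpha):
    """Weighing result value
    D[i, j] = maxF + (1 - alpha) * P(i, j) + alpha * P(i, j, r) + minF,
    balancing the default relevance (initial judgment decision value)
    against the user-defined relevance (result judgment decision value)
    with the matching weight alpha."""
    return max_f + (1 - alpha) * p_initial + alpha * p_result + min_f
```

With α = 0 the value depends only on the initial judgment decision value; with α = 1 only on the optimised result judgment decision value.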
In the description of this specification, references to the terms "one embodiment", "some embodiments", "example", "specific example" or "some examples" mean that the specific features, structures, materials or characteristics described in connection with the embodiment or example are included in at least one embodiment or example of the invention. In this specification, schematic references to the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and variations can be made to these embodiments without departing from the principle and purpose of the invention, the scope of which is defined by the claims and their equivalents.

Claims (2)

1. A face recognition heat analysis generation method for dense crowds, characterised by comprising:
S1, acquiring human-body feature images and facial feature images from the dense crowd through an image acquisition module, and establishing an initial screening judgment model, so as to extract the attributes of persons entering the crowded region.
2. The face recognition heat analysis generation method for dense crowds according to claim 1, characterised in that said S1 comprises:
S1-1, it is assumed that every person entering the crowded region is a new user; service staff and persons who enter and leave frequently are not considered in this model, because once enough samples have been collected their number is negligible. When a person leaves the crowded region, the authentication of that person is deemed finished. The human-body feature images and facial feature images in the image are judged from the image information obtained by the image acquisition module; an image-data information coordinate [x, y] is set for image acquisition, the coordinate [x, y] is taken as the base point of the image, and scanning weights are set separately around the origin point [x, y]:
where p is the person-count acquisition factor of the image, a square-root operation is carried out on the four orientations of the [x, y] coordinate, n is a positive integer, nvalid is the decision threshold for the number of validly acquired individuals, and h(i, j) is the number of human-body feature images i and facial feature images j obtained in one orientation,
S1-2, let the human-body feature image weight vector acquired in one orientation be bi=A(c-w)×(cw), where A is the occurrence probability value of the basic human-body feature c and the hand-held article feature w, and cw is the definition value of the basic human-body feature and the hand-held article feature occurring together; the facial feature image weight vector acquired in one orientation is fj=B×(CT), where B is the probability value of acquiring a facial feature, C is the facial expression feature set, and T is the statistical coefficient of the unit region in which the face is successfully recognised; where C={smile,openmouth,downhead,uphead,weeping,halfface}
S1-3, to ensure the stability of the acquired information, multi-region samples are selected and calculated according to the vector values of bi and fj, and the image is then preliminarily screened through the preliminary screening formula, where λ4 is the calculation parameter of the j-th facial expression set integrating the i-th human-body feature image in the image, β4 is the matching parameter of the j-th facial expression set integrating the human-body feature image of the i-th orientation in the image, Li,j is the total number of appearances of persons in the image, Qi,j is the conditional probability value of the crowded region in the image during preliminary screening, σ2(i, j) is the judgment extreme-value parameter of the crowd density of the crowded region, and Pi,j is the historical person-count value of the crowded region;
S1-4, after the above preliminary screening judgment, the image features are classified and judged, and the image data of the different facial expression feature sets C are subjected to model judgment; the histogram of the valid human-body feature images is extracted, texture information is constructed, and each attribute value in the facial expression feature set is obtained:
Smile attribute value Csmile=∑j λj·δxj·δyj, where δxj and δyj are the X-axis and Y-axis smile feature factors respectively;
Open-mouth attribute value Copenmouth=∑j λj·τxj·τyj, where τxj and τyj are the X-axis and Y-axis open-mouth feature factors respectively;
Head-down attribute value Cdownhead=∑j λj·βxj·βyj, where βxj and βyj are the X-axis and Y-axis head-down feature factors respectively;
Head-up attribute value Cuphead=∑j λj·εxj·εyj, where εxj and εyj are the X-axis and Y-axis head-up feature factors respectively;
Weeping attribute value Cweeping, formed in the same manner from the X-axis weeping feature factor and the Y-axis weeping feature factor;
Half-face attribute value Chalfface=∑j λj·μxj·μyj, where μxj and μyj are the X-axis and Y-axis half-face feature factors respectively;
The preliminary screening is repeated until the resulting repetition rate rises, after which steps S1-1 to S1-3 are ended.
CN201710912520.3A 2017-09-29 2017-09-29 Face recognition heat analysis generation method for dense crowds Active CN107563359B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710912520.3A CN107563359B (en) 2017-09-29 2017-09-29 Face recognition heat analysis generation method for dense crowds

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710912520.3A CN107563359B (en) 2017-09-29 2017-09-29 Face recognition heat analysis generation method for dense crowds

Publications (2)

Publication Number Publication Date
CN107563359A true CN107563359A (en) 2018-01-09
CN107563359B CN107563359B (en) 2018-09-11

Family

ID=60984621

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710912520.3A Active CN107563359B (en) Face recognition heat analysis generation method for dense crowds

Country Status (1)

Country Link
CN (1) CN107563359B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110889315A (en) * 2018-09-10 2020-03-17 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and system
CN111666439A (en) * 2020-05-28 2020-09-15 重庆渝抗医药科技有限公司 Working method for rapidly extracting and dividing medical image big data aiming at cloud environment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101021899A (en) * 2007-03-16 2007-08-22 南京搜拍信息技术有限公司 Interactive human face identificiating system and method of comprehensive utilizing human face and humanbody auxiliary information
US20110102553A1 (en) * 2007-02-28 2011-05-05 Tessera Technologies Ireland Limited Enhanced real-time face models from stereo imaging
CN106127173A (en) * 2016-06-30 2016-11-16 北京小白世纪网络科技有限公司 A kind of human body attribute recognition approach based on degree of depth study
CN106599785A (en) * 2016-11-14 2017-04-26 深圳奥比中光科技有限公司 Method and device for building human body 3D feature identity information database
CN107093171A (en) * 2016-02-18 2017-08-25 腾讯科技(深圳)有限公司 A kind of image processing method and device, system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
孙伟: "Research and Implementation of an Adaptive Detection and Tracking Algorithm Based on Human Appearance Features", China Master's Theses Full-text Database, Information Science and Technology *
李远征: "Research on Several Problems in Human Target Tracking and Expression Recognition", China Doctoral Dissertations Full-text Database, Information Science and Technology *


Also Published As

Publication number Publication date
CN107563359B (en) 2018-09-11

Similar Documents

Publication Publication Date Title
CN107644218A (en) The method of work of crowded region behavioural analysis judgement is realized based on image collecting function
CN109711281B (en) Pedestrian re-recognition and feature recognition fusion method based on deep learning
CN100464332C (en) Picture inquiry method and system
CN107273796A (en) A kind of fast face recognition and searching method based on face characteristic
CN108520226B (en) Pedestrian re-identification method based on body decomposition and significance detection
CN109934176A (en) Pedestrian's identifying system, recognition methods and computer readable storage medium
CN110119656A (en) Intelligent monitor system and the scene monitoring method violating the regulations of operation field personnel violating the regulations
CN107977671A (en) A kind of tongue picture sorting technique based on multitask convolutional neural networks
CN101587485B (en) Face information automatic login method based on face recognition technology
CN109118479A (en) Defects of insulator identification positioning device and method based on capsule network
CN107341688A (en) The acquisition method and system of a kind of customer experience
CN106203391A (en) Face identification method based on intelligent glasses
CN110532970A (en) Age-sex's property analysis method, system, equipment and the medium of face 2D image
CN106682578A (en) Human face recognition method based on blink detection
CN109815864A (en) A kind of facial image age recognition methods based on transfer learning
CN110348505B (en) Vehicle color classification model training method and device and vehicle color identification method
CN108762503A (en) A kind of man-machine interactive system based on multi-modal data acquisition
CN107563359B (en) Recognition of face temperature is carried out for dense population and analyzes generation method
CN112150692A (en) Access control method and system based on artificial intelligence
KR102484950B1 (en) A waste classification system based on vision-hyperspectral fusion data
CN103034840A (en) Gender identification method
CN110110606A (en) The fusion method of visible light neural network based and infrared face image
CN106919929A (en) Insulator automatic identifying method in a kind of infrared image based on template matches
CN108647662A (en) A kind of method and system of automatic detection face
CN112069908B (en) Pedestrian re-identification method based on co-occurrence attribute

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20201225

Address after: Room 103, No.108 Mingzhu Avenue, yong'anzhou Town, Gaogang District, Taizhou City, Jiangsu Province

Patentee after: Yang Jianxin

Address before: 402160 27-6 6 Xinglong Avenue, Yongchuan District, Chongqing, 27-6.

Patentee before: CHONGQING ZHIQUAN ZHILU TECHNOLOGY Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20210318

Address after: 518116 701-3, building F, Longjing Science Park, 335 Bulong Road, Ma'antang community, Bantian street, Longgang District, Shenzhen City, Guangdong Province

Patentee after: Shenzhen Zhaohua Investment Development Co.,Ltd.

Address before: Room 103, No.108 Mingzhu Avenue, yong'anzhou Town, Gaogang District, Taizhou City, Jiangsu Province

Patentee before: Yang Jianxin

TR01 Transfer of patent right

Effective date of registration: 20210615

Address after: 506, 5th floor, No. 95, menmen Road, Baijiantan District, Karamay City, Xinjiang Uygur Autonomous Region 834000

Patentee after: Karamay ZHONGSHEN Energy Co.,Ltd.

Address before: 518116 701-3, building F, Longjing Science Park, 335 Bulong Road, Ma'antang community, Bantian street, Longgang District, Shenzhen City, Guangdong Province

Patentee before: Shenzhen Zhaohua Investment Development Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20211231

Address after: 834000 room 409, 4th floor, comprehensive training center building, No. 68, Tuanjie South Road, Shaya County, Aksu Prefecture, Karamay City, Xinjiang Uygur Autonomous Region

Patentee after: Xinjiang Rongkun Technology Co.,Ltd.

Address before: 506, 5th floor, No. 95, menmen Road, Baijiantan District, Karamay City, Xinjiang Uygur Autonomous Region 834000

Patentee before: Karamay ZHONGSHEN Energy Co.,Ltd.