CN104582186A - Vision capture illuminating system based on internet of things and control method thereof - Google Patents

Vision capture illuminating system based on internet of things and control method thereof

Info

Publication number
CN104582186A
Authority
CN
China
Prior art keywords
internet
things
unit
lighting device
algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510018838.8A
Other languages
Chinese (zh)
Other versions
CN104582186B (en)
Inventor
顾思雨
杨鸿宇
余谦
程晓磊
李永基
庞辉庆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huahuan (Yunnan) Technology Co., Ltd.
Original Assignee
Beijing Danpu Famo Iot Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Danpu Famo Iot Technology Co Ltd
Priority to CN201510018838.8A
Publication of CN104582186A
Application granted
Publication of CN104582186B
Legal status: Active (current)


Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B — CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B 20/00 — Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B 20/40 — Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Circuit Arrangement For Electric Light Sources In General (AREA)

Abstract

The invention belongs to the technical field of dynamic lighting, and in particular relates to a vision capture illumination system based on the Internet of Things and a control method of the illumination system. The vision capture illumination system is formed by connecting multiple lighting devices. Each lighting device comprises a frog-eye bionic vision capture unit, an Internet of Things coordination unit, and a lighting device control unit; the frog-eye bionic vision capture unit is connected to the Internet of Things coordination unit, and the Internet of Things coordination unit is connected to the lighting device control unit. The multiple Internet of Things coordination units are connected through a wireless multi-hop networking protocol and form an intelligent coordination center. In the control method, the capture units measure the size, movement direction, movement speed, and acceleration of a moving object in the system's field of view and send them to the Internet of Things coordination units, which send corresponding instructions to the control units of all the lighting devices. This guarantees a dynamic illumination support effect in the direction of travel of the moving object, a sufficient field of view and light brightness for the illumination system, and enough response time in an emergency.

Description

An illumination system based on Internet of Things visual capture and a control method thereof
Technical field
The invention belongs to the technical field of dynamic lighting, and in particular relates to an illumination system based on Internet of Things visual capture and a control method thereof.
Background technology
Illumination support is a necessary condition for human production and life activities carried out without natural light (e.g., at night or in underground facilities). Where the space of human activity is large, the coverage of a single lighting device is limited, so a large number of lighting devices must jointly provide illumination support. However, because human movement speed is limited (even aboard vehicles), only some of the lighting devices are needed during any period of time, and the illumination support provided by most lighting devices in a large space is wasted. Under policies promoting "improving energy utilization and achieving energy saving and emission reduction", how to realize systematic, intelligent cooperative control of lighting devices, improve their utilization, extend their service life, and reduce the energy consumption of large-scale lighting systems is an urgent problem in the modernization of every city.
However, energy saving and emission reduction must not come at the cost of restricting human production and life activities: pedestrians and the drivers and passengers of vehicles must still receive sufficiently safe, real-time illumination support. Existing intelligent lighting control systems can switch all lighting devices off when nobody passes and light the corresponding devices once a pedestrian or vehicle has passed a given position. But the illumination support such systems provide is fixed; they cannot dynamically adjust the brightness, direction, and range of illumination according to the characteristics (size, direction of movement, speed, and acceleration) of the object to be illuminated. On the one hand, such systems pose a safety hazard for vehicles that move fast and change direction quickly: they cannot provide effective dynamic-tracking illumination in real time, which limits the driver's field of view and creates danger. On the other hand, for pedestrians who move slowly and change direction gradually, this mechanical control of lighting devices still wastes most of the illumination resources. In addition, current lighting device control systems in large spaces (outdoors, culverts, tunnels, etc.) are all based on expensive sensors and on hardware development platforms such as FPGA and PLC, making the cost of intelligent control per lighting device very high; this limits the intelligent development of lighting devices in large spaces, and application examples of such intelligent control products are scarcely seen at present.
Because traditional image recognition algorithms have high computational complexity, existing image recognition equipment either relies on an embedded hardware computing environment with high performance and large storage, or transmits data over the network to a cloud service for computation. Common embedded devices usually have low performance or high cost, and the transmission volume of visual information is huge; therefore a low-cost, high-performance software vision algorithm is essential for an intelligent lighting device system.
Traditional so-called lighting device image recognition merely identifies a moving object and its direction. It cannot reliably distinguish vehicles and pedestrians from pets, flying birds, or wind-blown litter; it cannot obtain the speed of vehicles and pedestrians, nor coordinate any number of lighting devices on a two-way road so as to light the corresponding devices according to actual demand and ensure the user's safe travel; and image recognition on images from a single camera easily mistakes swaying tree shadows on the road surface for moving vehicles or pedestrians.
Traditional wireless multi-hop ad hoc networking technology only realizes measurement, transmission, and relay forwarding of information under the unified control of a host computer; it cannot realize autonomous, collaborative control by multiple nodes. The reason is that in a network of n nodes, if each node can send control instructions to the other n-1 nodes, then every node may receive up to n-1 control instructions at the same moment. When n is large, the up to (n-1)² instructions transmitted in real time in the network cause data congestion and large-scale control delay (for example, with n = 100 nodes, nearly 10,000 instructions may be in flight at once); at the same time, an individual node may switch state too frequently under simultaneous control commands, lose some instructions, or even deadlock on conflicting instructions and bring the system down.
Summary of the invention
In view of the above problems, the present invention proposes an illumination system based on Internet of Things visual capture and a control method thereof.
An illumination system based on Internet of Things visual capture is formed by connecting multiple lighting devices. Each lighting device comprises a frog-eye bionic visual capture unit, an Internet of Things coordination unit, and a lighting device control unit; the frog-eye bionic visual capture unit is connected to the Internet of Things coordination unit, and the Internet of Things coordination unit is connected to the lighting device control unit. The multiple Internet of Things coordination units are connected through a wireless multi-hop ad hoc networking protocol and form an intelligent coordination center.
The frog-eye bionic visual capture unit runs in an embedded Linux or Android platform environment. Through the Internet of Things coordination unit connected to it and the intelligent coordination center formed by the multiple coordination units, it forms, together with the other frog-eye bionic visual capture units, a compound-eye visual capture computing cluster that possesses the frog eye's ability to recognize dynamic objects. A multi-frog-eye collaborative bionic visual capture algorithm is adopted to capture regularly moving objects from fixed and randomly moving backgrounds, extract the size, direction of movement, speed, and acceleration of moving objects in the field of view, and pass the information to the Internet of Things coordination unit in real time.
According to the computation results from the frog-eye bionic visual capture unit, the Internet of Things coordination unit sends instructions to each lighting device control unit through the intelligent coordination center formed by the multiple Internet of Things coordination units, so as to ensure a dynamic illumination support effect in the direction of travel of a moving object and to guarantee a sufficient field of view, sufficient brightness, and enough reaction time in an emergency.
The lighting device control unit adopts a thyristor voltage regulation and stabilization circuit, responds in real time to the instructions sent by the Internet of Things coordination unit, adjusts the brightness and lighting time of a single lighting device, and can feed back lighting device faults to the coordination center.
A control method for the illumination system based on Internet of Things visual capture comprises:
Step 1: adjust the camera angle of the frog-eye bionic visual capture unit so that it forms an angle of 30-70 degrees with each of the following directions of a target on the near road surface: the direction of travel, the opposite direction, the direction parallel to the road surface and perpendicular to the direction of travel, and the direction perpendicular to the road surface, so that the driving direction on the near road surface coincides with the lower-left to upper-right diagonal of the picture captured by the camera;
Step 2: based on the opencv and ccv open-source computer vision libraries, build the basic image processing algorithm dependency library in the embedded Linux or Android platform environment of the frog-eye bionic visual capture unit, including: a camera data acquisition and analysis program, a background difference algorithm, a Gaussian background algorithm, a median filtering algorithm, dilation and erosion algorithms, a down-sampling algorithm, and an up-sampling algorithm;
Step 3: the camera of the frog-eye bionic visual capture unit collects video data in real time; the frame difference method and the Gaussian background algorithm are then used to separate the background and foreground of the images and detect the moving objects in the video; erosion, dilation, and filtering algorithms are subsequently used to remove noise from the images;
Step 4: extract the size, contour, shape, color, motion trajectory, and travel time of each labeled target from the image data of the collected real-time video;
Step 5: from the extracted features of the current moving object, use a logistic regression model to compute the judgment of whether it is a vehicle or pedestrian, together with the confidence of that judgment;
Step 6: according to the imaging law that near objects appear large and far objects appear small, and combining the pixels and the angle of the camera, calculate the speed and direction of the moving object by analyzing its motion trajectory;
Step 7: the frog-eye bionic visual capture unit computes and judges whether the moving speed and acceleration of the target image are within a reasonable range of values, and combines the captured and computed target image feature data over the data network established with multiple other frog-eye bionic visual capture units by the wireless sensor network protocol, so as to filter out the signals of interfering objects such as flying birds, insects, litter, tree shadows, and pets; the recognition result is then sent to the Internet of Things coordination unit;
Step 8: according to the speed and direction of the identified valid targets, the Internet of Things coordination unit selects the maximum speed in each direction among all targets and sends control signals to one or more lighting device control units in that direction, so that several lamps along the direction of movement of that maximum speed light up; if valid targets move in both directions of a two-way road, the lighting devices on both sides of the road light a corresponding number of devices according to their respective speeds.
The method of extracting the labeled target features in step 4 is: label the samples of vehicles and pedestrians in the video as positive examples, and extract the size, contour, shape, color, motion trajectory, and travel time of each labeled target; according to the imaging law that near objects appear large and far objects appear small, let the observed target area be S, the angle between the line from the vehicle to the camera and the road surface be A, and the angle between that line and the direction perpendicular to the road surface be B; the formula for converting the target size is then S/(sinA × sinB); then extract an equal number of other images of the road surface as negative examples, use the positive examples as training samples, and use manual weight tuning or logistic regression, neural networks, or support vector machines as the regression model to train a recognizer that identifies the vehicles and pedestrians in the video.
The regression model adopts logistic regression, with the decision formula

h_θ(x) = g(θ^T x) = 1/(1 + e^(-θ^T x)),

where the label of each sample, recording whether it is a vehicle or pedestrian, is the training value of h_θ(x), and the extracted size, contour, shape, color, motion trajectory, and travel time features of the labeled target form the sample data x; after the model parameters θ are trained, this logistic regression model gives the judgment of whether a target is a vehicle or pedestrian.
The method of calculating the speed of the moving object in step 6 is: let the measured speed of the vehicle be V, and the angle between the line from the vehicle to the camera and the direction of the road be A; the converted speed is then V × sinA.
The Internet of Things coordination unit in step 8 is based on wireless multi-hop ad hoc networking technology and forms the intelligent coordination center with the other Internet of Things coordination units by adopting a multi-coordination normalization algorithm. The algorithm is as follows: each node in the network no longer merely uploads data and relays instructions; instead, the instructions coming from the frog-eye bionic visual capture unit and from the Internet of Things coordination units are coordinated and normalized within a 10 ms time window. The node analyzes the redundancy among the instructions about to be sent to each node and, according to the requirement of the farthest control range, outputs one packet containing the control information for multiple nodes; this result is then sent to the nearest node, which is informed that the result must continue to be forwarded to the other nodes. This guarantees that within each 10 ms window every node receives at most one instruction and sends at most one instruction, so that at most 2n instructions are transmitted in the network at the same time, where n, a positive integer, is the number of network nodes.
The working method of the Internet of Things coordination unit is:
Step 1: after an Internet of Things coordination unit is powered on, it maintains the energy-saving illumination state and enters a 10 ms waiting window;
Step 2: judge whether an instruction has arrived from another Internet of Things coordination unit, and whether the frog-eye bionic visual capture unit of this lighting device has issued an instruction; if either judgment is yes, perform step 3, otherwise return to step 1;
Step 3: combine the instructions of the multiple Internet of Things coordination units, analyze the direction, region, and intensity of the required illumination, and send intensity control instructions to the devices that can illuminate that direction and region; judge whether this lighting device is within the required illumination region; if so, provide illumination at the required intensity and hold it for 3-5 s according to the actual spacing of the lighting devices; if not, maintain the energy-saving illumination state.
The beneficial effects of the invention are: every node in the whole network can analyze in real time the data passed from each frog-eye bionic visual capture unit, judge the number of moving objects within the coverage of the current network and each target's real-time requirements on the direction, brightness, and range of illumination support, and send real-time control commands to the lighting device control unit on each node. The whole system is no longer a distributed control system composed of individual frog-eye bionic visual capture systems, but a "compound-eye" visual capture centralized control system formed by the cooperation of multiple "frog eyes": it possesses both the frog eye's ability to recognize dynamic objects and the compound eye's capabilities for cooperative information processing and coordinated control, and can therefore provide, in real time and dynamically, accurate, reliable, and safe illumination support for moving objects within the field of view. The system ensures that a moving object obtains a dynamic illumination support effect in its direction of travel within the system's field of view, with a sufficient field of view, sufficient brightness, and enough reaction time in an emergency; at the same time, for a large-scale centralized lighting system, it saves energy whenever there is no moving object in the field of view.
Accompanying drawing explanation
Fig. 1 is a schematic diagram of the illumination system based on Internet of Things visual capture of the present invention.
Fig. 2 is the method for work flow chart of Internet of Things coordination unit.
Embodiment
A preferred embodiment is described in detail below with reference to the accompanying drawings.
An illumination system based on Internet of Things visual capture, as shown in Figure 1, is formed by connecting multiple lighting devices. Each lighting device comprises a frog-eye bionic visual capture unit, an Internet of Things coordination unit, and a lighting device control unit; the frog-eye bionic visual capture unit is connected to the Internet of Things coordination unit, and the Internet of Things coordination unit is connected to the lighting device control unit. The multiple Internet of Things coordination units are connected through a wireless multi-hop ad hoc networking protocol and form an intelligent coordination center.
The frog-eye bionic visual capture unit runs in an embedded Linux or Android platform environment. Through the Internet of Things coordination unit connected to it and the intelligent coordination center formed by the multiple coordination units, it forms, together with the other frog-eye bionic visual capture units, a compound-eye visual capture computing cluster that possesses the frog eye's ability to recognize dynamic objects. A multi-frog-eye collaborative bionic visual capture algorithm is adopted to capture regularly moving objects from fixed and randomly moving backgrounds, extract the size, direction of movement, speed, and acceleration of moving objects in the field of view, and pass the information to the Internet of Things coordination unit in real time.
According to the computation results from the frog-eye bionic visual capture unit, the Internet of Things coordination unit sends instructions to each lighting device control unit through the intelligent coordination center formed by the multiple Internet of Things coordination units, so as to ensure a dynamic illumination support effect in the direction of travel of a moving object and to guarantee a sufficient field of view, sufficient brightness, and enough reaction time in an emergency.
The lighting device control unit adopts a thyristor voltage regulation and stabilization circuit, responds in real time to the instructions sent by the Internet of Things coordination unit, adjusts the brightness and lighting time of a single lighting device, and can feed back lighting device faults to the coordination center.
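As an illustration of the dimming relationship such a control unit must implement, the following minimal sketch (ours, not the patent's; in Python, with assumed function names) maps a commanded brightness fraction to a thyristor firing angle for a phase-cut dimmer by inverting the standard phase-cut RMS formula with bisection.

    import math

    # Sketch only: map a commanded brightness fraction (0..1) to a thyristor
    # (phase-cut) firing angle. The RMS voltage fraction delivered at firing
    # angle a is sqrt(1 - a/pi + sin(2a)/(2*pi)); it is monotonically
    # decreasing in a, so bisection inverts it.

    def firing_angle_for_brightness(brightness: float) -> float:
        target = max(0.0, min(1.0, brightness))

        def rms_fraction(a: float) -> float:
            return math.sqrt(1 - a / math.pi + math.sin(2 * a) / (2 * math.pi))

        lo, hi = 0.0, math.pi              # 0 = fully on, pi = fully off
        for _ in range(40):                # bisection to ~1e-12 rad
            mid = (lo + hi) / 2
            if rms_fraction(mid) > target:
                lo = mid                   # still too bright: delay firing further
            else:
                hi = mid
        return (lo + hi) / 2

    # e.g. half RMS voltage: firing_angle_for_brightness(0.5) ≈ 1.99 rad (≈ 114°)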
A control method for the illumination system based on Internet of Things visual capture comprises:
Step 1: adjust the camera angle of the frog-eye bionic visual capture unit so that it forms an angle of 30-70 degrees with each of the following directions of a target on the near road surface: the direction of travel, the opposite direction, the direction parallel to the road surface and perpendicular to the direction of travel, and the direction perpendicular to the road surface, so that the driving direction on the near road surface coincides with the lower-left to upper-right diagonal of the picture captured by the camera;
Step 2: based on the opencv and ccv open-source computer vision libraries, build the basic image processing algorithm dependency library in the embedded Linux or Android platform environment of the frog-eye bionic visual capture unit, including: a camera data acquisition and analysis program, a background difference algorithm, a Gaussian background algorithm, a median filtering algorithm, dilation and erosion algorithms, a down-sampling algorithm, and an up-sampling algorithm;
Step 3: the camera of the frog-eye bionic visual capture unit collects video data in real time; the frame difference method and the Gaussian background algorithm are then used to separate the background and foreground of the images and detect the moving objects in the video; erosion, dilation, and filtering algorithms are subsequently used to remove noise from the images (an illustrative sketch of this detection pipeline follows step 8);
Step 4: extract the size, contour, shape, color, motion trajectory, and travel time of each labeled target from the image data of the collected real-time video;
Step 5: from the extracted features of the current moving object, use a logistic regression model to compute the judgment of whether it is a vehicle or pedestrian, together with the confidence of that judgment;
Step 6: according to the imaging law that near objects appear large and far objects appear small, and combining the pixels and the angle of the camera, calculate the speed and direction of the moving object by analyzing its motion trajectory;
Step 7: the frog-eye bionic visual capture unit computes and judges whether the moving speed and acceleration of the target image are within a reasonable range of values, and combines the captured and computed target image feature data over the data network established with multiple other frog-eye bionic visual capture units by the wireless sensor network protocol, so as to filter out the signals of interfering objects such as flying birds, insects, litter, tree shadows, and pets; the recognition result is then sent to the Internet of Things coordination unit;
Step 8: according to the speed and direction of the identified valid targets, the Internet of Things coordination unit selects the maximum speed in each direction among all targets and sends control signals to one or more lighting device control units in that direction, so that several lamps along the direction of movement of that maximum speed light up; if valid targets move in both directions of a two-way road, the lighting devices on both sides of the road light a corresponding number of devices according to their respective speeds.
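The detection pipeline in steps 2-3 maps directly onto stock OpenCV building blocks. The sketch below (Python) shows one way to assemble it; the parameter values (model history, variance threshold, kernel size, minimum contour area) are illustrative assumptions, not values taken from the patent.

    import cv2

    cap = cv2.VideoCapture(0)    # camera of the visual capture unit
    # A Gaussian-mixture background model plays the role of the Gaussian
    # background / frame-difference stage of step 3.
    backsub = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        small = cv2.pyrDown(frame)          # down-sampling (step 2 library)
        fg = backsub.apply(small)           # foreground/background separation
        fg = cv2.medianBlur(fg, 5)          # median filtering
        fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)  # erosion + dilation
        contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        # contours above a minimal area are the candidate moving objects
        # handed to the feature-extraction and recognition steps (4-7)
        moving = [c for c in contours if cv2.contourArea(c) > 100]

    cap.release()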
The method of extracting the labeled target features in step 4 is: label the samples of vehicles and pedestrians in the video as positive examples, and extract the size, contour, shape, color, motion trajectory, and travel time of each labeled target; according to the imaging law that near objects appear large and far objects appear small, let the observed target area be S, the angle between the line from the vehicle to the camera and the road surface be A, and the angle between that line and the direction perpendicular to the road surface be B; the formula for converting the target size is then S/(sinA × sinB); then extract an equal number of other images of the road surface as negative examples, use the positive examples as training samples, and use manual weight tuning or logistic regression, neural networks, or support vector machines as the regression model to train a recognizer that identifies the vehicles and pedestrians in the video.
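A worked illustration of the size-conversion formula S/(sinA × sinB); the function name is ours, and the use of degrees is an assumption, since the patent does not state angle units.

    import math

    def converted_size(S: float, A_deg: float, B_deg: float) -> float:
        # S / (sin A * sin B): undo the perspective shrinking of the blob area
        A, B = math.radians(A_deg), math.radians(B_deg)
        return S / (math.sin(A) * math.sin(B))

    # a 1200-pixel blob viewed at A = 45°, B = 60°:
    print(converted_size(1200, 45, 60))    # ≈ 1959.6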
The regression model adopts logistic regression, with the decision formula

h_θ(x) = g(θ^T x) = 1/(1 + e^(-θ^T x)),

where the label of each sample, recording whether it is a vehicle or pedestrian, is the training value of h_θ(x), and the extracted size, contour, shape, color, motion trajectory, and travel time features of the labeled target form the sample data x; after the model parameters θ are trained, this logistic regression model gives the judgment of whether a target is a vehicle or pedestrian.
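The decision formula is ordinary logistic regression, so it can be exercised with a few lines of NumPy. This is a generic sketch, not the patent's training code; it assumes the step-4 features have already been packed into fixed-length vectors.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_logistic(X, y, lr=0.1, steps=2000):
        # X: (m, d) feature matrix; y: (m,) labels, 1 = vehicle/pedestrian
        Xb = np.hstack([np.ones((X.shape[0], 1)), X])   # bias column
        theta = np.zeros(Xb.shape[1])
        for _ in range(steps):                          # batch gradient descent
            grad = Xb.T @ (sigmoid(Xb @ theta) - y) / len(y)
            theta -= lr * grad
        return theta

    def predict(theta, x):
        # h_theta(x): confidence that x is a vehicle or pedestrian
        return sigmoid(theta @ np.concatenate([[1.0], x]))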
The method of calculating the speed of the moving object in step 6 is: let the measured speed of the vehicle be V, and the angle between the line from the vehicle to the camera and the direction of the road be A; the converted speed is then V × sinA.
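And the corresponding one-function speed conversion (degrees again assumed):

    import math

    def converted_speed(V: float, A_deg: float) -> float:
        return V * math.sin(math.radians(A_deg))   # V × sinA

    print(converted_speed(10.0, 30))   # 5.0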
The Internet of Things coordination unit in step 8 is based on wireless multi-hop ad hoc networking technology and forms the intelligent coordination center with the other Internet of Things coordination units by adopting a multi-coordination normalization algorithm. The algorithm is as follows: each node in the network no longer merely uploads data and relays instructions; instead, the instructions coming from the frog-eye bionic visual capture unit and from the Internet of Things coordination units are coordinated and normalized within a 10 ms time window. The node analyzes the redundancy among the instructions about to be sent to each node and, according to the requirement of the farthest control range, outputs one packet containing the control information for multiple nodes; this result is then sent to the nearest node, which is informed that the result must continue to be forwarded to the other nodes. This guarantees that within each 10 ms window every node receives at most one instruction and sends at most one instruction, so that at most 2n instructions are transmitted in the network at the same time, where n, a positive integer, is the number of network nodes.
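The patent describes this normalization only in prose. The sketch below shows one plausible reading of it: collect everything that arrives within a 10 ms window, collapse duplicate instructions per target node, and emit a single packet that the nearest neighbour keeps forwarding. The data structures and the keep-the-strongest merge rule are our assumptions.

    import time

    WINDOW_S = 0.010    # the 10 ms coordination window

    def coordinate(incoming):
        # incoming: iterable of (target_node, intensity) instruction tuples
        deadline = time.monotonic() + WINDOW_S
        merged = {}                       # at most one instruction per node
        for target, intensity in incoming:
            if time.monotonic() > deadline:
                break
            # redundant instructions for one node collapse to the strongest
            merged[target] = max(intensity, merged.get(target, 0))
        # one packet carries the control information for many nodes and is
        # sent to the nearest node with a flag telling it to keep forwarding
        return {"controls": merged, "forward": True}

Each node then receives at most one packet and sends at most one packet per window, which is where the bound of 2n simultaneously transmitted instructions comes from.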
The working method of the Internet of Things coordination unit, as shown in Figure 2, specifically comprises (an illustrative sketch follows step 3):
Step 1: after an Internet of Things coordination unit is powered on, it maintains the energy-saving illumination state and enters a 10 ms waiting window;
Step 2: judge whether an instruction has arrived from another Internet of Things coordination unit, and whether the frog-eye bionic visual capture unit of this lighting device has issued an instruction; if either judgment is yes, perform step 3, otherwise return to step 1;
Step 3: combine the instructions of the multiple Internet of Things coordination units, analyze the direction, region, and intensity of the required illumination, and send intensity control instructions to the devices that can illuminate that direction and region; judge whether this lighting device is within the required illumination region; if so, provide illumination at the required intensity and hold it for 3-5 s according to the actual spacing of the lighting devices; if not, maintain the energy-saving illumination state.
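Read as code, the three steps form a simple control loop. The sketch below is our rendering of the Fig. 2 flow; recv_peer_cmds, recv_vision_cmds, and the lamp interface are hypothetical placeholders for the network and hardware calls the patent does not specify.

    import time

    def run_coordination_unit(recv_peer_cmds, recv_vision_cmds, lamp):
        lamp.set_energy_saving()                 # step 1: power-on default
        while True:
            time.sleep(0.010)                    # step 1: 10 ms waiting window
            cmds = list(recv_peer_cmds()) + list(recv_vision_cmds())  # step 2
            if not cmds:
                continue                         # no instruction: back to step 1
            need = max(cmds, key=lambda c: c.intensity)   # step 3: combine
            if lamp.in_region(need.direction, need.region):
                lamp.set_intensity(need.intensity)        # required strength
                time.sleep(4)                    # hold 3-5 s (device spacing)
            lamp.set_energy_saving()             # otherwise stay energy-saving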
The above is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that a person skilled in the art can readily conceive within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be determined by the protection scope of the claims.

Claims (10)

1. An illumination system based on Internet of Things visual capture, characterized in that the system is formed by connecting multiple lighting devices, each lighting device comprising: a frog-eye bionic visual capture unit, an Internet of Things coordination unit, and a lighting device control unit; the frog-eye bionic visual capture unit is connected to the Internet of Things coordination unit, and the Internet of Things coordination unit is connected to the lighting device control unit; the multiple Internet of Things coordination units are connected through a wireless multi-hop ad hoc networking protocol and form an intelligent coordination center.
2. The system according to claim 1, characterized in that the frog-eye bionic visual capture unit runs in an embedded Linux or Android platform environment and, through the Internet of Things coordination unit connected to it and the intelligent coordination center formed by the multiple coordination units, forms together with the other frog-eye bionic visual capture units a compound-eye visual capture computing cluster that possesses the frog eye's ability to recognize dynamic objects; a multi-frog-eye collaborative bionic visual capture algorithm is adopted to capture regularly moving objects from fixed and randomly moving backgrounds, to extract the size, direction of movement, speed, and acceleration of moving objects in the field of view, and to pass the information to the Internet of Things coordination unit in real time.
3. The system according to claim 1, characterized in that, according to the computation results from the frog-eye bionic visual capture unit, the Internet of Things coordination unit sends instructions to each lighting device control unit through the intelligent coordination center formed by the multiple Internet of Things coordination units, so as to ensure a dynamic illumination support effect in the direction of travel of a moving object and to guarantee a sufficient field of view, sufficient brightness, and enough reaction time in an emergency.
4. The system according to claim 1, characterized in that the lighting device control unit adopts a thyristor voltage regulation and stabilization circuit, responds in real time to the instructions sent by the Internet of Things coordination unit, adjusts the brightness and lighting time of a single lighting device, and can feed back lighting device faults to the coordination center.
5. A control method for the system according to claim 1, characterized by comprising:
Step 1: adjust the camera angle of the frog-eye bionic visual capture unit so that it forms an angle of 30-70 degrees with each of the following directions of a target on the near road surface: the direction of travel, the opposite direction, the direction parallel to the road surface and perpendicular to the direction of travel, and the direction perpendicular to the road surface, so that the driving direction on the near road surface coincides with the lower-left to upper-right diagonal of the picture captured by the camera;
Step 2: based on the opencv and ccv open-source computer vision libraries, build the basic image processing algorithm dependency library in the embedded Linux or Android platform environment of the frog-eye bionic visual capture unit, including: a camera data acquisition and analysis program, a background difference algorithm, a Gaussian background algorithm, a median filtering algorithm, dilation and erosion algorithms, a down-sampling algorithm, and an up-sampling algorithm;
Step 3: the camera of the frog-eye bionic visual capture unit collects video data in real time; the frame difference method and the Gaussian background algorithm are then used to separate the background and foreground of the images and detect the moving objects in the video; erosion, dilation, and filtering algorithms are subsequently used to remove noise from the images;
Step 4: extract the size, contour, shape, color, motion trajectory, and travel time of each labeled target from the image data of the collected real-time video;
Step 5: from the extracted features of the current moving object, use a logistic regression model to compute the judgment of whether it is a vehicle or pedestrian, together with the confidence of that judgment;
Step 6: according to the imaging law that near objects appear large and far objects appear small, and combining the pixels and the angle of the camera, calculate the speed and direction of the moving object by analyzing its motion trajectory;
Step 7: the frog-eye bionic visual capture unit computes and judges whether the moving speed and acceleration of the target image are within a reasonable range of values, and combines the captured and computed target image feature data over the data network established with multiple other frog-eye bionic visual capture units by the wireless sensor network protocol, so as to filter out the signals of interfering objects such as flying birds, insects, litter, tree shadows, and pets; the recognition result is then sent to the Internet of Things coordination unit;
Step 8: according to the speed and direction of the identified valid targets, the Internet of Things coordination unit selects the maximum speed in each direction among all targets and sends control signals to one or more lighting device control units in that direction, so that several lamps along the direction of movement of that maximum speed light up; if valid targets move in both directions of a two-way road, the lighting devices on both sides of the road light a corresponding number of devices according to their respective speeds.
6. The method according to claim 5, characterized in that the method of extracting the labeled target features in step 4 is: label the samples of vehicles and pedestrians in the video as positive examples, and extract the size, contour, shape, color, motion trajectory, and travel time of each labeled target; according to the imaging law that near objects appear large and far objects appear small, let the observed target area be S, the angle between the line from the vehicle to the camera and the road surface be A, and the angle between that line and the direction perpendicular to the road surface be B; the formula for converting the target size is then S/(sinA × sinB); then extract an equal number of other images of the road surface as negative examples, use the positive examples as training samples, and use manual weight tuning or logistic regression, neural networks, or support vector machines as the regression model to train a recognizer that identifies the vehicles and pedestrians in the video.
7. The method according to claim 6, characterized in that the regression model adopts logistic regression, with the decision formula

h_θ(x) = g(θ^T x) = 1/(1 + e^(-θ^T x)),

where the label of each sample, recording whether it is a vehicle or pedestrian, is the training value of h_θ(x), and the extracted size, contour, shape, color, motion trajectory, and travel time features of the labeled target form the sample data x; after the model parameters θ are trained, this logistic regression model gives the judgment of whether a target is a vehicle or pedestrian.
8. The method according to claim 5, characterized in that the method of calculating the speed of the moving object in step 6 is: let the measured speed of the vehicle be V, and the angle between the line from the vehicle to the camera and the direction of the road be A; the converted speed is then V × sinA.
9. The method according to claim 5, characterized in that the Internet of Things coordination unit in step 8 is based on wireless multi-hop ad hoc networking technology and forms the intelligent coordination center with the other Internet of Things coordination units by adopting a multi-coordination normalization algorithm, the algorithm specifically comprising: each node in the network no longer merely uploads data and relays instructions; instead, the instructions coming from the frog-eye bionic visual capture unit and from the Internet of Things coordination units are coordinated and normalized within a 10 ms time window; the node analyzes the redundancy among the instructions about to be sent to each node and, according to the requirement of the farthest control range, outputs one packet containing the control information for multiple nodes; this result is then sent to the nearest node, which is informed that the result must continue to be forwarded to the other nodes; this guarantees that within each 10 ms window every node receives at most one instruction and sends at most one instruction, so that at most 2n instructions are transmitted in the network at the same time, where n, a positive integer, is the number of network nodes.
10. The method according to claim 5, characterized in that the working method of the Internet of Things coordination unit is:
Step 1: after an Internet of Things coordination unit is powered on, it maintains the energy-saving illumination state and enters a 10 ms waiting window;
Step 2: judge whether an instruction has arrived from another Internet of Things coordination unit, and whether the frog-eye bionic visual capture unit of this lighting device has issued an instruction; if either judgment is yes, perform step 3, otherwise return to step 1;
Step 3: combine the instructions of the multiple Internet of Things coordination units, analyze the direction, region, and intensity of the required illumination, and send intensity control instructions to the devices that can illuminate that direction and region; judge whether this lighting device is within the required illumination region; if so, provide illumination at the required intensity and hold it for 3-5 s according to the actual spacing of the lighting devices; if not, maintain the energy-saving illumination state.
CN201510018838.8A 2015-01-14 2015-01-14 Vision capture illuminating system based on internet of things and control method thereof Active CN104582186B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510018838.8A CN104582186B (en) 2015-01-14 2015-01-14 Vision capture illuminating system based on internet of things and control method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510018838.8A CN104582186B (en) 2015-01-14 2015-01-14 Vision capture illuminating system based on internet of things and control method thereof

Publications (2)

Publication Number Publication Date
CN104582186A true CN104582186A (en) 2015-04-29
CN104582186B CN104582186B (en) 2017-01-25

Family

ID=53097099

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510018838.8A Active CN104582186B (en) 2015-01-14 2015-01-14 Vision capture illuminating system based on internet of things and control method thereof

Country Status (1)

Country Link
CN (1) CN104582186B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106920135A (en) * 2015-12-28 2017-07-04 航天信息股份有限公司 A kind of realization method and system of POS billing servers off-line trading
CN108135064A (en) * 2017-12-22 2018-06-08 宁波奇巧电器科技有限公司 Identify the automatic sensing security protection searchlight and its control circuit of people and animals
CN111742620A (en) * 2018-02-26 2020-10-02 昕诺飞控股有限公司 Restarting dynamic light effects according to effect type and/or user preference
CN112532953A (en) * 2020-12-23 2021-03-19 深圳市朝阳辉电气设备有限公司 Data processing method and system for intelligent city road lighting control

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201976285U (en) * 2010-11-30 2011-09-14 帖涛 Classroom energy-saving device based on computer vision
KR20110113726A (en) * 2011-10-04 2011-10-18 삼성엘이디 주식회사 Wireless lighting status sensing apparatus
CN102752911A (en) * 2011-04-20 2012-10-24 松下电器产业株式会社 Illumination system and illumination apparatus

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201976285U (en) * 2010-11-30 2011-09-14 帖涛 Classroom energy-saving device based on computer vision
CN102752911A (en) * 2011-04-20 2012-10-24 松下电器产业株式会社 Illumination system and illumination apparatus
KR20110113726A (en) * 2011-10-04 2011-10-18 삼성엘이디 주식회사 Wireless lighting status sensing apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
浦敏 (Pu Min) et al.: "Wireless lighting control system based on the Internet of Things", 《照明工程学报》 (China Illuminating Engineering Journal) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106920135A (en) * 2015-12-28 2017-07-04 航天信息股份有限公司 A kind of realization method and system of POS billing servers off-line trading
CN108135064A (en) * 2017-12-22 2018-06-08 宁波奇巧电器科技有限公司 Identify the automatic sensing security protection searchlight and its control circuit of people and animals
CN111742620A (en) * 2018-02-26 2020-10-02 昕诺飞控股有限公司 Restarting dynamic light effects according to effect type and/or user preference
CN112532953A (en) * 2020-12-23 2021-03-19 深圳市朝阳辉电气设备有限公司 Data processing method and system for intelligent city road lighting control
CN112532953B (en) * 2020-12-23 2021-07-06 深圳市朝阳辉电气设备有限公司 Data processing method and system for intelligent city road lighting control

Also Published As

Publication number Publication date
CN104582186B (en) 2017-01-25

Similar Documents

Publication Publication Date Title
CN103714698B (en) Public transit vehicle passenger flow volume statistical system based on range image and method
CN103702485B (en) A kind of intelligent video induction LED roadway lighting system
CN102837658B (en) Intelligent vehicle multi-laser-radar data integration system and method thereof
CN102638013B (en) The target image identification power transmission state monitoring system of view-based access control model attention mechanism
CN104582186A (en) Vision capture illuminating system based on internet of things and control method thereof
CN107993456B (en) Single-way intelligent traffic light control system and method based on end-of-traffic of sidewalk
CN105262987A (en) Road full-covering panoramic intelligent video monitoring system and method based on Internet of Things
CN104039058B (en) Street lamp control system and method
CN204539531U (en) A kind of wisdom street lamp control system
CN104301377A (en) City cloud based intelligent street lamp and interconnection and interworking control system
CN102665365A (en) Coming vehicle video detection based streetlamp control and management system
CN106781554A (en) Intelligent traffic signal control system
CN204943247U (en) A kind of Intelligent LED lighting street lamp
CN205670385U (en) Urban traffic control device based on mobile phone wireless net
CN109816996A (en) Intelligent traffic light control system based on wireless sensor network
WO2016026073A1 (en) City cloud-based third-generation intelligent street lamp and interconnection and interworking control system
CN208781404U (en) A kind of traffic light control system
CN105809987B (en) It is a kind of based on the wind light mutual complementing formula intelligent traffic light control system more acted on behalf of
CN103117000A (en) Road condition monitoring device
CN102665363B (en) Coming vehicle video detection based streetlamp control device and coming vehicle video detection based streetlamp control method
CN201986239U (en) Control system of city elevated intelligent road lamps
CN204010330U (en) Traffic intersection information release terminal
CN116528437B (en) Intelligent lighting networking linkage control method for indoor parking lot
CN203706431U (en) Sensor network system for monitoring and controlling traffic flow
CN109360431A (en) A kind of self-adapting traffic signal light control algolithm and system based on speed monitoring

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20170329

Address after: Room 602, Unit 3, Building 34, Beichen Community, Beichen Middle Road, Panlong District, Kunming City, Yunnan Province, 650051

Patentee after: Yang Hongyu

Address before: Room 1427B, Block D, Main Building, No. 2 Daxinzhuang North Road, Zhuxinzhuang Town, Changping District, Beijing, 102206

Patentee before: BEIJING DANPU FAMO IOT TECHNOLOGY Co.,Ltd.

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20240206

Address after: Room 602, Unit 3, Building 34, Beichen Community, Beichen Middle Road, Panlong District, Kunming City, Yunnan Province, 650051

Patentee after: Huahuan (Yunnan) Technology Co.,Ltd.

Country or region after: China

Address before: Room 602, Unit 3, Building 34, Beichen Community, Beichen Middle Road, Panlong District, Kunming City, Yunnan Province, 650051

Patentee before: Yang Hongyu

Country or region before: China

TR01 Transfer of patent right