A lighting system based on Internet of Things visual capture and a control method thereof
Technical field
The invention belongs to the technical field of dynamic lighting, and in particular relates to a lighting system based on Internet of Things visual capture and a control method thereof.
Background art
Lighting support is a necessary condition for the production and life activities that humans carry out without natural light (e.g., at night or in underground facilities). In the wide spaces in which humans move, the coverage of a single lighting device is limited, so a large number of lighting devices must jointly provide lighting support. However, because human movement speed is limited (even when riding public transport), only part of the lighting devices are needed in any period, and the lighting support provided by most devices in a wide space is wasted. Driven by the policy of "improving energy utilization and realizing energy conservation and emission reduction", how to achieve systematic, intelligent cooperative control of lighting devices, improve their utilization rate, extend their service life, and reduce the energy consumption of large-scale lighting systems is therefore an urgent problem in the modernization of every city.
However, energy conservation must not come at the cost of restricting human production and life: pedestrians and the drivers and passengers of vehicles must still receive sufficiently safe, real-time lighting support. Current intelligent lighting control systems can switch all devices off when nobody is present and light the corresponding device after a pedestrian or vehicle passes a given position. But the lighting support such systems provide is fixed; they cannot dynamically adjust the brightness, direction and range of illumination according to the features (size, moving direction, speed and acceleration) of the object to be supported. On the one hand, such systems pose a safety hazard to fast-moving vehicles whose direction changes quickly: they cannot provide effective dynamically tracking illumination in real time, which limits the driver's field of view and causes danger. On the other hand, for slow-moving pedestrians whose direction changes slowly, such mechanical control of the lighting devices still wastes most of the lighting resources. In addition, current lighting-device control systems for large spaces (open air, culverts, tunnels, etc.) are all based on expensive sensors and on hardware development platforms such as FPGAs and PLCs, so the cost of realizing intelligent control per lighting device is very high; this limits the intelligent development of lighting devices in large spaces, and application examples of such intelligent-control products can hardly be seen at present.
Because traditional image-recognition algorithms have high computational complexity, existing image-recognition equipment either relies on a high-performance, high-storage embedded hardware computing environment, or transmits the data over the network to a cloud service for computation. Ordinary embedded devices usually have low performance or high cost, and the volume of visual information to transmit is huge; a low-cost, high-performance software vision algorithm is therefore essential for an intelligent lighting-device system.
Traditional so-called lighting-device image recognition merely identifies moving objects and their direction; it cannot reliably distinguish vehicles and pedestrians from pets, birds or wind-blown litter; it cannot obtain the speed of vehicles and pedestrians, nor coordinate any number of lighting devices on the two sides of a road so as to light the appropriate devices on demand and guarantee the user's safe travel; and image recognition performed on the image of a single camera easily mistakes swaying tree shadows on the road surface for moving vehicles or pedestrians.
Traditional multi-hop wireless ad hoc networking can only realize the measurement, transmission and relaying of information under the unified control of a host computer; it cannot realize autonomous, collaborative control among multiple nodes. The reason is that, in a network of n nodes, if each node can send control instructions to the other n-1 nodes, then each node may receive up to n-1 instructions at the same moment. When n is large, the (n-1)² instructions transmitted in real time in the network cause data congestion and large-scale control lag; meanwhile, too many simultaneous control commands make individual nodes switch state too frequently, some instructions are lost, and instruction interlocking can even bring the system to a halt.
Summary of the invention
In view of the above problems, the present invention proposes a lighting system based on Internet of Things visual capture and a control method thereof.
A lighting system based on Internet of Things visual capture is formed by connecting multiple lighting devices. Each lighting device comprises a frog-eye bionic visual capture unit, an Internet of Things coordination unit and a lighting-device control unit. The frog-eye bionic visual capture unit is connected with the Internet of Things coordination unit, and the Internet of Things coordination unit is connected with the lighting-device control unit. The multiple Internet of Things coordination units are connected by a wireless multi-hop ad hoc network protocol to form an intelligent coordination hub.
The frog-eye bionic visual capture unit runs in an embedded Linux or Android platform environment. Through the connected Internet of Things coordination unit and the intelligent coordination hub formed by the multiple coordination units, it joins the other frog-eye bionic visual capture units to form a compound-eye visual-capture computing cluster that possesses the frog eye's ability to recognize dynamic objects. A multi-frog-eye collaborative visual capture algorithm captures regularly moving objects against both fixed and randomly moving backgrounds, extracts the size, moving direction, speed and acceleration of the moving objects in the field of view, and passes this information to the Internet of Things coordination unit in real time.
According to the results computed by the frog-eye bionic visual capture unit, the Internet of Things coordination unit sends instructions to each lighting-device control unit through the intelligent coordination hub formed by the multiple coordination units, so as to produce a dynamic lighting-support effect in the direction of travel of the moving object, ensuring that it has a sufficient field of view, sufficient brightness, and sufficient reaction time in an emergency.
The lighting-device control unit adopts a thyristor voltage-regulation and voltage-stabilization circuit; it responds in real time to the instructions sent by the Internet of Things coordination unit, regulates the brightness and lighting duration of a single lighting device, and can report lighting-device faults back to the coordination hub.
A control method for the lighting system based on Internet of Things visual capture comprises:
Step 1: adjust the camera angle of the frog-eye bionic visual capture unit so that it makes an angle of 30–70 degrees with each of the following directions of the near-end road-surface target: the direction opposite to travel, the road-surface direction parallel to travel, the direction perpendicular to travel, and the direction perpendicular to the road surface, and so that the direction of travel on the near-end road surface coincides with the lower-left-to-upper-right diagonal of the captured picture;
Step 2: based on the opencv and ccv open-source computer-vision libraries, build the basic image-processing dependency library in the embedded Linux or Android environment of the frog-eye bionic visual capture unit, including: a camera data acquisition and analysis program, a background-difference algorithm, a Gaussian-background algorithm, a median-filtering algorithm, dilation and erosion algorithms, a down-sampling algorithm and an up-sampling algorithm;
Step 3: the camera of the frog-eye bionic visual capture unit collects video data in real time; the frame-difference method and the Gaussian-background algorithm are used to separate image background from foreground and detect the moving objects in the video, after which erosion, dilation and filtering algorithms remove noise from the images;
Step 4: extract the size, contour, shape, color, motion-track and travel-time features of the labeled target from the image data of the collected real-time video;
Step 5: from the obtained features of the current moving object, compute with a logistic regression model the judgment of whether it is a vehicle or pedestrian, together with the confidence of that judgment;
Step 6: using the imaging law that near objects appear large and far ones small, combined with the camera's pixels and angle, analyze the motion track of the moving object and calculate its speed and direction;
Step 7: the frog-eye bionic visual capture unit judges by calculation whether the moving speed and acceleration of the target image lie within a reasonable range, and combines the target-image feature data captured and computed by the other frog-eye bionic visual capture units over the data network established by the wireless-sensor-network protocol, so as to filter out the signals of interfering objects such as birds, insects, wind-blown litter, tree shadows and pets; the recognition result is then sent to the Internet of Things coordination unit;
Step 8: according to the speed and direction of the identified valid targets, the Internet of Things coordination unit selects the maximum speed in each direction among all targets and sends control signals to one or more lighting-device control units in that direction, so that several lamps along the direction of that maximum speed light up; if valid targets move in both directions of a two-way road, the lighting devices on the two sides of the road light the corresponding number of devices according to the respective speeds.
The method of extracting the labeled-target features in step 4 is: label the samples of vehicles and pedestrians in the video as positive examples, and extract the size, contour, shape, color, motion track and travel time of each labeled target. According to the imaging law that near objects appear large and far ones small, let the observed target area be S, the angle between the road surface and the line joining the target and the camera be A, and the angle between that line and the vertical direction of the road surface be B; the formula converting the target size is then S/(sinA × sinB). Extract an equal number of other images of the road surface as negative examples, use the positive examples as training samples, and use manual weight adjustment, or logistic regression, a neural network or a support vector machine as the regression model; the trained recognizer identifies the vehicles and pedestrians in the video.
The regression model adopts logistic regression, whose decision formula is:
hθ(x) = 1 / (1 + e^(−θ^T x))
Taking as the label for hθ(x) whether the marked target is a vehicle or pedestrian, and taking the extracted size, contour, shape, color, motion-track and travel-time features of the labeled target as the sample data x, the model parameters θ are trained; the logistic regression model then outputs the judgment of whether a target is a vehicle or pedestrian.
The method of calculating the speed of the moving object in step 6 is: let the measured speed of the vehicle be V and the angle between the road-surface direction and the line joining the vehicle and the camera be A; the converted speed is then V × sinA.
The Internet of Things coordination unit in step 8 is based on wireless multi-hop ad hoc networking and uses a multi-coordination normalization algorithm together with the other coordination units to form the intelligent coordination hub. The multi-coordination normalization algorithm is as follows: each node in the network no longer merely uploads data and relays instructions; instead, the instructions coming from the frog-eye bionic visual capture units and the Internet of Things coordination units are coordinated and normalized within a 10 ms time window. The node analyzes the redundancy of the instructions about to be sent to each node and, according to the requirement of the farthest control range, outputs a single result packet containing the control information of multiple nodes; this result is sent to the nearest node together with the notice that it is to be forwarded on to the other nodes. This guarantees that within every 10 ms each node receives at most one instruction and sends at most one instruction, so that at most 2n instructions are in transit in the network at any time, where n is the number of network nodes and n is a positive integer.
The working method of the Internet of Things coordination unit is:
Step 1: after an Internet of Things coordination unit starts up, it remains in the energy-saving lighting state and enters a 10 ms waiting window;
Step 2: judge whether an instruction has arrived from another Internet of Things coordination unit, and whether the frog-eye bionic visual capture unit of this lighting device has issued an instruction; if either judgment is yes, perform step 3, otherwise return to step 1;
Step 3: combine the instructions of the multiple Internet of Things coordination units, analyze the direction, region and intensity of the required illumination, and send intensity-control instructions to the devices able to illuminate that direction and region; each lighting device judges whether it lies in the required illumination region: if so, it illuminates at the required intensity and holds it for 3–5 s according to the actual spacing of the lighting devices; if not, it remains in the energy-saving lighting state.
The beneficial effects of the present invention are: every node in the whole network can analyze in real time the data passed over by each frog-eye bionic visual capture unit, judge the number of moving objects in the current network coverage area and each target's real-time requirements for lighting direction, brightness and range, and send real-time control commands to the lighting-device control unit on each node. The whole system is no longer a distributed control system composed of individual frog-eye bionic visual capture systems, but a "compound-eye" visual-capture centralized control system formed by the cooperation of multiple "frog eyes": it possesses both the frog eye's ability to recognize dynamic objects and the compound eye's ability of cooperative information processing and coordinated control, and can therefore provide accurate, reliable and safe lighting support for moving objects in the field of view in real time and dynamically. It guarantees the dynamic lighting-support effect that a moving object obtains in its direction of travel within the system's field of view, ensuring a sufficient field of view, sufficient brightness and sufficient reaction time in an emergency; at the same time, for a large centralized lighting system, it saves energy whenever there is no moving object in the field of view.
Brief description of the drawings
Fig. 1 is a schematic diagram of a lighting system based on Internet of Things visual capture according to the present invention.
Fig. 2 is a flow chart of the working method of the Internet of Things coordination unit.
Detailed description of the embodiments
The preferred embodiments are described in detail below in conjunction with the accompanying drawings.
A lighting system based on Internet of Things visual capture, as shown in Fig. 1, is formed by connecting multiple lighting devices. Each lighting device comprises a frog-eye bionic visual capture unit, an Internet of Things coordination unit and a lighting-device control unit. The frog-eye bionic visual capture unit is connected with the Internet of Things coordination unit, and the Internet of Things coordination unit is connected with the lighting-device control unit. The multiple Internet of Things coordination units are connected by a wireless multi-hop ad hoc network protocol to form an intelligent coordination hub.
The frog-eye bionic visual capture unit runs in an embedded Linux or Android platform environment. Through the connected Internet of Things coordination unit and the intelligent coordination hub formed by the multiple coordination units, it joins the other frog-eye bionic visual capture units to form a compound-eye visual-capture computing cluster that possesses the frog eye's ability to recognize dynamic objects. A multi-frog-eye collaborative visual capture algorithm captures regularly moving objects against both fixed and randomly moving backgrounds, extracts the size, moving direction, speed and acceleration of the moving objects in the field of view, and passes this information to the Internet of Things coordination unit in real time.
According to the results computed by the frog-eye bionic visual capture unit, the Internet of Things coordination unit sends instructions to each lighting-device control unit through the intelligent coordination hub formed by the multiple coordination units, so as to produce a dynamic lighting-support effect in the direction of travel of the moving object, ensuring that it has a sufficient field of view, sufficient brightness, and sufficient reaction time in an emergency.
The lighting-device control unit adopts a thyristor voltage-regulation and voltage-stabilization circuit; it responds in real time to the instructions sent by the Internet of Things coordination unit, regulates the brightness and lighting duration of a single lighting device, and can report lighting-device faults back to the coordination hub.
A control method for the lighting system based on Internet of Things visual capture comprises:
Step 1: adjust the camera angle of the frog-eye bionic visual capture unit so that it makes an angle of 30–70 degrees with each of the following directions of the near-end road-surface target: the direction opposite to travel, the road-surface direction parallel to travel, the direction perpendicular to travel, and the direction perpendicular to the road surface, and so that the direction of travel on the near-end road surface coincides with the lower-left-to-upper-right diagonal of the captured picture;
Step 2: based on the opencv and ccv open-source computer-vision libraries, build the basic image-processing dependency library in the embedded Linux or Android environment of the frog-eye bionic visual capture unit, including: a camera data acquisition and analysis program, a background-difference algorithm, a Gaussian-background algorithm, a median-filtering algorithm, dilation and erosion algorithms, a down-sampling algorithm and an up-sampling algorithm;
Step 3: the camera of the frog-eye bionic visual capture unit collects video data in real time; the frame-difference method and the Gaussian-background algorithm are used to separate image background from foreground and detect the moving objects in the video, after which erosion, dilation and filtering algorithms remove noise from the images;
Step 4: extract the size, contour, shape, color, motion-track and travel-time features of the labeled target from the image data of the collected real-time video;
Step 5: from the obtained features of the current moving object, compute with a logistic regression model the judgment of whether it is a vehicle or pedestrian, together with the confidence of that judgment;
Step 6: using the imaging law that near objects appear large and far ones small, combined with the camera's pixels and angle, analyze the motion track of the moving object and calculate its speed and direction;
Step 7: the frog-eye bionic visual capture unit judges by calculation whether the moving speed and acceleration of the target image lie within a reasonable range, and combines the target-image feature data captured and computed by the other frog-eye bionic visual capture units over the data network established by the wireless-sensor-network protocol, so as to filter out the signals of interfering objects such as birds, insects, wind-blown litter, tree shadows and pets; the recognition result is then sent to the Internet of Things coordination unit;
Step 8: according to the speed and direction of the identified valid targets, the Internet of Things coordination unit selects the maximum speed in each direction among all targets and sends control signals to one or more lighting-device control units in that direction, so that several lamps along the direction of that maximum speed light up; if valid targets move in both directions of a two-way road, the lighting devices on the two sides of the road light the corresponding number of devices according to the respective speeds.
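Step 3 above (frame differencing followed by erosion and dilation to remove noise) can be sketched as follows. This is a minimal illustration in plain numpy rather than the opencv/ccv libraries named in step 2; the 20×20 synthetic frame pair, the threshold and the kernel size are illustrative assumptions standing in for real camera data.

```python
import numpy as np

def frame_difference(prev, curr, thresh=25):
    """Foreground mask by the frame-difference method: pixels whose
    grey-level change exceeds `thresh` are marked as foreground."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return (diff > thresh).astype(np.uint8)

def erode(mask, k=3):
    """k x k binary erosion: a pixel survives only if its whole
    neighbourhood is foreground (removes isolated noise pixels)."""
    h, w = mask.shape
    pad = np.pad(mask, k // 2)
    out = np.ones_like(mask)
    for dy in range(k):
        for dx in range(k):
            out &= pad[dy:dy + h, dx:dx + w]
    return out

def dilate(mask, k=3):
    """k x k binary dilation: grows the surviving regions back."""
    h, w = mask.shape
    pad = np.pad(mask, k // 2)
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out |= pad[dy:dy + h, dx:dx + w]
    return out

# A static background with one 6x6 "moving object" and one noisy pixel.
prev = np.zeros((20, 20), np.uint8)
curr = prev.copy()
curr[5:11, 5:11] = 200   # moving object
curr[0, 0] = 200         # single-pixel noise
mask = dilate(erode(frame_difference(prev, curr)))
print(mask[0, 0], mask[7, 7])   # 0 1 : noise removed, object kept
```

The erosion-then-dilation order (a morphological opening) is what makes the single noisy pixel disappear while the larger moving object survives at its original size.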
The method of extracting the labeled-target features in step 4 is: label the samples of vehicles and pedestrians in the video as positive examples, and extract the size, contour, shape, color, motion track and travel time of each labeled target. According to the imaging law that near objects appear large and far ones small, let the observed target area be S, the angle between the road surface and the line joining the target and the camera be A, and the angle between that line and the vertical direction of the road surface be B; the formula converting the target size is then S/(sinA × sinB). Extract an equal number of other images of the road surface as negative examples, use the positive examples as training samples, and use manual weight adjustment, or logistic regression, a neural network or a support vector machine as the regression model; the trained recognizer identifies the vehicles and pedestrians in the video.
The regression model adopts logistic regression, whose decision formula is:
hθ(x) = 1 / (1 + e^(−θ^T x))
Taking as the label for hθ(x) whether the marked target is a vehicle or pedestrian, and taking the extracted size, contour, shape, color, motion-track and travel-time features of the labeled target as the sample data x, the model parameters θ are trained; the logistic regression model then outputs the judgment of whether a target is a vehicle or pedestrian.
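Training the logistic regression decision formula above can be sketched with batch gradient descent. Only the model form hθ(x) = 1/(1 + e^(−θ^T x)) comes from the text; the feature values, learning rate and step count below are invented for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, lr=0.1, steps=5000):
    """Fit theta for h_theta(x) = sigmoid(theta . x) by batch
    gradient descent on the logistic log-loss."""
    theta = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (sigmoid(X @ theta) - y) / len(y)
        theta -= lr * grad
    return theta

# Hypothetical samples: [bias, corrected size, contour aspect ratio];
# label 1 = vehicle, 0 = non-vehicle (values invented for illustration).
X = np.array([[1.0, 4.0, 2.5], [1.0, 3.5, 2.2],
              [1.0, 0.6, 0.5], [1.0, 0.5, 0.4]])
y = np.array([1.0, 1.0, 0.0, 0.0])
theta = train_logistic(X, y)
h = sigmoid(X @ theta)          # h_theta(x): confidence of "vehicle"
print((h > 0.5).astype(int))    # [1 1 0 0]
```

The value hθ(x) itself serves as the confidence of the judgment mentioned in step 5: close to 1 means confidently "vehicle/pedestrian", close to 0 confidently not.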
The method of calculating the speed of the moving object in step 6 is: let the measured speed of the vehicle be V and the angle between the road-surface direction and the line joining the vehicle and the camera be A; the converted speed is then V × sinA.
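The two conversion formulas above (target size S/(sinA × sinB) and speed V × sinA) can be checked numerically; the concrete angles and values below are arbitrary examples, not data from the embodiment.

```python
import math

def corrected_size(S, A_deg, B_deg):
    """Apparent area S divided by sinA * sinB (the size-conversion
    formula of step 4), with angles given in degrees."""
    return S / (math.sin(math.radians(A_deg)) * math.sin(math.radians(B_deg)))

def corrected_speed(V, A_deg):
    """Measured speed V scaled by sinA (the speed-conversion
    formula of step 6)."""
    return V * math.sin(math.radians(A_deg))

# sin(30 deg) = 0.5 and sin(90 deg) = 1, so these are easy to verify:
print(round(corrected_size(100.0, 30, 90), 1))  # 200.0
print(round(corrected_speed(10.0, 30), 1))      # 5.0
```

As expected, a shallow viewing angle (small A) inflates the size correction and shrinks the converted speed, which is why the camera mounting angles of step 1 are constrained to 30–70 degrees.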
The Internet of Things coordination unit in step 8 is based on wireless multi-hop ad hoc networking and uses a multi-coordination normalization algorithm together with the other coordination units to form the intelligent coordination hub. The multi-coordination normalization algorithm is as follows: each node in the network no longer merely uploads data and relays instructions; instead, the instructions coming from the frog-eye bionic visual capture units and the Internet of Things coordination units are coordinated and normalized within a 10 ms time window. The node analyzes the redundancy of the instructions about to be sent to each node and, according to the requirement of the farthest control range, outputs a single result packet containing the control information of multiple nodes; this result is sent to the nearest node together with the notice that it is to be forwarded on to the other nodes. This guarantees that within every 10 ms each node receives at most one instruction and sends at most one instruction, so that at most 2n instructions are in transit in the network at any time, where n is the number of network nodes and n is a positive integer.
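One plausible reading of the normalization step is sketched below: all instructions gathered in one 10 ms window are de-duplicated per target node and packed into a single result. The policy of keeping the strongest (highest-intensity) request per node is an assumption for illustration only; the text specifies merely that redundant instructions are analyzed and merged into one packet.

```python
from dataclasses import dataclass

@dataclass
class Instruction:
    node_id: int      # target lighting node
    intensity: float  # requested brightness, 0..1

def normalize_window(pending):
    """Normalize all instructions gathered in one 10 ms window:
    keep one instruction per target node (assumed policy: the
    strongest request wins) and return them as a single packet,
    so each node forwards at most one instruction per window and
    at most 2n instructions are in transit network-wide."""
    merged = {}
    for ins in pending:
        if ins.node_id not in merged or ins.intensity > merged[ins.node_id].intensity:
            merged[ins.node_id] = ins
    return list(merged.values())

# Two requests for node 3 plus one for node 5 collapse to a 2-entry packet.
pending = [Instruction(3, 0.4), Instruction(3, 0.9), Instruction(5, 0.7)]
packet = normalize_window(pending)
print(len(packet))  # 2
```

The packet is then handed to the nearest node for onward forwarding, which is what bounds the per-window instruction count per node to one in each direction.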
The working method of the Internet of Things coordination unit, as shown in Fig. 2, specifically comprises:
Step 1: after an Internet of Things coordination unit starts up, it remains in the energy-saving lighting state and enters a 10 ms waiting window;
Step 2: judge whether an instruction has arrived from another Internet of Things coordination unit, and whether the frog-eye bionic visual capture unit of this lighting device has issued an instruction; if either judgment is yes, perform step 3, otherwise return to step 1;
Step 3: combine the instructions of the multiple Internet of Things coordination units, analyze the direction, region and intensity of the required illumination, and send intensity-control instructions to the devices able to illuminate that direction and region; each lighting device judges whether it lies in the required illumination region: if so, it illuminates at the required intensity and holds it for 3–5 s according to the actual spacing of the lighting devices; if not, it remains in the energy-saving lighting state.
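The three-step working method above can be sketched as a single decision function per waiting window. The standby intensity, the `in_region` flag and the 3 s hold time are illustrative assumptions; combining multiple instructions by taking their maximum intensity is likewise an assumed policy.

```python
ENERGY_SAVING = 0.1  # assumed standby brightness level

def coordination_step(remote_cmds, local_cmds, in_region, hold_s=3.0):
    """One pass of the working method, returning (intensity, hold time).
    With no instruction in the 10 ms window the unit stays in the
    energy-saving state (steps 1-2); otherwise it combines the
    instructions and, if this device lies in the required region,
    illuminates at the required intensity and holds it (step 3)."""
    if not remote_cmds and not local_cmds:
        return ENERGY_SAVING, 0.0             # step 2: no instruction arrived
    required = max(remote_cmds + local_cmds)  # combine multiple instructions
    if in_region:
        return required, hold_s               # illuminate, hold 3-5 s
    return ENERGY_SAVING, 0.0                 # outside the required region

print(coordination_step([], [], True))        # (0.1, 0.0)
print(coordination_step([0.8], [0.6], True))  # (0.8, 3.0)
print(coordination_step([0.8], [], False))    # (0.1, 0.0)
```

Each real unit would run this decision once per 10 ms window, matching the loop of Fig. 2.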
The above are only preferred embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that those skilled in the art could easily conceive within the technical scope disclosed by the present invention shall be encompassed within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.