CN103712560A - Part detection method, system and device based on information fusion of multiple sensors - Google Patents


Info

Publication number
CN103712560A
CN103712560A (application CN201310733416.XA)
Authority
CN
China
Prior art keywords
gray-scale value, image, multiple image, similarity
Prior art date
Legal status
Pending
Application number
CN201310733416.XA
Other languages
Chinese (zh)
Inventor
陈凤
孙巧萍
朱凌云
孙知信
Current Assignee
ZHENJIANG OKAYAMA ELECTRONICS CO Ltd
Original Assignee
ZHENJIANG OKAYAMA ELECTRONICS CO Ltd
Priority date
Filing date
Publication date
Application filed by ZHENJIANG OKAYAMA ELECTRONICS CO Ltd filed Critical ZHENJIANG OKAYAMA ELECTRONICS CO Ltd
Priority to CN201310733416.XA priority Critical patent/CN103712560A/en
Publication of CN103712560A publication Critical patent/CN103712560A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a part detection method, system and device based on the fusion of information from multiple sensors. The method comprises the following steps: first, multiple sensors acquire images of the part under test, forming a corresponding plurality of images; second, the plurality of images is fused into a final fused image; third, the final fused image is matched against a template image, and the pass rate of the parts is judged from the matching result. Because the images acquired by the multiple sensors are fused, the sensors complement one another's strengths during part detection, the characteristics of the parts are reflected more faithfully, and the accuracy and efficiency of part detection are improved.

Description

Part detection method, system and device based on the fusion of information from multiple sensors
Technical field
The present invention relates to the field of part detection, and in particular to a part detection method, system and device based on multi-sensor information fusion.
Background technology
With the development of sensor technology and image-processing technology, part detection based on machine vision is gradually becoming intelligent, digital, miniaturized, networked and multifunctional, offering on-line detection, real-time analysis and real-time control, and has attracted wide attention and application in fields such as the military, industrial inspection and medicine. The machine-vision part detection methods currently applied in production mainly cover two aspects: appearance detection and dimension detection. However, single-sensor machine-vision detection often cannot fully meet application needs, because the traditional single-sensor mode cannot comprehensively describe the characteristic information of the target object in an image. Research and development of part detection systems based on multi-sensor image fusion is therefore gradually becoming a trend in industrial inspection.
Summary of the invention
The technical problem to be solved by the present invention is that the above-mentioned single-sensor mode of the prior art cannot comprehensively describe the characteristic information of the target object in an image. The invention therefore provides a part detection method, system and device based on multi-sensor information fusion, in which multiple sensors complement one another's strengths in detection, the characteristics of mechanical parts are reflected more faithfully, and the accuracy and efficiency of part detection are improved.
To solve the above technical problem, the invention provides a part detection method based on multi-sensor information fusion, characterized by comprising the following steps:
Step (1): acquire images of the part under test with multiple sensors, forming a corresponding plurality of images;
Step (2): fuse the plurality of images to form a final fused image;
Step (3): match the final fused image against a template image, and judge the pass rate of the part under test from the matching result.
Before the plurality of images is fused into the final fused image in step (2), the method further comprises a data pre-processing step: removing information in the plurality of images that is irrelevant or only weakly related to part detection, and removing the noise produced during image acquisition and transmission.
In step (2), one method of fusing the plurality of images into the final fused image comprises: extracting the feature points of the plurality of images, classifying the feature points, and fusing the plurality of images according to the extracted and classified feature points.
In step (2), the plurality of images may also be fused based on a multi-resolution analysis algorithm and/or based on the second-generation curvelet transform.
The method of matching the final fused image against the template image in step (3) further comprises the steps:
S1: set a gray-scale threshold, and segment the final fused image into regions according to that threshold;
S2: match the regions whose gray-scale values are below the threshold with a gray-scale matrix and compute a similarity, and match the regions whose gray-scale values are above the threshold by extracting feature values and compute a similarity;
S3: weight the similarity obtained from the gray-scale matrix and the similarity obtained from feature extraction to obtain a final similarity, and output the detection result.
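The segmentation in step S1 can be sketched as follows. This is an illustrative toy, not the patented implementation: the function name, the pixel-coordinate representation, and the example threshold of 128 are assumptions.

```python
def segment_by_threshold(image, threshold):
    """S1: split pixel coordinates of a gray-scale image into
    below-threshold (dark) and above-threshold (bright) regions."""
    dark, bright = [], []
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            (dark if value < threshold else bright).append((y, x))
    return dark, bright

# A 2x3 toy "fused image"; in S2 the dark pixels would go to the
# gray-scale-matrix matcher and the bright pixels to the feature matcher.
image = [
    [10, 20, 200],
    [30, 220, 240],
]
dark, bright = segment_by_threshold(image, threshold=128)
print(len(dark), len(bright))  # 3 3
```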
In step S2, the method of matching the below-threshold regions with a gray-scale matrix further comprises the steps:
S21: divide the region whose gray-scale values are below the threshold into a plurality of sub-regions;
S22: sample the gray-scale values at fixed intervals within each sub-region;
S23: express the sampled gray-scale values in matrix form, take the variance of the matrix elements, and compute the similarity with the template image.
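Steps S21 to S23 can be sketched as below. The sampling stride and the rule mapping the variance difference to a similarity in (0, 1] are assumptions for illustration; the patent does not fix them.

```python
from statistics import pvariance

def sample_grid(region, step):
    """S21-S22: keep every `step`-th row and column as the sample matrix."""
    return [row[::step] for row in region[::step]]

def variance_similarity(region, template, step=2):
    """S23: compare the variances of the two sample matrices. Mapping the
    variance difference to a similarity this way is an assumed rule."""
    v_region = pvariance([v for row in sample_grid(region, step) for v in row])
    v_template = pvariance([v for row in sample_grid(template, step) for v in row])
    return 1.0 / (1.0 + abs(v_region - v_template))

region = [[10, 12, 11, 13],
          [11, 10, 12, 11],
          [12, 11, 10, 12],
          [13, 12, 11, 10]]
print(variance_similarity(region, region))  # 1.0 for an identical template
```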
The feature points comprise: point features, edge features, shape features and/or region features.
The multiple sensors at least comprise an infrared sensor and a visible-light sensor.
The present invention also provides a part detection system based on multi-sensor information fusion, characterized by comprising:
an image acquisition module for acquiring images of the part under test with multiple sensors, forming a corresponding plurality of images;
an image fusion module for fusing the plurality of images into a final fused image;
a matching module for matching the final fused image against a template image and judging the pass rate of the part under test from the matching result.
The system also comprises an image pre-processing module for removing information in the plurality of images that is irrelevant or only weakly related to part detection, and removing the noise produced during image acquisition and transmission.
The system also comprises a feature extraction module for extracting the feature points of the plurality of images and classifying them; the image fusion module fuses the plurality of images according to the extracted and classified feature points.
The image fusion module fuses the plurality of images based on a multi-resolution analysis algorithm and/or based on the second-generation curvelet transform.
The matching module is used to:
set a gray-scale threshold, and segment the final fused image into regions according to that threshold;
match the regions whose gray-scale values are below the threshold with a gray-scale matrix and compute a similarity, and match the regions whose gray-scale values are above the threshold by extracting feature values and compute a similarity;
weight the similarity obtained from the gray-scale matrix and the similarity obtained from feature extraction to obtain a final similarity, and output the detection result.
The matching module is also used to:
divide the region whose gray-scale values are below the threshold into a plurality of sub-regions;
sample the gray-scale values at fixed intervals within each sub-region;
express the sampled gray-scale values in matrix form, take the variance of the matrix elements, and compute the similarity with the template image.
The feature points comprise: point features, edge features, shape features and/or region features.
The multiple sensors at least comprise an infrared sensor and a visible-light sensor.
The present invention also provides a part detection device based on multi-sensor information fusion, characterized by comprising the above part detection system based on multi-sensor information fusion.
The present invention has the following beneficial effects. In the part detection method based on multi-sensor information fusion, multiple sensors acquire images of the part under test, forming a corresponding plurality of images; the plurality of images is fused into a final fused image; the final fused image is then matched against a template image, and the pass rate of the part is judged from the matching result. In this way the invention acquires images of the part with multiple sensors and fuses the acquired images, so that the sensors complement one another's strengths during part detection, the characteristics of the part are reflected more faithfully, and the accuracy and efficiency of part detection are improved.
Brief description of the drawings
The invention is further described below with reference to the drawings and embodiments, in which:
Fig. 1 is a flow chart of an embodiment of the part detection method based on multi-sensor information fusion of the present invention;
Fig. 2 is a flow chart of image feature extraction, fusion and matching in the present invention;
Fig. 3 is a flow chart of the image fusion method based on the second-generation curvelet transform of the present invention;
Fig. 4 is a flow chart of the method of matching the final fused image against the template image in the present invention;
Fig. 5 is a flow chart of the method of matching the below-threshold regions with a gray-scale matrix in the present invention;
Fig. 6 is a structural diagram of an embodiment of the part detection system based on multi-sensor information fusion of the present invention.
Detailed description of embodiments
The present invention is described in detail below with reference to the drawings and embodiments.
Referring to Figs. 1 to 6: Fig. 1 is a flow chart of an embodiment of the part detection method based on multi-sensor information fusion, and Fig. 6 is a structural diagram of an embodiment of the corresponding part detection system. The part detection system based on multi-sensor information fusion comprises an image acquisition module 601, an image fusion module 604 and a matching module 605, and further comprises a pre-processing module 602 and a feature extraction module 603.
The part detection method based on multi-sensor information fusion of the present invention comprises the following steps.
Step (1): acquire images of the part under test with multiple sensors, forming a corresponding plurality of images.
The sensors obtain images of the part under test through the image acquisition module 601. The multiple sensors are at least two sensors of different types that acquire image information of the part under test. In the present embodiment, an infrared sensor and a visible-light sensor are taken as the example. The infrared sensor detects the infrared radiation of the object and produces a real-time thermal image, converting radiation invisible to the human eye into a visible image, from which point features, edge features, shape features, region features and other information of the part under test are obtained. The visible-light sensor obtains an image of the object from the light its surface reflects, yielding the same kinds of feature information. In this step, the infrared sensor and the visible-light sensor respectively acquire an infrared image and a visible-light image of the part under test, forming corresponding gray-scale images.
Step (2): fuse the plurality of images to form a final fused image.
Specifically, taking the infrared image and visible-light image collected in step (1) as the example, the image fusion module 604 fuses the infrared image and the visible-light image into the final fused image. The part detection system further comprises a pre-processing module 602 and a feature extraction module 603. Before fusion, the image pre-processing module 602 removes from the infrared and visible-light images the information that is irrelevant or only weakly related to the detection, and removes the noise produced during image acquisition and transmission, so that the information of interest is retained. The pre-processing module 602 also performs operations such as geometric transformation, image smoothing and image enhancement. Thereafter, as shown in Fig. 2, the feature extraction module 603 extracts point features, edge features, shape features, region features and the like, and classifies the feature points of the infrared image as feature 1, feature 2, ..., feature n; the feature points of the visible-light image are likewise divided into feature 1, feature 2, ..., feature n.
By comparing the different feature points, the image fusion module 604 merges the feature points that the infrared image and the visible-light image have in common: feature 1 of the infrared image is merged with feature 1 of the visible-light image, feature 2 with feature 2, and so on, forming a final fused image that comprises the fused feature 1, feature 2, ..., feature n.
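The class-by-class merging described above can be sketched as follows. The dictionary representation and the rule of averaging the coordinates of same-class feature points are illustrative assumptions; the patent does not fix a merge rule.

```python
def fuse_features(ir_features, vis_features):
    """Merge same-class feature points (feature 1 with feature 1, and so on).
    Averaging the coordinates is an assumed merge rule."""
    fused = {}
    for label in sorted(ir_features.keys() & vis_features.keys()):
        ir_pt, vis_pt = ir_features[label], vis_features[label]
        fused[label] = tuple((a + b) / 2 for a, b in zip(ir_pt, vis_pt))
    return fused

ir = {"feature 1": (10.0, 20.0), "feature 2": (5.0, 5.0)}
vis = {"feature 1": (12.0, 18.0), "feature 2": (7.0, 3.0)}
fused = fuse_features(ir, vis)
print(fused["feature 1"])  # (11.0, 19.0)
```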
Further, the image fusion method may also use a multi-resolution analysis (MRA) algorithm. Specifically, the infrared image and the visible-light image are first decomposed by resolution and represented by the coefficients obtained from each decomposition; the two coefficient representations are then fused according to a fusion rule into a single fused coefficient representation; finally, the fused image is reconstructed by the inverse transform, so that the fused image is more complementary and intelligible.
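The decompose-fuse-reconstruct pipeline above can be illustrated with a minimal one-level decomposition in one dimension. The pairwise mean/difference decomposition and the fusion rule (average the low-pass coefficients, keep the stronger high-pass coefficient) are simplifying assumptions standing in for a full MRA pyramid.

```python
def decompose(signal):
    """One-level decomposition: pairwise means (low-pass) and halves of
    pairwise differences (high-pass). Assumes an even-length signal."""
    low = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    high = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return low, high

def reconstruct(low, high):
    """Exact inverse of decompose: a = l + h, b = l - h."""
    out = []
    for l, h in zip(low, high):
        out += [l + h, l - h]
    return out

def fuse(signal_a, signal_b):
    low_a, high_a = decompose(signal_a)
    low_b, high_b = decompose(signal_b)
    low = [(x + y) / 2 for x, y in zip(low_a, low_b)]  # average coarse structure
    high = [x if abs(x) >= abs(y) else y
            for x, y in zip(high_a, high_b)]           # keep the strongest detail
    return reconstruct(low, high)

print(fuse([10, 10, 20, 20], [0, 4, 8, 12]))  # [4.0, 8.0, 13.0, 17.0]
```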
As shown in Fig. 3, the image fusion method may also apply the second-generation curvelet transform (SGCT). Specifically, the curvelet transform of the two images yields curvelet coefficients over different frequency bands, giving the low-frequency and high-frequency components of the infrared and visible-light images respectively. The low-frequency components are fused by a weighted average; the high-frequency components are fused according to a region-matching rule; the fused low- and high-frequency components are then passed through the inverse curvelet transform to reconstruct the final fused image.
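The region-matching rule for the high-frequency components can be sketched as below: for each region, keep the coefficients from whichever source has the greater regional energy. The block size and the energy measure are illustrative assumptions, and a real SGCT implementation would operate on curvelet coefficients rather than the raw arrays used here.

```python
def regional_energy(block):
    """Sum of squared coefficients, used as the region activity measure."""
    return sum(v * v for row in block for v in row)

def region_match(high_a, high_b, size=2):
    """For each size x size block, keep the high-frequency coefficients
    from whichever source has the greater regional energy."""
    out = [row[:] for row in high_a]
    for r in range(0, len(high_a), size):
        for c in range(0, len(high_a[0]), size):
            block_a = [row[c:c + size] for row in high_a[r:r + size]]
            block_b = [row[c:c + size] for row in high_b[r:r + size]]
            if regional_energy(block_b) > regional_energy(block_a):
                for i, row in enumerate(block_b):
                    out[r + i][c:c + len(row)] = row
    return out

high_ir = [[1, 1, 0, 0], [1, 1, 0, 0]]
high_vis = [[0, 0, 3, 3], [0, 0, 3, 3]]
print(region_match(high_ir, high_vis))  # [[1, 1, 3, 3], [1, 1, 3, 3]]
```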
Step (3): match the final fused image against a template image, and judge the pass rate of the part from the matching result.
As shown in Fig. 4, the method of matching the final fused image against the template image comprises:
S1: set a gray-scale threshold, and segment the final fused image into regions according to that threshold;
S2: match the regions whose gray-scale values are below the threshold with a gray-scale matrix and compute a similarity, and match the regions whose gray-scale values are above the threshold by extracting feature values and compute a similarity;
S3: weight the similarity obtained from the gray-scale matrix and the similarity obtained from feature extraction to obtain a final similarity, and output the detection result.
Specifically, the matching module 605 receives the final fused image and segments it: using the gray-scale threshold preset by the system, it partitions the final fused image into the regions below the threshold and the regions above it. For the regions below the threshold, the matching module 605 matches in gray-scale matrix form.
As shown in Fig. 5, the method of matching the below-threshold regions with a gray-scale matrix comprises:
S21: divide the region whose gray-scale values are below the threshold into a plurality of sub-regions;
S22: sample the gray-scale values at fixed intervals within each sub-region;
S23: express the sampled gray-scale values in matrix form, take the variance of the matrix elements, and compute the similarity with the template image.
Concretely, the whole image is treated as a two-dimensional region and divided into a plurality of sub-regions; within each sub-region the gray-scale values are sampled at fixed intervals; the samples are expressed in matrix form, and the variance of the matrix elements is taken to compute the similarity with the template image. For the regions whose gray-scale values are above the threshold, the matching module 605 matches with feature values, establishing a feature-value matching relation between the final fused image and the template image; commonly used feature values are point features, edge features, shape features and region features. Finally, the similarity obtained from the gray-scale matrix and the similarity obtained from feature extraction are given different weights to compute the final image similarity, and the detection result is output.
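The final weighted decision can be sketched as follows. The tolerance-based feature metric, the weights, and the 0.9 pass threshold are all assumptions for illustration; the patent only specifies that the two similarities are weighted and combined.

```python
def feature_similarity(values, template_values):
    """Fraction of feature values within 10% of the template's value.
    This tolerance metric is an assumption, not the patented one."""
    hits = sum(abs(a - b) <= 0.1 * abs(b)
               for a, b in zip(values, template_values))
    return hits / len(template_values)

def detect(sim_gray, sim_feature, w_gray=0.4, w_feature=0.6, pass_threshold=0.9):
    """Weight the two similarities and compare with a pass threshold."""
    final = w_gray * sim_gray + w_feature * sim_feature
    return "qualified" if final >= pass_threshold else "defective"

print(feature_similarity([1.0, 2.0, 3.9], [1.0, 2.0, 3.0]))  # 2 of 3 within tolerance
print(detect(0.95, 0.92))  # qualified
```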
In the present embodiment, the infrared and visible-light sensors each have advantages and shortcomings in imaging that stem from their physical characteristics. The infrared image has low spatial resolution, suffers from serious mixing, and easily loses high-frequency detail; the visible-light image contains rich spectral information and abundant high-frequency detail, but its low-frequency information is weaker. By acquiring images of the mechanical part with both the infrared sensor and the visible-light sensor and fusing the acquired images, the shortcomings of a single sensor are made up and the two complement each other. Further, combining feature-point matching with gray-scale matching improves the accuracy and efficiency of image matching, making part detection convenient, fast and accurate.
The present embodiment takes the infrared sensor and the visible-light sensor only as an example. Understandably, the multiple sensors of the invention are at least two sensors of different types and are not limited to an infrared sensor and a visible-light sensor, which is not repeated here.
Referring to Fig. 6, in an embodiment of the part detection system based on multi-sensor information fusion of the present invention, the system comprises an image acquisition module 601, an image fusion module 604 and a matching module 605, and further comprises a pre-processing module 602 and a feature extraction module 603.
The image acquisition module 601 acquires images of the part under test, forming a corresponding plurality of images; the image fusion module 604 fuses the pre-processed plurality of images according to the extracted and classified features to form the final fused image; the matching module 605 matches the final fused image against the template image and judges the pass rate of the part from the matching result.
In one embodiment of the invention, the system also comprises the image pre-processing module 602 and the feature extraction module 603. Before fusion, the pre-processing module 602 removes from the infrared and visible-light images the information that is irrelevant or only weakly related to the detection, and removes the noise produced during image acquisition and transmission, so that the information of interest is retained; it also performs operations such as geometric transformation, image smoothing and image enhancement.
The feature extraction module 603 extracts point features, edge features, shape features, region features and the like, classifies the feature points of the infrared image as feature 1, feature 2, ..., feature n, and likewise divides the feature points of the visible-light image into feature 1, feature 2, ..., feature n. By comparing the different feature points, the image fusion module 604 merges the feature points that the infrared image and the visible-light image have in common (feature 1 with feature 1, feature 2 with feature 2), forming a final fused image that comprises the fused feature 1, feature 2, ..., feature n.
Further, the image fusion method may also use a multi-resolution analysis (MRA) algorithm: the infrared image and the visible-light image are first decomposed by resolution and represented by the coefficients obtained from each decomposition; the two coefficient representations are fused according to a fusion rule into a single fused coefficient representation; finally, the fused image is reconstructed by the inverse transform, so that the fused image is more complementary and intelligible.
As shown in Fig. 3, the image fusion method may also apply the second-generation curvelet transform (SGCT): the curvelet transform of the two images yields curvelet coefficients over different frequency bands, giving the low-frequency and high-frequency components of the infrared and visible-light images; the low-frequency components are fused by a weighted average, the high-frequency components are fused according to a region-matching rule, and the fused components are passed through the inverse curvelet transform to reconstruct the final fused image.
The matching module 605 receives the final fused image and segments it into regions: using the gray-scale threshold preset by the system, it partitions the image into the regions below the threshold and the regions above it. The below-threshold regions are matched with a gray-scale matrix and a similarity is computed; the above-threshold regions are matched by extracting feature values and a similarity is computed; the two similarities are weighted to obtain the final similarity, and the detection result is output. Concretely, for the below-threshold regions the whole image is treated as a two-dimensional region and divided into sub-regions; within each sub-region the gray-scale values are sampled at fixed intervals, the samples are expressed in matrix form, and the variance of the matrix elements is taken to compute the similarity with the template image. For the above-threshold regions, the matching module 605 establishes a feature-value matching relation between the final fused image and the template image; commonly used feature values are point features, edge features, shape features and region features. Finally, the two similarities are given different weights, the final image similarity is computed, and the detection result is output.
The present invention also provides an embodiment of an electronic device, wherein the electronic device comprises the part detection system based on multi-sensor information fusion of the above embodiments.
The foregoing is only an embodiment of the present invention and does not thereby limit the scope of its claims. Any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present invention, or any direct or indirect use in other related technical fields, is likewise included within the patent protection scope of the present invention.

Claims (17)

1. A part detection method based on multi-sensor information fusion, characterized by comprising the steps:
Step (1): acquire images of the part under test with multiple sensors, forming a corresponding plurality of images;
Step (2): fuse the plurality of images to form a final fused image;
Step (3): match the final fused image against a template image, and judge the pass rate of the part under test from the matching result.
2. The part detection method based on multi-sensor information fusion according to claim 1, characterized in that, before the plurality of images is fused into the final fused image in step (2), the method further comprises a data pre-processing step: removing information in the plurality of images that is irrelevant or only weakly related to part detection, and removing the noise produced during image acquisition and transmission.
3. The part detection method based on multi-sensor information fusion according to claim 1, characterized in that the method of fusing the plurality of images into the final fused image in step (2) comprises: extracting the feature points of the plurality of images, classifying the feature points, and fusing the plurality of images according to the extracted and classified feature points.
4. The part detection method based on multi-sensor information fusion according to claim 1, characterized in that the method of fusing the plurality of images into the final fused image in step (2) comprises: fusing the plurality of images based on a multi-resolution analysis algorithm and/or based on the second-generation curvelet transform.
5. The part detection method based on multi-sensor information fusion according to claim 1, characterized in that the method of matching the final fused image against the template image in step (3) further comprises the steps:
S1: set a gray-scale threshold, and segment the final fused image into regions according to that threshold;
S2: match the regions whose gray-scale values are below the threshold with a gray-scale matrix and compute a similarity, and match the regions whose gray-scale values are above the threshold by extracting feature values and compute a similarity;
S3: weight the similarity obtained from the gray-scale matrix and the similarity obtained from feature extraction to obtain a final similarity, and output the detection result.
6. The part detection method based on multi-sensor information fusion according to claim 5, characterized in that, in step S2, the method of matching the below-threshold regions with a gray-scale matrix further comprises the steps:
S21: divide the region whose gray-scale values are below the threshold into a plurality of sub-regions;
S22: sample the gray-scale values at fixed intervals within each sub-region;
S23: express the sampled gray-scale values in matrix form, take the variance of the matrix elements, and compute the similarity with the template image.
7. The part detection method based on multi-sensor information fusion according to claim 3, characterized in that the feature points comprise: point features, edge features, shape features and/or region features.
8. The part detection method based on multi-sensor information fusion according to any one of claims 1-7, characterized in that the multiple sensors at least comprise an infrared sensor and a visible-light sensor.
9. A part detection system based on multi-sensor information fusion, characterized by comprising:
an image acquisition module, configured to acquire images of the part under test using a plurality of sensors, forming a corresponding plurality of images;
an image fusion module, configured to fuse the plurality of images into a final fused image;
a matching module, configured to match the final fused image with a template image and to judge the pass rate of the part under test according to the matching result.
10. The part detection system based on multi-sensor information fusion according to claim 9, characterized by further comprising an image preprocessing module, the preprocessing module being configured to remove from the plurality of images information that is irrelevant or only weakly related to part detection, and to remove noise introduced during image acquisition and transmission.
11. The part detection system based on multi-sensor information fusion according to claim 9, characterized by further comprising a feature extraction module, the feature extraction module being configured to extract feature points from the plurality of images and to classify the feature points;
the image fusion module fuses the plurality of images according to the extracted classified feature points.
12. The part detection system based on multi-sensor information fusion according to claim 9, characterized in that the image fusion module performs multi-image fusion based on a multi-resolution analysis algorithm and/or based on the second-generation curvelet transform.
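As a rough illustration of multi-resolution fusion in the spirit of claim 12, the sketch below splits each image into a low-frequency base and a high-frequency detail layer, averages the bases, and keeps the stronger detail at each pixel. A real implementation would use a full multi-resolution decomposition or the second-generation curvelet transform; the one-level box-filter split and the max-absolute selection rule here are simplifying assumptions.

```python
import numpy as np

def multires_fuse(img_a, img_b, ksize=5):
    """One-level multi-resolution fusion sketch: low frequencies averaged,
    high frequencies chosen by larger magnitude (preserving stronger edges,
    e.g. from the infrared or the visible-light image respectively)."""
    def box_blur(img, k):
        # Separable box filter with edge padding, output same size as input
        pad = k // 2
        p = np.pad(img.astype(float), pad, mode='edge')
        kernel = np.ones(k) / k
        p = np.apply_along_axis(lambda r: np.convolve(r, kernel, 'valid'), 1, p)
        p = np.apply_along_axis(lambda c: np.convolve(c, kernel, 'valid'), 0, p)
        return p
    base_a, base_b = box_blur(img_a, ksize), box_blur(img_b, ksize)
    det_a, det_b = img_a - base_a, img_b - base_b
    base = (base_a + base_b) / 2.0                                   # fuse low frequencies
    detail = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)  # keep stronger detail
    return np.clip(base + detail, 0, 255)
```

Fusing an image with itself reconstructs it exactly (base plus its own detail), which is a quick sanity check that the decomposition loses no information.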
13. The part detection system based on multi-sensor information fusion according to claim 9, characterized in that the matching module is configured to:
set a gray-scale threshold and segment the final fused image into regions according to the threshold;
match the regions whose gray-scale values are below the threshold using a gray matrix and compute a similarity, and match the regions whose gray-scale values are above the threshold by extracting feature values and computing a similarity;
weight the similarity obtained from the gray matrix and the similarity obtained from feature extraction to derive a final similarity, and output the detection result.
14. The part detection system based on multi-sensor information fusion according to claim 13, characterized in that the matching module is further configured to:
divide the region whose gray-scale values are below the threshold into a plurality of sub-regions;
sample the gray-scale values at a fixed interval within each sub-region;
express the sampled gray-scale values in matrix form, and compute the similarity with the template image by taking the variance of the matrix elements.
15. The part detection system based on multi-sensor information fusion according to claim 11, characterized in that the feature points comprise: point features, edge features, shape features and/or region features.
16. The part detection system based on multi-sensor information fusion according to any one of claims 9-15, characterized in that the plurality of sensors comprises at least an infrared sensor and a visible-light sensor.
17. A part detection device based on multi-sensor information fusion, characterized by comprising the part detection system based on multi-sensor information fusion according to any one of claims 9-16.
CN201310733416.XA 2013-12-27 2013-12-27 Part detection method, system and device based on information fusion of multiple sensors Pending CN103712560A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310733416.XA CN103712560A (en) 2013-12-27 2013-12-27 Part detection method, system and device based on information fusion of multiple sensors


Publications (1)

Publication Number Publication Date
CN103712560A true CN103712560A (en) 2014-04-09

Family

ID=50405723

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310733416.XA Pending CN103712560A (en) 2013-12-27 2013-12-27 Part detection method, system and device based on information fusion of multiple sensors

Country Status (1)

Country Link
CN (1) CN103712560A (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6889171B2 (en) * 2002-03-21 2005-05-03 Ford Global Technologies, Llc Sensor fusion system architecture
CN101110101A (en) * 2006-07-17 2008-01-23 松下电器产业株式会社 Method for recognizing picture pattern and equipment thereof
CN1975759A (en) * 2006-12-15 2007-06-06 中山大学 Human face identifying method based on structural principal element analysis

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
孙明超 (SUN Mingchao): "Research on Fusion Technology of Visible-Light and Infrared Reconnaissance Images", China Doctoral Dissertations Full-text Database, Information Science & Technology *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106204528A (en) * 2016-06-27 2016-12-07 重庆理工大学 A kind of size detecting method of part geometry quality
CN106327483A (en) * 2016-08-12 2017-01-11 广州视源电子科技股份有限公司 Detection equipment logo attaching method, system and device
CN107339937A (en) * 2017-07-10 2017-11-10 大连理工大学 A kind of mechanism kinematic parameter test device of Multi-sensor Fusion
CN107578433A (en) * 2017-08-17 2018-01-12 中南大学 A kind of method for identifying electrolytic bath electrode plate temperature
CN107578433B (en) * 2017-08-17 2020-04-21 中南大学 Method for identifying temperature of electrode plate of electrolytic cell
CN108846821A (en) * 2018-04-27 2018-11-20 电子科技大学 A kind of cell neural network thermal region fusion method based on wavelet transformation
CN108846821B (en) * 2018-04-27 2022-01-11 电子科技大学 Wavelet transform-based cellular neural network hot region fusion method
CN110930311A (en) * 2018-09-19 2020-03-27 杭州萤石软件有限公司 Method and device for improving signal-to-noise ratio of infrared image and visible light image fusion
CN110930311B (en) * 2018-09-19 2023-04-25 杭州萤石软件有限公司 Method and device for improving signal-to-noise ratio of infrared image and visible light image fusion
CN111627022A (en) * 2020-07-20 2020-09-04 宁波格劳博机器人有限公司 Naked electric core off-line AC Overhang measuring machine of lamination
CN113129304A (en) * 2021-05-18 2021-07-16 郑州轻工业大学 Part detection method based on machine vision

Similar Documents

Publication Publication Date Title
CN103712560A (en) Part detection method, system and device based on information fusion of multiple sensors
Frei et al. Intrinsic time-scale decomposition: time–frequency–energy analysis and real-time filtering of non-stationary signals
Yang et al. Defect detection in magnetic tile images based on stationary wavelet transform
CN102254314B (en) Visible-light/infrared image fusion method based on compressed sensing
CN103295201B (en) A kind of Multisensor Image Fusion Scheme based on NSST territory IICM
CN102306381B (en) Method for fusing images based on beamlet and wavelet transform
CN103440035A (en) Gesture recognition system in three-dimensional space and recognition method thereof
CN103279935A (en) Method and system of thermal infrared remote sensing image super-resolution reconstruction based on MAP algorithm
CN111339948B (en) Automatic identification method for newly-added buildings of high-resolution remote sensing images
CN105225216A (en) Based on the Iris preprocessing algorithm of space apart from circle mark rim detection
CN102901444A (en) Method for detecting component size based on matching pursuit (MP) wavelet filtering and detecting system thereof
CN104392444A (en) Method of extracting characteristics of medical MR (magnetic resonance) images based on ensemble empirical mode decomposition
CN109766838A (en) A kind of gait cycle detecting method based on convolutional neural networks
CN106508048B (en) A kind of similar scale image interfusion method based on multiple dimensioned primitive form
CN102521795B (en) Cross matching fingerprint image scaling method based on global ridge distance
Li et al. Study on significance enhancement algorithm of abnormal features of urban road ground penetrating radar images
CN102542278B (en) Adaptive characteristic point extraction and image matching based on discrete wavelet transformation (DWT)
CN103679726A (en) Method for improving imaging quality of rock debris image
CN103268476B (en) A kind of Remote Sensing Target monitoring method
Kang et al. Image registration based on harris corner and mutual information
Ngau et al. Bottom-up visual saliency map using wavelet transform domain
CN110473145B (en) Detection method and device for passive terahertz image fusion ultrahigh resolution reconstruction
Zhang et al. Adaptive Harris corner detection algorithm based on B-spline function
Qingqing et al. Improved fusion method for infrared and visible remote sensing imagery using NSCT
CN104299223A (en) Different-source image matching method based on Gabor coding

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140409