CN110781720A - Object identification method based on image processing and multi-sensor fusion - Google Patents
- CN110781720A CN110781720A CN201910837809.2A CN201910837809A CN110781720A CN 110781720 A CN110781720 A CN 110781720A CN 201910837809 A CN201910837809 A CN 201910837809A CN 110781720 A CN110781720 A CN 110781720A
- Authority
- CN
- China
- Prior art keywords: image, laser, coordinate, camera, point cloud
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06V20/10—Terrestrial scenes
- G06F18/25—Fusion techniques
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T5/80—Geometric correction
- G06T2207/10024—Color image
- G06T2207/10032—Satellite or aerial image; Remote sensing
- G06T2207/10044—Radar image
- G06T2207/20221—Image fusion; Image merging
- G06V2201/07—Target detection
- Y04S10/50—Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications
Abstract
The invention discloses an object identification method based on image processing and multi-sensor fusion, comprising the following steps. S1: acquire a three-channel RGB color image and a one-channel multi-line laser ranging image. S2: project the camera optical coordinates of the RGB color image into laser point cloud coordinates, and project the laser point cloud coordinates into 360-degree annular panoramic coordinates. S3: using a deep-learning image recognition technique, select detection frames for pre-trained targets to obtain a target detection bounding-box distribution image and an object class distribution image. The method is simple and offers high real-time performance: it fuses six image channels from multiple sensors, adding two target-detection channels (the object class distribution image and the bounding-box distribution image) to the conventional four-channel RGBD image, and thereby provides an accurate image-processing basis for fast and precise target localization.
Description
Technical Field
The invention relates to the field of image processing, and in particular to an object identification method based on image processing and multi-sensor fusion.
Background
A transformer substation is a key link in the stable operation of a power system, and important transformer equipment is distributed throughout it. Although existing substations are generally closed areas that outside personnel cannot enter, the behavior of workers inside the substation is difficult to control effectively. Some areas of a substation are hard to monitor accurately with existing cameras, which also have blind spots. More importantly, transformer equipment comes in many types with no fixed structure; compared with line work, the maintenance environment is more complex and the maintenance process more involved. If workers' positions are not monitored, negligence or other causes can easily lead them into a live interval (touching live equipment by mistake) and cause safety accidents. Workers entering the substation therefore need to be identified accurately to ensure the safety and stability of the substation and, in turn, the stable operation of the whole power grid.
At present, substation operation and maintenance personnel are mostly located using cameras and laser radar. Either personnel are identified in the captured images by manual inspection, which is time-consuming and laborious, or they are located by an image processing method. The latter faces the problem that the camera and the laser radar both use spherical lenses whose coordinate scales are completely different and whose barrel-distortion directions differ. Conventional image processing first converts the barrel-distorted image into an undistorted one, but this is computationally complex and has poor real-time performance. Moreover, single-pass person localization, tracking, and target distance detection over the whole image is a huge workload, with poor localization results and low ranging accuracy.
Therefore, it is desirable to provide a novel object positioning method based on image processing to solve the above problems.
Disclosure of Invention
The invention aims to solve the technical problem of providing an object identification method based on image processing and multi-sensor fusion, which can quickly and accurately realize object identification.
To solve the above technical problems, the invention adopts the following technical scheme: an object recognition method based on image processing and multi-sensor fusion, comprising the following steps:
s1: acquiring a three-channel RGB color image and a one-channel multi-line laser ranging image;
s2: projecting the optical coordinate of a camera of the RGB color image into a laser point cloud coordinate, and projecting the laser point cloud coordinate into a 360-degree annular panoramic coordinate;
s3: and selecting a target detection frame aiming at a target trained in advance by utilizing an image recognition technology of deep learning to obtain a target detection boundary frame distribution image and an object class distribution image.
In a preferred embodiment of the present invention, in step S1, the three-channel RGB color image is obtained from an original image of a camera, and the one-channel multiline laser ranging image is obtained by obtaining laser point cloud information and then generating an independent image layer.
In a preferred embodiment of the present invention, in step S2, the step of projecting the camera coordinates into laser point cloud coordinates includes:
s201: creating a 3D temporary map, wherein the map coordinates are laser coordinates, and the map size is the width and height of a single camera's image after conversion into laser point cloud coordinates;
s202: calculating the laser coordinate of the next pixel of the map;
s203: judging whether the next pixel is the end pixel of the map, if not, repeating the step S202, and if so, performing the next step;
s204: and combining the eight camera maps, and splicing to generate a 360-degree panorama under the laser coordinate.
Further, the specific calculation process of step S202 includes:
firstly, converting laser coordinates into camera lens coordinates, then converting the lens coordinates into camera pixel coordinates, and finally reading corresponding camera pixels into a map.
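A minimal Python sketch of this per-pixel fill (steps S201-S203), assuming a `laser_to_pixel` callback that stands in for the laser-to-lens-to-pixel conversion detailed later; the callback and image shapes here are illustrative placeholders, not the patent's actual calibration:

```python
import numpy as np

def fill_camera_map(cam_image, laser_to_pixel, map_h, map_w):
    """Walk every cell of the laser-coordinate map and copy the camera
    pixel it maps to (steps S201-S203). Cells whose source falls outside
    the camera image are left black."""
    out = np.zeros((map_h, map_w, 3), dtype=cam_image.dtype)
    for v in range(map_h):
        for u in range(map_w):
            x, y = laser_to_pixel(u, v)  # laser -> lens -> pixel (placeholder)
            if 0 <= x < cam_image.shape[1] and 0 <= y < cam_image.shape[0]:
                out[v, u] = cam_image[y, x]
    return out
```

Eight such maps, one per camera, would then be concatenated side by side to form the 360-degree panorama of step S204.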
In a preferred embodiment of the present invention, in step S2, the step of projecting the laser point cloud coordinates into annular panoramic coordinates includes:
s211: creating a 1920 x 1080 laser dot matrix layer whose left and right edge angles span 0-360 degrees and whose upper and lower edge angles span -15 to +15 degrees, uniformly spread and stretched in both directions;
s212: reading a column of laser dot matrix storage area data;
s213: calculating the pixel angle of the printed image;
s214: calculating pixel positions, and assigning corresponding data to a printed layer;
s215: and judging whether the currently read laser dot matrix storage area data is the end of the data, if not, repeating the steps S212-S214, and if so, ending the image generation.
In a preferred embodiment of the present invention, in step S3 the pre-trained targets include target personnel, work clothes, and safety helmets.
The invention has the beneficial effects that:
(1) the method realizes the mapping fusion of six-channel images based on multiple sensors, increases the object class distribution image and the target detection boundary frame distribution image of two channels from target detection on the basis of the traditional RGBD four-channel image, and provides an accurate image processing basis for realizing the rapid and accurate target object positioning;
(2) in the conversion from single-lens optical coordinates to laser coordinates and then to 360-degree annular panoramic coordinates, a curved-surface-to-curved-surface conversion projects the curved image directly onto the curved laser coordinates; no conversion from a distorted image to an undistorted image is needed, so real-time performance is higher;
(3) unlike the traditional laser radar target segmentation and tracking method, this method segments and tracks targets with deep learning and then projects the laser points into the deep-learning detection frames, obtaining not only positioning information but also object class information, which is highly practical. In most settings the laser point cloud is sparse while the image is dense, so point cloud acquisition is expensive and image acquisition relatively cheap; performing target segmentation and tracking in the image therefore greatly reduces the dependence on the laser point cloud and lowers cost.
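Point (3), projecting laser points into a deep-learning detection frame to recover per-object distance, can be sketched as follows; the (u, v, distance) point layout and the median reduction are assumptions for illustration, not specified by the text:

```python
import numpy as np

def box_distance(points_uvd, box):
    """Median range of the laser points whose panorama pixel (u, v)
    falls inside the detection box (u1, v1, u2, v2); None if the
    sparse point cloud leaves the box empty."""
    u, v, d = points_uvd[:, 0], points_uvd[:, 1], points_uvd[:, 2]
    u1, v1, u2, v2 = box
    inside = (u >= u1) & (u <= u2) & (v >= v1) & (v <= v2)
    hits = d[inside]
    return float(np.median(hits)) if hits.size else None
```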
Drawings
FIG. 1 is a flow chart of an object recognition method based on image processing for multi-sensor fusion in accordance with the present invention;
FIG. 2 is a schematic view of a multi-layer projection;
fig. 3 is a schematic flow chart of projecting the camera coordinates into laser point cloud coordinates in step S2;
fig. 4 is a schematic flowchart of the process of projecting the laser point cloud coordinates into annular panoramic coordinates in step S2.
Detailed Description
The following detailed description of preferred embodiments, taken in conjunction with the accompanying drawings, is intended to make the advantages and features of the invention easier for those skilled in the art to understand, and to define the scope of the invention more clearly.
Referring to fig. 1, an embodiment of the present invention includes:
an object identification method based on image processing and multi-sensor fusion, mainly applied to a substation safety-supervision inspection robot, which completes individual localization of on-site personnel through deep-learning target detection fused with a real-time laser point cloud. The method comprises the following steps:
s1: acquiring a three-channel RGB color image and a one-channel multi-line laser ranging image;
the three-channel RGB color image is obtained through an original image of a camera, the one-channel multi-line laser ranging image is obtained through obtaining laser point cloud information and then generating an independent image layer, and the laser point cloud information is a depth image of 16-line 360-degree laser and is directly read by a radar. The definition and data encoding of each image is:
(1) color image R: the red channel of the original image of the camera is 0-255 in gray;
(2) color image G: the green channel of the original image of the camera is 0-255 gray;
(3) color image B: the blue channel of the original image of the camera has the gray scale of 0-255;
(4) multi-line laser ranging image: this embodiment uses a 16-line laser ranging gray image, with gray levels 1-255 representing distance and pure black 0 carrying no meaning (no valid return).
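One way to realize encoding (4) in Python, assuming a linear scale and a 50 m maximum range (neither is fixed by the text; only the convention that 0 means no return and 1-255 encode distance is):

```python
import numpy as np

def encode_depth(dist_m, max_range_m=50.0):
    """Map metric distance onto gray levels 1-255; 0 (pure black) is
    reserved for pixels with no valid laser return."""
    d = np.asarray(dist_m, dtype=float)
    # linear scale into 1..255 (scale factor is an assumption)
    code = np.clip(np.round(d / max_range_m * 254.0) + 1, 1, 255)
    return np.where(np.isfinite(d) & (d > 0), code, 0).astype(np.uint8)
```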
S2: projecting the optical coordinate of a camera of the RGB color image into a laser point cloud coordinate, and projecting the laser point cloud coordinate into a 360-degree annular panoramic coordinate; with reference to fig. 2, in this step, by designing a conversion mode of the curved surface to the curved surface, the image of the curved surface is directly projected onto the laser coordinate of the curved surface, without conversion from a distorted image to an undistorted image, the real-time performance is higher, and all conversion formulas are divided into two steps:
(1) the projection from the camera optical coordinate of the RGB color image to the 16-line laser point cloud coordinate specifically comprises the following steps in combination with FIG. 3:
s201: create a 3D temporary map whose coordinates are laser coordinates; the map size is the width and height of a single camera's image after conversion into laser point cloud coordinates (i.e., a color image under laser coordinates);
s202: calculate the laser coordinate of the next map pixel. This converts the laser coordinate into camera lens coordinates — in effect calibrating the pixel (physical) position in the lens to the laser coordinate, with each laser-coordinate pixel "reading" the data at the corresponding lens position, which gives a more uniform display; then converts the lens coordinates into camera pixel coordinates; and finally reads the corresponding camera pixel into the map;
s203: judging whether the next pixel is the end pixel of the map, if not, repeating the step S202, and if so, performing the next step;
s204: and combining the eight camera maps, and splicing to generate a 360-degree panorama under the laser coordinate.
The conversion formula adopted in this step is:
The abscissa (longitude, in radians) of the 16-line laser image is longitude, and the ordinate (latitude, in radians) is latitude; the pixel height of the output image is high and the pixel width is width; the maximum view angle of the camera (in radians) is viewMax;
Let alpha be the rotation angle coordinate (radians) of the optical lens and beta the angle coordinate (radians) off the center point;
setting the abscissa of a pixel of an original image of a camera as imageX and the ordinate as imageY;
Solving for alpha:
First quadrant:
alpha = arctan(tan(latitude)/sin(longitude));
Second and third quadrants:
alpha = arctan(tan(latitude)/sin(longitude)) + pi, where pi = 3.1415926;
Fourth quadrant:
alpha = arctan(tan(latitude)/sin(longitude)) + 2*pi;
Special cases: alpha = 0.5*pi at 90 degrees; alpha = 1.5*pi at 270 degrees; alpha = 0 at the origin.
Solving for beta:
beta = arccos(cos(longitude)*cos(latitude));
Solving for imageX:
imageX = width/2 + cos(alpha)*sqrt((high/2)^2 + (width/2)^2)*beta/(viewMax/2);
Solving for imageY:
imageY = high/2 - sin(alpha)*sqrt((high/2)^2 + (width/2)^2)*beta/(viewMax/2);
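The four formulas above translate directly into Python; the three per-quadrant alpha branches collapse into a single atan2 call taken modulo 2*pi, which also covers the 90- and 270-degree special cases. This is a sketch of the stated equations, not the patent's full calibrated pipeline:

```python
import math

def laser_to_camera_pixel(longitude, latitude, width, high, viewMax):
    """Map a laser direction (longitude, latitude, radians) to the source
    pixel (imageX, imageY) in the camera's original (distorted) image."""
    # alpha: rotation angle of the lens; atan2 handles all quadrants at once
    alpha = math.atan2(math.tan(latitude), math.sin(longitude)) % (2 * math.pi)
    # beta: angular distance off the optical center
    beta = math.acos(math.cos(longitude) * math.cos(latitude))
    r = math.sqrt((high / 2) ** 2 + (width / 2) ** 2) * beta / (viewMax / 2)
    return width / 2 + math.cos(alpha) * r, high / 2 - math.sin(alpha) * r
```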
(2) the projection of the 16-line laser point cloud coordinate to the 360-degree annular panoramic coordinate specifically comprises the following steps in combination with fig. 4:
s211: creating a 1920 x 1080 laser dot matrix layer whose left and right edge angles span 0-360 degrees and whose upper and lower edge angles span -15 to +15 degrees, uniformly spread and stretched in both directions;
s212: reading a column of laser dot matrix storage area data;
s213: calculate the pixel angle of the printed image, as follows:
Known: the serial number No of the current laser point, the angle difference angleMax between the top line (line 1) and the bottom line (line 16), and the number of lines number. Let the latitude (angle) of the current laser point be latitude:
latitude = angleMax/2 - (No-1)*angleMax/(number-1);
The longitude (angle) longitude is read directly;
s214: calculate the pixel position and assign the corresponding data to the printed layer, as follows:
Known: the image origin (0,0) is at the upper-left corner and the laser origin (0,0) at the center; the laser latitude is latitude and the laser longitude is longitude; the angle difference between the upper and lower edges of the laser mapping is latitudeMax and between the left and right edges is longitudeMax; the maximum pixel height of the mapping is high and the maximum width is width;
setting the horizontal coordinate of a current dot printing pixel as x and the vertical coordinate as y;
Solving for x and y:
x=width*longitude/longitudeMax+width/2;
y=high/2-high*latitude/latitudeMax;
s215: and judging whether the currently read laser dot matrix storage area data is the end of the data, if not, repeating the steps S212-S214, and if so, ending the image generation.
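Steps S213-S214 follow directly from the latitude, x, and y formulas. Angles here are in degrees; the 16-line, -15 to +15 degree, 1920 x 1080 geometry comes from the text, while taking longitude as signed (-180 to +180 degrees) so the formula centers it is an assumption:

```python
def laser_point_to_pixel(No, longitude, angleMax=30.0, number=16,
                         width=1920, high=1080,
                         longitudeMax=360.0, latitudeMax=30.0):
    """Ring number No (1 = top line) plus signed longitude in degrees
    -> pixel (x, y) in the 1920 x 1080 panorama layer (S213-S214)."""
    latitude = angleMax / 2 - (No - 1) * angleMax / (number - 1)  # S213
    x = width * longitude / longitudeMax + width / 2              # S214
    y = high / 2 - high * latitude / latitudeMax
    return x, y
```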
S3: selecting a target detection frame aiming at a target trained in advance by utilizing an image recognition technology of deep learning to obtain a target detection boundary frame distribution image and an object class distribution image;
wherein the pre-trained targets comprise target personnel, work clothes, and safety helmets. Unlike the traditional laser radar target segmentation and tracking method, this method segments and tracks targets with deep learning and then projects the laser points into the deep-learning detection frames, obtaining not only positioning information but also object class information, which is highly practical. In most settings the laser point cloud is sparse while the image is dense, so point cloud acquisition is expensive and image acquisition relatively cheap; performing target segmentation and tracking in the image therefore greatly reduces the dependence on the laser point cloud and lowers cost.
The definition and data coding of the target detection bounding box distribution image and the object type distribution image are as follows:
target detection bounding box distribution image: 255 denotes boundary pixels, 0 denotes non-edge pixels;
object class distribution image: the bounding box fills in categories 1-255 and the bounding box outer category 0.
The method realizes the mapping fusion of six-channel images based on multiple sensors, increases the object class distribution images and the target detection boundary frame distribution images of two channels from target detection on the basis of the traditional RGBD four-channel images, and provides an accurate image processing basis for realizing the rapid and accurate target object positioning.
The above description is only an embodiment of the present invention and is not intended to limit its scope; any equivalent structural or process modification made using the contents of this specification and drawings, whether applied directly or indirectly in other related technical fields, likewise falls within the scope of the invention.
Claims (6)
1. An object identification method based on multi-sensor fusion of image processing comprises the following steps:
s1: acquiring a three-channel RGB color image and a one-channel multi-line laser ranging image;
s2: projecting the optical coordinate of a camera of the RGB color image into a laser point cloud coordinate, and projecting the laser point cloud coordinate into a 360-degree annular panoramic coordinate;
s3: and selecting a target detection frame aiming at a target trained in advance by utilizing an image recognition technology of deep learning to obtain a target detection boundary frame distribution image and an object class distribution image.
2. The object recognition method based on image processing and multi-sensor fusion of claim 1, wherein in step S1, the three-channel RGB color image is obtained from a camera original image, and the one-channel multi-line laser ranging image is obtained by generating an independent image layer after obtaining laser point cloud information.
3. The object recognition method based on image processing and multi-sensor fusion of claim 1, wherein the step of projecting the camera coordinates into laser point cloud coordinates in step S2 comprises:
s201: creating a 3D temporary map, wherein the map coordinates are laser coordinates, and the map size is the width and height of a single camera's image after conversion into laser point cloud coordinates;
s202: calculating the laser coordinate of the next pixel of the map;
s203: judging whether the next pixel is the end pixel of the map, if not, repeating the step S202, and if so, performing the next step;
s204: and combining the eight camera maps, and splicing to generate a 360-degree panorama under the laser coordinate.
4. The method for object recognition based on image processing and multi-sensor fusion of claim 3, wherein the specific calculation process of step S202 includes:
firstly, converting laser coordinates into camera lens coordinates, then converting the lens coordinates into camera pixel coordinates, and finally reading corresponding camera pixels into a map.
5. The object recognition method based on image processing and multi-sensor fusion of claim 1, wherein in step S2, the step of projecting the laser point cloud coordinates into annular panoramic coordinates comprises:
s211: creating a 1920 x 1080 laser dot matrix layer whose left and right edge angles span 0-360 degrees and whose upper and lower edge angles span -15 to +15 degrees, uniformly spread and stretched in both directions;
s212: reading a column of laser dot matrix storage area data;
s213: calculating the pixel angle of the printed image;
s214: calculating pixel positions, and assigning corresponding data to a printed layer;
s215: and judging whether the currently read laser dot matrix storage area data is the end of the data, if not, repeating the steps S212-S214, and if so, ending the image generation.
6. The object recognition method based on image processing and multi-sensor fusion of claim 1, wherein in step S3, the pre-trained targets include target personnel, work clothes, and safety helmets.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910837809.2A CN110781720B (en) | 2019-09-05 | 2019-09-05 | Object identification method based on image processing and multi-sensor fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110781720A true CN110781720A (en) | 2020-02-11 |
CN110781720B CN110781720B (en) | 2022-08-19 |
Family
ID=69384043
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910837809.2A Active CN110781720B (en) | 2019-09-05 | 2019-09-05 | Object identification method based on image processing and multi-sensor fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110781720B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106647758A (en) * | 2016-12-27 | 2017-05-10 | 深圳市盛世智能装备有限公司 | Target object detection method and device and automatic guiding vehicle following method |
CN107167811A (en) * | 2017-04-26 | 2017-09-15 | 西安交通大学 | The road drivable region detection method merged based on monocular vision with laser radar |
CN108509918A (en) * | 2018-04-03 | 2018-09-07 | 中国人民解放军国防科技大学 | Target detection and tracking method fusing laser point cloud and image |
CN109829386A (en) * | 2019-01-04 | 2019-05-31 | 清华大学 | Intelligent vehicle based on Multi-source Information Fusion can traffic areas detection method |
Also Published As
Publication number | Publication date |
---|---|
CN110781720B (en) | 2022-08-19 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||