CN113538487A - Virtual three-dimensional perimeter management and control algorithm based on multi-camera three-dimensional reconstruction


Info

Publication number
CN113538487A
Authority
CN
China
Prior art keywords
dimensional
point cloud
cloud data
bounding box
virtual
Prior art date
Legal status
Pending
Application number
CN202110795807.9A
Other languages
Chinese (zh)
Inventor
朱吕甫
朱兆亚
朱兆喆
Current Assignee
Anhui Jushi Technology Co., Ltd.
Original Assignee
Anhui Jushi Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Anhui Jushi Technology Co., Ltd.
Priority to CN202110795807.9A
Publication of CN113538487A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to perimeter management and control of a virtual three-dimensional model, and in particular to a virtual three-dimensional perimeter management and control algorithm based on multi-camera three-dimensional reconstruction. The algorithm performs two-dimensional detection on a two-dimensional image, acquires the position of an object in the image, and generates a two-dimensional bounding box; predicts a three-dimensional bounding box in the three-dimensional point cloud data; filters the three-dimensional point cloud data within the three-dimensional bounding box and identifies the physical center of the filtered data; and performs perimeter detection and identification on the virtual three-dimensional model based on the physical center and the three-dimensional bounding box. The technical scheme provided by the invention effectively overcomes the defect of the prior art, in which a virtual three-dimensional model cannot be accurately detected and identified.

Description

Virtual three-dimensional perimeter management and control algorithm based on multi-camera three-dimensional reconstruction
Technical Field
The invention relates to perimeter management and control of a virtual three-dimensional model, in particular to a virtual three-dimensional perimeter management and control algorithm based on multi-camera three-dimensional reconstruction.
Background
In recent years, with the development of deep learning and computer vision, a large number of two-dimensional object detection algorithms have been proposed and widely applied in various vision products. However, for applications such as autonomous driving, mobile robots, and virtual reality, two-dimensional detection falls far short of practical requirements. To provide more accurate target position and edge detection information, three-dimensional target detection has become an important research hotspot. Its purpose is to capture a target of interest in a real three-dimensional scene and give its absolute position, size, and orientation in the real-world coordinate system.
The problem has attracted increasing attention from researchers because radar provides reliable depth information that can be used to accurately locate an object and determine its shape. In general, according to the type of data relied upon, three-dimensional detection methods can be divided into those based on radar data and those based on image data. Methods based on radar point cloud data can accomplish three-dimensional target detection with high precision, but they have obvious drawbacks: they depend heavily on hardware, the equipment is expensive, and they lack portability, which severely limits their application scenarios. Methods based on image data benefit from convenient data acquisition and widely available sources, so three-dimensional target detection based on monocular cameras has a more promising application prospect.
With the development of computer technology, image processing has become increasingly common. Edge detection is a fundamental problem in image processing and computer vision, and it provides important basic information for other computer vision tasks such as semantic segmentation, instance segmentation, and object tracking. However, most current edge detection operates on two-dimensional images; few techniques address edge detection of virtual three-dimensional models, and the accuracy of those that do is low.
Disclosure of Invention
Technical problem to be solved
Aiming at the defects in the prior art, the invention provides a virtual three-dimensional perimeter management and control algorithm based on multi-camera three-dimensional reconstruction, which can effectively overcome the defect that the prior art cannot accurately detect and identify the edge of a virtual three-dimensional model.
(II) technical scheme
In order to achieve the purpose, the invention is realized by the following technical scheme:
a virtual three-dimensional perimeter management and control algorithm based on multi-camera three-dimensional reconstruction comprises the following steps:
s1, carrying out two-dimensional detection on the two-dimensional image, acquiring the position of the object in the two-dimensional image, and generating a two-dimensional boundary frame;
s2, predicting a three-dimensional bounding box in the three-dimensional point cloud data;
s3, filtering the three-dimensional point cloud data in the three-dimensional bounding box, and identifying the physical center of the filtered three-dimensional point cloud data;
S4, performing perimeter detection and identification on the virtual three-dimensional model based on the physical center and the three-dimensional bounding box.
Preferably, performing two-dimensional detection on the two-dimensional image in S1, acquiring the position of the object, and generating the two-dimensional bounding box includes:
and performing two-dimensional detection and depth estimation on the two-dimensional image by using a convolutional neural network, performing visual identification on the two-dimensional image, and converting the two-dimensional image into three-dimensional point cloud data according to depth information.
Preferably, the predicting of the three-dimensional bounding box in the three-dimensional point cloud data in S2 includes:
Obtaining the position information of the region of interest in each two-dimensional bounding box, scaling the image in the region of interest, extracting features with a convolutional neural network to obtain a feature map of the region of interest, predicting the three-dimensional bounding box from the feature map, and obtaining the three-dimensional point cloud data contained in the three-dimensional bounding box.
Preferably, obtaining the position information of the region of interest in each two-dimensional bounding box includes:
and calculating the depth data of each two-dimensional bounding box according to the depth information so as to obtain the position information of the interest region in the two-dimensional bounding box.
Preferably, the predicting of the three-dimensional bounding box in the three-dimensional point cloud data in S2 includes:
and mapping the three-dimensional point cloud data to a two-dimensional image, obtaining a three-dimensional boundary frame in the three-dimensional point cloud data according to the two-dimensional boundary frame, and obtaining the three-dimensional point cloud data contained in the three-dimensional boundary frame.
Preferably, mapping the three-dimensional point cloud data into the two-dimensional image includes:
and determining the relative position relationship between the two-dimensional image and the coordinate origin of the three-dimensional point cloud data, and mapping the three-dimensional point cloud data to corresponding pixels in the two-dimensional image one by one.
Preferably, mapping the three-dimensional point cloud data into the two-dimensional image includes:
and mapping the three-dimensional point cloud data to the two-dimensional images according to the position relationship between the three-dimensional point cloud data and the two-dimensional images, filtering the point cloud data which exceeds the visual range in each two-dimensional image, and combining the filtered mapped two-dimensional images.
Preferably, the filtering of the three-dimensional point cloud data in the three-dimensional bounding box in S3 includes:
and reducing the three-dimensional bounding box according to the set zoom value, and deleting the three-dimensional point cloud data outside the reduced three-dimensional bounding box.
Preferably, identifying the physical center of the filtered three-dimensional point cloud data in S3 includes:
and calculating an average value of three-dimensional coordinate values of the reserved three-dimensional point cloud data, and taking the average value of the three-dimensional coordinate values as a physical center of the three-dimensional point cloud data, namely the center position of an object.
(III) advantageous effects
Compared with the prior art, the virtual three-dimensional perimeter management and control algorithm based on multi-camera three-dimensional reconstruction provided by the invention can accurately predict the three-dimensional bounding box in the three-dimensional point cloud data. Filtering the point cloud data within the three-dimensional bounding box and identifying the physical center of the filtered data effectively reduces the amount of computation needed to determine the physical center, so that perimeter detection and identification of the virtual three-dimensional model can be performed quickly and accurately based on the physical center and the three-dimensional bounding box.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description show only some embodiments of the invention; a person skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
A virtual three-dimensional perimeter management and control algorithm based on multi-camera three-dimensional reconstruction is disclosed, as shown in FIG. 1, and comprises the following steps:
s1, carrying out two-dimensional detection on the two-dimensional image, acquiring the position of the object in the two-dimensional image, and generating a two-dimensional boundary frame;
s2, predicting a three-dimensional bounding box in the three-dimensional point cloud data;
s3, filtering the three-dimensional point cloud data in the three-dimensional bounding box, and identifying the physical center of the filtered three-dimensional point cloud data;
S4, performing perimeter detection and identification on the virtual three-dimensional model based on the physical center and the three-dimensional bounding box.
Performing two-dimensional detection on the two-dimensional image, acquiring the position of the object in the image, and generating a two-dimensional bounding box includes:
and performing two-dimensional detection and depth estimation on the two-dimensional image by using a convolutional neural network, performing visual identification on the two-dimensional image, and converting the two-dimensional image into three-dimensional point cloud data according to depth information.
In the technical scheme of the present application, the three-dimensional bounding box in the three-dimensional point cloud data can be predicted by either of two methods.
The first method: obtaining the position information of the region of interest in each two-dimensional bounding box, scaling the image in the region of interest, extracting features with a convolutional neural network to obtain a feature map of the region of interest, predicting the three-dimensional bounding box from the feature map, and obtaining the three-dimensional point cloud data contained in the three-dimensional bounding box.
Obtaining the position information of the region of interest in each two-dimensional bounding box includes:
and calculating the depth data of each two-dimensional bounding box according to the depth information so as to obtain the position information of the interest region in the two-dimensional bounding box.
After feature extraction with a convolutional neural network yields the feature map of the region of interest, the three-dimensional bounding box in the three-dimensional point cloud data is predicted by a PointNet network.
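The patent names PointNet but gives no architecture, so the following is only a hypothetical PointNet-style regression head; the (cx, cy, cz, w, h, l, yaw) box parameterisation and the layer sizes are assumptions, not taken from the patent.

    # Sketch only: PointNet-style head that regresses 3-D box parameters
    # from the points associated with one region of interest.
    import torch
    import torch.nn as nn

    class BoxRegressionHead(nn.Module):
        def __init__(self):
            super().__init__()
            # shared per-point MLP, as in PointNet
            self.mlp = nn.Sequential(
                nn.Conv1d(3, 64, 1), nn.ReLU(),
                nn.Conv1d(64, 128, 1), nn.ReLU(),
                nn.Conv1d(128, 256, 1), nn.ReLU(),
            )
            self.fc = nn.Sequential(
                nn.Linear(256, 128), nn.ReLU(),
                nn.Linear(128, 7),          # (cx, cy, cz, w, h, l, yaw)
            )

        def forward(self, points):          # points: (B, N, 3)
            f = self.mlp(points.transpose(1, 2))  # (B, 256, N)
            g = torch.max(f, dim=2).values        # symmetric pooling over points
            return self.fc(g)                     # (B, 7) box parameters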
The second method: mapping the three-dimensional point cloud data to a two-dimensional image, obtaining a three-dimensional bounding box in the three-dimensional point cloud data according to the two-dimensional bounding box, and obtaining the three-dimensional point cloud data contained in the three-dimensional bounding box.
The three-dimensional point cloud data is mapped into the two-dimensional images in one of two ways, depending on the number of two-dimensional images:
for a single two-dimensional image: determining the relative position relationship between the two-dimensional image and the coordinate origin of the three-dimensional point cloud data, and mapping the three-dimensional point cloud data to corresponding pixels in the two-dimensional image one by one;
for a set of two-dimensional images: and mapping the three-dimensional point cloud data to the two-dimensional images according to the position relationship between the three-dimensional point cloud data and the two-dimensional images, filtering the point cloud data which exceeds the visual range in each two-dimensional image, and combining the filtered mapped two-dimensional images.
Filtering the three-dimensional point cloud data in the three-dimensional bounding box includes:
Shrinking the three-dimensional bounding box according to a set scaling value, and deleting the three-dimensional point cloud data outside the shrunken three-dimensional bounding box.
Identifying the physical center of the filtered three-dimensional point cloud data includes:
Calculating the average of the three-dimensional coordinate values of the retained three-dimensional point cloud data, and taking this average as the physical center of the point cloud data, namely the center position of the object.
According to the technical scheme of the application, filtering the three-dimensional point cloud data within the three-dimensional bounding box and identifying the physical center of the filtered data effectively reduces the amount of computation needed to determine the physical center, and perimeter detection and identification of the virtual three-dimensional model can then be performed quickly and accurately based on the physical center and the three-dimensional bounding box.
The above examples are only intended to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (9)

1. A virtual three-dimensional perimeter management and control algorithm based on multi-camera three-dimensional reconstruction, characterized by comprising the following steps:
s1, carrying out two-dimensional detection on the two-dimensional image, acquiring the position of the object in the two-dimensional image, and generating a two-dimensional boundary frame;
s2, predicting a three-dimensional bounding box in the three-dimensional point cloud data;
s3, filtering the three-dimensional point cloud data in the three-dimensional bounding box, and identifying the physical center of the filtered three-dimensional point cloud data;
S4, performing perimeter detection and identification on the virtual three-dimensional model based on the physical center and the three-dimensional bounding box.
2. The virtual three-dimensional perimeter management and control algorithm based on multi-camera three-dimensional reconstruction as claimed in claim 1, characterized in that: in S1, performing two-dimensional detection on the two-dimensional image, acquiring a position of the object in the two-dimensional image, and generating a two-dimensional bounding box, including:
and performing two-dimensional detection and depth estimation on the two-dimensional image by using a convolutional neural network, performing visual identification on the two-dimensional image, and converting the two-dimensional image into three-dimensional point cloud data according to depth information.
3. The virtual three-dimensional perimeter management and control algorithm based on multi-camera three-dimensional reconstruction according to claim 2, characterized in that: predicting a three-dimensional bounding box in the three-dimensional point cloud data in S2, including:
Obtaining the position information of the region of interest in each two-dimensional bounding box, scaling the image in the region of interest, extracting features with a convolutional neural network to obtain a feature map of the region of interest, predicting the three-dimensional bounding box from the feature map, and obtaining the three-dimensional point cloud data contained in the three-dimensional bounding box.
4. The virtual three-dimensional perimeter management and control algorithm based on multi-camera three-dimensional reconstruction as claimed in claim 3, characterized in that: the obtaining of the position information of the region of interest in each two-dimensional bounding box includes:
and calculating the depth data of each two-dimensional bounding box according to the depth information so as to obtain the position information of the interest region in the two-dimensional bounding box.
5. The virtual three-dimensional perimeter management and control algorithm based on multi-camera three-dimensional reconstruction according to claim 2, characterized in that: predicting a three-dimensional bounding box in the three-dimensional point cloud data in S2, including:
and mapping the three-dimensional point cloud data to a two-dimensional image, obtaining a three-dimensional boundary frame in the three-dimensional point cloud data according to the two-dimensional boundary frame, and obtaining the three-dimensional point cloud data contained in the three-dimensional boundary frame.
6. The virtual three-dimensional perimeter management and control algorithm based on multi-camera three-dimensional reconstruction as claimed in claim 5, characterized in that: the mapping of the three-dimensional point cloud data into the two-dimensional image comprises:
and determining the relative position relationship between the two-dimensional image and the coordinate origin of the three-dimensional point cloud data, and mapping the three-dimensional point cloud data to corresponding pixels in the two-dimensional image one by one.
7. The virtual three-dimensional perimeter management and control algorithm based on multi-camera three-dimensional reconstruction as claimed in claim 5, characterized in that: the mapping of the three-dimensional point cloud data into the two-dimensional image comprises:
and mapping the three-dimensional point cloud data to the two-dimensional images according to the position relationship between the three-dimensional point cloud data and the two-dimensional images, filtering the point cloud data which exceeds the visual range in each two-dimensional image, and combining the filtered mapped two-dimensional images.
8. The virtual three-dimensional perimeter management and control algorithm based on multi-camera three-dimensional reconstruction according to claim 3 or 5, characterized in that: filtering the three-dimensional point cloud data in the three-dimensional bounding box in S3, including:
and reducing the three-dimensional bounding box according to the set zoom value, and deleting the three-dimensional point cloud data outside the reduced three-dimensional bounding box.
9. The virtual three-dimensional perimeter management and control algorithm based on multi-camera three-dimensional reconstruction as claimed in claim 8, characterized in that: identifying the physical center of the filtered three-dimensional point cloud data in S3 includes:
and calculating an average value of three-dimensional coordinate values of the reserved three-dimensional point cloud data, and taking the average value of the three-dimensional coordinate values as a physical center of the three-dimensional point cloud data, namely the center position of an object.
Application CN202110795807.9A, filed 2021-07-14: Virtual three-dimensional perimeter management and control algorithm based on multi-camera three-dimensional reconstruction (status: pending).

Priority Applications (1)

CN202110795807.9A, priority date 2021-07-14, filing date 2021-07-14: Virtual three-dimensional perimeter management and control algorithm based on multi-camera three-dimensional reconstruction


Publications (1)

Publication Number: CN113538487A; Publication Date: 2021-10-22

Family

ID=78127934

Family Applications (1)

CN202110795807.9A (pending), filed 2021-07-14: Virtual three-dimensional perimeter management and control algorithm based on multi-camera three-dimensional reconstruction

Country Status (1)

Country Link
CN (1) CN113538487A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110689008A (en) * 2019-09-17 2020-01-14 大连理工大学 Monocular image-oriented three-dimensional object detection method based on three-dimensional reconstruction
CN111598770A (en) * 2020-05-15 2020-08-28 弗徕威智能机器人科技(上海)有限公司 Object detection method and device based on three-dimensional data and two-dimensional image
CN112800524A (en) * 2021-02-05 2021-05-14 河北工业大学 Pavement disease three-dimensional reconstruction method based on deep learning
CN112989469A (en) * 2021-03-19 2021-06-18 深圳市智绘科技有限公司 Building roof model construction method and device, electronic equipment and storage medium


Similar Documents

Publication Publication Date Title
CN111563442B (en) Slam method and system for fusing point cloud and camera image data based on laser radar
CN111462200B (en) Cross-video pedestrian positioning and tracking method, system and equipment
CN109544677B (en) Indoor scene main structure reconstruction method and system based on depth image key frame
CN109598794B (en) Construction method of three-dimensional GIS dynamic model
US10719727B2 (en) Method and system for determining at least one property related to at least part of a real environment
CN111080693A (en) Robot autonomous classification grabbing method based on YOLOv3
CN107767400B (en) Remote sensing image sequence moving target detection method based on hierarchical significance analysis
CN109559330B (en) Visual tracking method and device for moving target, electronic equipment and storage medium
CN110599522B (en) Method for detecting and removing dynamic target in video sequence
CN110751097B (en) Semi-supervised three-dimensional point cloud gesture key point detection method
CN105160649A (en) Multi-target tracking method and system based on kernel function unsupervised clustering
CN112528974B (en) Distance measuring method and device, electronic equipment and readable storage medium
Sun et al. Fast motion object detection algorithm using complementary depth image on an RGB-D camera
Nair Camera-based object detection, identification and distance estimation
CN112200056A (en) Face living body detection method and device, electronic equipment and storage medium
CN106558069A (en) A kind of method for tracking target and system based under video monitoring
CN111325828A (en) Three-dimensional face acquisition method and device based on three-eye camera
CN115376109A (en) Obstacle detection method, obstacle detection device, and storage medium
CN112183431A (en) Real-time pedestrian number statistical method and device, camera and server
CN115222884A (en) Space object analysis and modeling optimization method based on artificial intelligence
CN113435367A (en) Social distance evaluation method and device and storage medium
CN117292076A (en) Dynamic three-dimensional reconstruction method and system for local operation scene of engineering machinery
CN117132649A (en) Ship video positioning method and device for artificial intelligent Beidou satellite navigation fusion
CN116862832A (en) Three-dimensional live-action model-based operator positioning method
CN116052120A (en) Excavator night object detection method based on image enhancement and multi-sensor fusion

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination