CN117152734A - Target-based image identification method, laser radar and system - Google Patents


Info

Publication number
CN117152734A
CN117152734A (application CN202311089331.2A)
Authority
CN
China
Prior art keywords
data
target
point cloud
acquiring
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311089331.2A
Other languages
Chinese (zh)
Inventor
李辉
陈鸿群
朱子耕
陈申元
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Onwing Information Technology Co Ltd
Original Assignee
Shanghai Onwing Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Onwing Information Technology Co Ltd filed Critical Shanghai Onwing Information Technology Co Ltd
Priority to CN202311089331.2A priority Critical patent/CN117152734A/en
Publication of CN117152734A publication Critical patent/CN117152734A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/894 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Electromagnetism (AREA)
  • Multimedia (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application discloses a target-based image identification method, a laser radar and a system. The image identification method comprises the following steps: acquiring three-dimensional scanning data of a region to be detected, wherein the three-dimensional scanning data comprise point cloud data of a target; searching for the target point cloud data in the three-dimensional scanning data; acquiring the position of the target point cloud data in the three-dimensional scanning data; and acquiring point cloud data of the mark line marked by the target by using the position of the target point cloud data in the three-dimensional scanning data. The application takes the on-site axes as a reference and associates the design drawing with the construction-site data, thereby fusing forward and reverse data and improving data consistency. In this way the repeated work traditionally done manually on site is replaced by an automated process, which improves data accuracy, reduces problems such as returning materials to the factory for rework caused by on-site manual calculation errors, and saves time and material loss.

Description

Target-based image identification method, laser radar and system
Technical Field
The application relates to a target-based image identification method, a laser radar and a system.
Background
With the rapid development of the real estate industry in recent years, the scale of both new-house decoration and renovation of existing housing stock has grown day by day, directly accelerating the development of the decoration industry. As market competition increases, people demand higher quality in decoration design and construction. However, because of dimensional discrepancies between the conceptual design and the construction site, the ideas in the conceptual design often cannot be realized well on site.
To compensate for this problem, the traditional decoration construction process introduces a deepened-design stage. After the conceptual design drawing is delivered, and after the designer has deepened his or her understanding of the key points and intent of the drawing, he or she returns to the site and, taking the axes left by the civil-engineering work as references, checks whether the on-site dimensions match the design dimensions in the drawing. If the deviation is small, the adjustment is calculated manually and carried out directly on the construction site according to established rules; if the deviation is large, the drawing must be sent back to the design institute for modification.
This process requires at least two workers cooperating on site with measuring tools such as a level and a tape measure, measuring and recording against the design drawing while calculating the actual positions after the maximum square-finding. When an unexpected situation arises on site, a deepening designer must additionally provide remote support. The whole deepened-design process is time-consuming and labor-intensive, with a heavy load of repeated work. If dimensions calculated on site during deepened design later turn out to be impossible to install, materials may even have to be returned to the factory.
Disclosure of Invention
The application aims to overcome the defects of the prior art, in which the whole deepened-design process is time-consuming, labor-intensive and heavily repetitive, and provides a target-based image recognition method, a laser radar and a system that replace the repeated manual on-site work in an automated way, improve data accuracy, reduce problems such as returning materials to the factory for rework caused by on-site manual calculation errors, and save time and material loss.
The application solves the technical problems by the following technical scheme:
a target-based image recognition method, the image recognition method comprising:
acquiring three-dimensional scanning data of a region to be detected, wherein the three-dimensional scanning data comprises point cloud data of a target;
searching target point cloud data in the three-dimensional scanning data;
acquiring the position of target point cloud data in three-dimensional scanning data;
and acquiring point cloud data of the marked line marked by the target by utilizing the position of the target point cloud data in the three-dimensional scanning data.
Preferably, the image recognition method includes:
marking on a marking line of the area to be measured by using a target, wherein the marking line comprises an elevation line and/or an axis in a house;
scanning the region to be detected by using a laser radar to acquire the three-dimensional scanning data;
and for a target data point, acquiring the position of the target data point in the region to be detected by utilizing the position relation between the target data point and the point cloud data of the mark line.
Preferably, the target is a spherical target or a hemispherical target, the center of the target is aligned to a marking line to realize marking in a region to be detected, and the image recognition method comprises:
searching target point cloud data in the three-dimensional scanning data;
acquiring target spherical surface data by utilizing target point cloud data;
acquiring target sphere center point cloud coordinates according to target sphere data;
acquiring point cloud data of the mark line marked by the target according to the point cloud coordinates of the center of the target sphere;
or, alternatively,
the target is a circular plane target, the circle center of the target is utilized to align with a mark line so as to realize marking in a region to be detected, and the image identification method comprises the following steps:
searching target point cloud data in the three-dimensional scanning data;
acquiring target circular plane data by utilizing target point cloud data;
acquiring the cloud coordinates of the circle center point of the target according to the circular plane data of the target;
and acquiring point cloud data of the marked line marked by the target according to the point cloud coordinates of the center point of the target.
Preferably, the marking line includes an elevation line and an axis in a house, and the image recognition method includes:
and marking on the mark line intersection point of the region to be detected by using the target.
For a target on an elevation line, acquiring the point cloud coordinates of the center of a target ball of the elevation line, and acquiring point cloud data of the elevation line according to the Z-axis coordinates of the point cloud coordinates of the center of the target ball of the elevation line;
and acquiring the point cloud coordinates of the center point of the axis target for the targets on the axis, and acquiring the point cloud data of the axis according to the abscissa and the ordinate of the point cloud coordinates of the center point of the axis target.
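The claim above reads the elevation line off the Z coordinates of the elevation-line targets and the axis off the plan coordinates of the axis targets. A minimal sketch of both steps follows (a hypothetical illustration in Python with NumPy; the function names and the mean/SVD fitting choices are assumptions, not the patent's prescribed implementation):

```python
import numpy as np

def elevation_line_z(centers):
    """Z level of the elevation (1-meter) line, taken as the mean of the
    sphere-center Z coordinates of the elevation-line targets."""
    return float(np.mean([c[2] for c in centers]))

def axis_through_targets(centers):
    """Plan-view axis through the axis targets: returns a point on the axis
    and a unit direction vector, from the (x, y) of the target centers."""
    xy = np.asarray(centers, dtype=float)[:, :2]
    centroid = xy.mean(axis=0)
    # The principal direction of the planar scatter is the axis direction.
    _, _, vt = np.linalg.svd(xy - centroid)
    return centroid, vt[0]
```

With two or more targets per line, both quantities are over-determined, so averaging (for the level) and a least-squares direction (for the axis) damp individual placement errors.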
The application also provides a maximum square-finding method implemented by means of the image recognition method described above, wherein the region to be detected is a room, and the maximum square-finding method comprises:
acquiring the maximum cube in the point cloud data of the room by using the point cloud data of the mark line;
or, alternatively,
acquiring the maximum square in the point cloud data of the room floor by using the point cloud data of the mark line.
preferably, the obtaining the maximum cube in the point cloud data of the room by using the point cloud data of the mark line includes:
for a wall surface of the room, acquiring the most salient point coordinates along the direction of a target axis perpendicular to the wall surface;
acquiring wall surface plane data perpendicular to the axis of the target according to the coordinates of the most salient points;
and acquiring a plurality of cuboid spaces according to the wall surface plane data of each wall surface, and selecting the cuboid space with the largest volume from all cuboid spaces as the maximum cuboid.
Preferably, the obtaining the maximum cube in the point cloud data of the room by using the point cloud data of the mark line includes:
projecting wall point cloud data of the room onto a horizontal plane;
for one wall surface data on the horizontal plane, acquiring the minimum distance between the wall surface data and the target axis as the most salient point coordinate;
obtaining wall surface linear data parallel to the target axis according to the most salient point coordinates;
obtaining a plurality of rectangles according to the wall surface linear data of each wall surface, and selecting the rectangle with the largest area from all rectangles;
and obtaining the maximum square body by using the rectangle with the largest area.
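The projection-based steps above can be sketched for a simple axis-aligned room with four walls (a hypothetical illustration; the wall layout, the function name, and the reduction of "a plurality of rectangles" to the single innermost rectangle are simplifying assumptions):

```python
import numpy as np

def largest_inner_rectangle(left, right, bottom, top):
    """Each argument is an (N, 2) array of floor-projected points of one
    wall of an axis-aligned room. The innermost ("most salient") point of
    each wall bounds the largest axis-aligned rectangle that stays clear of
    every bump in the walls. Returns ((xmin, ymin), (xmax, ymax))."""
    xmin = float(np.max(np.asarray(left, dtype=float)[:, 0]))
    xmax = float(np.min(np.asarray(right, dtype=float)[:, 0]))
    ymin = float(np.max(np.asarray(bottom, dtype=float)[:, 1]))
    ymax = float(np.min(np.asarray(top, dtype=float)[:, 1]))
    return (xmin, ymin), (xmax, ymax)
```

Extruding this rectangle from floor to ceiling then yields the corresponding cuboid described in the preceding alternative.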
Preferably, the maximum square-finding method comprises the following steps:
acquiring the maximum square body in the point cloud data of the room by utilizing the point cloud data of the mark line;
acquiring four vertex data points of the bottom surface of the maximum cube;
for a vertex data point, acquiring the position of the vertex data point in the region to be detected by utilizing the position relation between the vertex data point and the point cloud data of the mark line;
acquiring the actual position of the maximum cube in the region to be detected by using the position of the vertex data point in the region to be detected;
or alternatively, the first and second heat exchangers may be,
the maximizing and finding method comprises the following steps:
and acquiring the maximum square in the point cloud data of the ground of the room by using the point cloud data of the mark line.
Acquiring four vertex data points of a maximum square;
for a vertex data point, acquiring the position of the vertex data point in the region to be detected by utilizing the position relation between the vertex data point and the point cloud data of the mark line;
and acquiring the actual position of the maximum square in the region to be detected by using the position of the vertex data point in the region to be detected.
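A minimal sketch of the vertex set-out step above, assuming the two plan axes are parallel to the coordinate axes as in the embodiment (the helper name and signature are hypothetical):

```python
def setting_out_offsets(vertices, axis1_y, axis2_x):
    """For each vertex (x, y) of the maximum square, return its distances
    from axis 1 (parallel to x, at y = axis1_y) and from axis 2 (parallel
    to y, at x = axis2_x), so each vertex can be re-measured on site with
    a tape measure from the physical axes."""
    return [(abs(y - axis1_y), abs(x - axis2_x)) for x, y in vertices]
```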
The application also provides a laser radar, which comprises a processing module, wherein the processing module is used for implementing the image recognition method described above; and/or the processing module is used for implementing the maximum square-finding method described above.
The application also provides a processing terminal, which is used for implementing the image recognition method described above; and/or the processing terminal is used for implementing the maximum square-finding method described above.
The application also provides a laser radar system, which comprises a laser radar and a processing unit, wherein the processing unit is used for implementing the image recognition method described above; and/or the processing unit is used for implementing the maximum square-finding method described above.
On the basis of conforming to the common knowledge in the field, the above preferred conditions can be arbitrarily combined to obtain the preferred examples of the application.
The application has the positive progress effects that:
according to the target-based image identification method, the laser radar and the system provided by the application, the on-site axis is used as a reference, the design drawing and the construction site data are associated, the fusion of forward and reverse data is realized, and the data consistency is improved. Therefore, the automatic material recovery device replaces the repeated work of the traditional manual site in an automatic mode, improves the data accuracy, reduces the problems of material factory return processing and the like caused by calculation errors and the like of site manual work, and saves time and material loss.
Drawings
Fig. 1 is a schematic structural diagram of a model presented by three-dimensional scan data in embodiment 1 of the present application.
Fig. 2 is another schematic structural diagram of a model presented by three-dimensional scan data in embodiment 1 of the present application.
Fig. 3 is another schematic structural diagram of a model presented by three-dimensional scan data in embodiment 1 of the present application.
Fig. 4 is a flowchart of an image recognition method according to embodiment 1 of the present application.
Fig. 5 is another flowchart of the image recognition method according to embodiment 1 of the present application.
Fig. 6 is a flowchart of the maximum square-finding method according to embodiment 1 of the present application.
Detailed Description
The application is further illustrated by means of the following examples, which are not intended to limit the scope of the application.
Example 1
The embodiment provides a laser radar system, which comprises a laser radar and a processing unit.
The processing unit comprises an acquisition module, a searching module, a calculating module and a processing module:
the acquisition module is used for acquiring three-dimensional scanning data of the region to be detected, the three-dimensional scanning data comprise point cloud data of a target, the three-dimensional scanning data are acquired through laser radar scanning, and the laser radar acquires the three-dimensional scanning data and then transmits the three-dimensional scanning data to the processing unit.
The processing unit in this embodiment may be a desktop computer, a tablet computer or a notebook computer with computing capability, or may be a server with computing capability.
The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud-computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain-name services, security services, CDNs (Content Delivery Networks), big data and artificial-intelligence platforms.
The searching module is used for searching target point cloud data in the three-dimensional scanning data;
the computing module is used for acquiring the position of target point cloud data in the three-dimensional scanning data;
the processing module is used for acquiring point cloud data of a mark line marked by the target by utilizing the position of the target point cloud data in the three-dimensional scanning data.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein.
In the description and claims of the present application, the terms "comprises" and "comprising," along with any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of method steps or modules is not necessarily limited to those steps or modules that are expressly listed or inherent to such process, method, article, or apparatus, but may include other steps or modules that are not expressly listed.
Further, the laser radar system is configured to: the target is used for marking on a marking line of the area to be measured, wherein the marking line comprises an elevation line and an axis in a house, and the operation step can be completed manually.
And the laser radar is used for scanning the region to be detected to acquire the three-dimensional scanning data, and the laser radar acquires the three-dimensional scanning data and then transmits the three-dimensional scanning data to the processing unit.
For a target data point, the processing module is further configured to acquire a position of the target data point in the area to be measured by using a position relationship between the target data point and the point cloud data of the mark line.
The three-dimensional scanning data of the axis contour lines are made to correspond to positions in the (real) region to be detected, so that the real position of any image point in the three-dimensional scanning data can be located.
For example, for a target marker point in the three-dimensional scan data, by calculating the distances between the marker point and each axis, the corresponding actual position in the real room can easily be found from the actual axis positions.
Referring to fig. 1, in the three-dimensional scan data the distance from point b to the first axis 11 is 65 mm and the distance to the second axis 12 is 50 mm, so the actual position of point b can be obtained by measuring on site from the axes.
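The distance calculation in this example can be sketched with a generic point-to-line helper (hypothetical; the axis is specified by any two plan points lying on it):

```python
import math

def distance_to_axis(point, a, b):
    """Perpendicular plan-view distance from `point` to the axis through
    plan points `a` and `b` (all 2-D (x, y) tuples)."""
    (px, py), (ax_, ay), (bx, by) = point, a, b
    dx, dy = bx - ax_, by - ay
    # Magnitude of the 2-D cross product divided by the axis segment length.
    return abs(dx * (py - ay) - dy * (px - ax_)) / math.hypot(dx, dy)
```

For point b of the example, measuring 65 mm and 50 mm from the two axes in the cloud frame reproduces the offsets a worker would set out on site.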
The target is a spherical target or a hemispherical target, and the center of the target is aligned with a marking line to realize marking in a region to be detected.
The searching module is used for searching target point cloud data in the three-dimensional scanning data;
the calculation module is used for acquiring target spherical surface data by utilizing target point cloud data and acquiring target sphere center point cloud coordinates according to the target spherical surface data;
the processing module is used for acquiring point cloud data of the marked line marked by the target according to the point cloud coordinates of the center of the target.
The spherical center coordinates can be obtained rapidly through preset radius data, and can also be obtained through calculation after fitting the spherical surface.
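The fitting alternative mentioned above can be sketched as an algebraic least-squares sphere fit (a hypothetical illustration; the patent does not prescribe a particular fitting method). Each surface point (x, y, z) of a sphere with center (a, b, c) and radius r satisfies x^2 + y^2 + z^2 = 2ax + 2by + 2cz + (r^2 - a^2 - b^2 - c^2), which is linear in (a, b, c) and the constant term:

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit to an (N, 3) array of points on
    the target surface. Returns (center, radius)."""
    pts = np.asarray(points, dtype=float)
    # Linear system: [2x 2y 2z 1] @ [a b c k]^T = x^2 + y^2 + z^2,
    # where (a, b, c) is the center and k = r^2 - a^2 - b^2 - c^2.
    A = np.hstack([2.0 * pts, np.ones((len(pts), 1))])
    rhs = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    center = sol[:3]
    radius = float(np.sqrt(sol[3] + center @ center))
    return center, radius
```

When the target radius is known in advance, `radius` can instead be fixed and only the center solved for, which is the faster route mentioned above.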
The purpose of the target is to provide a tag so that the point cloud captured by the lidar scanner can be aligned with the actual scene.
One of the core approaches is to place targets on the axes and on the decoration 1-meter line (elevation line) so as to reconstruct the entire control network in the virtual model. These targets may take different forms; the principle is that they can be conveniently placed at the job site and identified by the lidar scanner.
Examples of some targets are illustrated below, but the actual placement of targets is not limited to the several ways listed below:
target example 1:1. the whole outline is round or cylindrical; 2. the center of the target is provided with a center origin, so that a user can conveniently align the intersection point of the 1 meter line and the axis of the decoration on the construction site.
Target example 2:1. a 3D spherical target that can be placed on the ground; 2. the spherical target has a base with its center positioned at the intersection of the axes.
Target example 3: spraying on the wall surface.
In particular, the marking line comprises an elevation line and an axis in the house.
The laser radar system is used for: and marking on the mark line intersection point of the region to be detected by using the target. This step may be achieved manually.
For targets on the elevation line, the processing unit is used for acquiring the point cloud coordinates of the spherical center of the targets on the elevation line and acquiring point cloud data of the elevation line according to the Z-axis coordinates of the point cloud coordinates of the spherical center of the targets on the elevation line;
for the targets on the axis, the processing unit is used for acquiring the point cloud coordinates of the centers of the axes and the targets, and acquiring the point cloud data of the axis according to the abscissa and the ordinate of the point cloud coordinates of the centers of the axes.
Referring to fig. 2, the locations of the axes and the decoration 1-meter lines are identified based on the target (indicated by a numbered circle) of the intersection of the axes 21 and the 1-meter lines 22 in the room.
At least two targets are placed at the positions mentioned above; by means of plane fitting, the axes and the decoration 1-meter line can then be identified from the point cloud. These locations are also very easy to find at the job site.
And establishing a space coordinate system for the acquired data on site, identifying the circle center positions of the targets No. 1, no. 2 and No. 3 and calculating the coordinates of the circle center positions.
1-meter line: determined from the z-axis coordinates of targets No. 1, No. 2 and No. 3.
Axis 1 (reference numeral 23): a line parallel to the x-axis (its y coordinate is the y coordinate of target No. 3).
Axis 2 (reference numeral 24): a line parallel to the y-axis (its x coordinate is the x coordinate of target No. 1 or No. 2).
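The worked example above can be condensed into a small helper (hypothetical function and coordinates; averaging the three z values for the 1-meter line is an assumption for illustration):

```python
def control_lines(t1, t2, t3):
    """Given the recognised center coordinates (x, y, z) of targets
    No. 1-3, return the z level of the decoration 1-meter line, the y
    coordinate of axis 1 (parallel to x) and the x coordinate of axis 2
    (parallel to y)."""
    one_meter_z = (t1[2] + t2[2] + t3[2]) / 3.0  # 1-meter line level
    axis1_y = t3[1]   # axis 1 passes through target No. 3
    axis2_x = t1[0]   # axis 2 passes through target No. 1 (or No. 2)
    return one_meter_z, axis1_y, axis2_x
```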
By using the attitude acquisition sensor of the laser radar, one axis can be acquired and a vertical plane passing through the axis can be acquired at the same time, so that other axes can be acquired.
The embodiment realizes forward and reverse fusion of data, which involves two main steps. Forward: targets are placed on the axes and the decoration 1-meter line, so that the positioning information is reflected in the point cloud;
reverse: more useful data are automatically calculated from the point cloud to aid construction on site, which allows any user or staff member to map the digital data onto the actual construction site.
Further, the area to be measured is a room, and the processing module is further configured to:
and acquiring the maximum cube in the point cloud data of the room by using the point cloud data of the mark line.
Wherein the surfaces of the maximum cube are perpendicular or parallel to the mark line;
that is, each face of the maximum cube is either perpendicular or parallel to the mark line.
Further, the processing module is further configured to:
for a wall surface of the room, acquiring the most salient point coordinates along the direction of a target axis perpendicular to the wall surface;
acquiring wall surface plane data perpendicular to the axis of the target according to the coordinates of the most salient points;
and acquiring a plurality of cuboid spaces according to the wall surface plane data of each wall surface, and selecting the cuboid space with the largest volume from all cuboid spaces as the maximum cuboid.
In other embodiments, the processing module is further configured to:
projecting wall point cloud data of the room onto a horizontal plane;
for one wall surface data on the horizontal plane, acquiring the minimum distance between the wall surface data and the target axis as the most salient point coordinate;
obtaining wall surface linear data parallel to the target axis according to the most salient point coordinates;
obtaining a plurality of rectangles according to the wall surface linear data of each wall surface, and selecting the rectangle with the largest area from all rectangles;
and obtaining the maximum square body by using the rectangle with the largest area.
The processing module is further configured to:
acquiring the maximum square body in the point cloud data of the room by utilizing the point cloud data of the mark line;
acquiring four vertex data points of the bottom surface of the maximum cube;
for a vertex data point, acquiring the position of the vertex data point in the region to be detected by utilizing the position relation between the vertex data point and the point cloud data of the mark line;
acquiring the actual position of the maximum cube in the region to be detected by using the position of the vertex data point in the region to be detected;
referring to fig. 3, after the axis is identified in the point cloud model, many application scenarios can be derived. Among these, the most widely used is the maximization of the finding. The goal is to construct a room profile that is aligned with the axis and of maximum area.
Based on the principle of maximum direction finding of the axis: along the direction of the axis, the closed space is formed based on the most salient point of the wall surface. As shown in the figures to be described below,
the first line 31 is the actual line of the wall surface; the second line 32 is the line after the maximum square finding is completed; the third line is an axis.
Referring to fig. 4, with the laser radar system, the embodiment further provides a target-based image recognition method, which includes:
step 100, acquiring three-dimensional scanning data of an area to be detected, wherein the three-dimensional scanning data comprises point cloud data of a target;
step 101, searching target point cloud data in three-dimensional scanning data;
Step 102, acquiring the position of the target point cloud data in the three-dimensional scanning data;
Step 103, acquiring point cloud data of a mark line marked by the target by utilizing the position of the target point cloud data in the three-dimensional scanning data.
Wherein, step 100 includes:
marking on a marking line of the area to be measured by using a target, wherein the marking line comprises an elevation line and/or an axis in a house;
scanning the region to be detected by using a laser radar to acquire the three-dimensional scanning data;
step 103 is followed by:
and for a target data point, acquiring the position of the target data point in the region to be detected by utilizing the position relation between the target data point and the point cloud data of the mark line.
Further, the target is a spherical target or a hemispherical target, and the marking in the area to be measured is realized by aligning the center of the target with the marking line, and step 102 specifically includes:
acquiring target spherical surface data by utilizing target point cloud data;
acquiring target sphere center point cloud coordinates according to target sphere data;
step 103 is specifically to obtain point cloud data of a marking line marked by the target according to the point cloud coordinates of the center of the target.
Further, the marking lines include an elevation line (one meter line) and an axis in the house. Step 100 is preceded by:
and marking on the mark line intersection point of the region to be detected by using the target.
Referring to fig. 5, step 103 specifically includes:
step 1031, acquiring the point cloud coordinates of the spherical center of the target on the elevation line for the target on the elevation line, and acquiring the point cloud data of the elevation line according to the Z-axis coordinates of the point cloud coordinates of the spherical center of the target on the elevation line;
step 1032, for the targets on the axis, acquiring the point cloud coordinates of the centers of the axes and the point cloud data of the axis according to the abscissa and ordinate of the point cloud coordinates of the centers of the axes.
Referring to fig. 6, using the image recognition method, the embodiment further provides a maximum square-finding method, which comprises:
step 200, acquiring three-dimensional scanning data of an area to be detected, wherein the three-dimensional scanning data comprises point cloud data of a target;
step 201, searching target point cloud data in three-dimensional scanning data;
step 202, acquiring the position of target point cloud data in three-dimensional scanning data;
and 203, acquiring point cloud data of a mark line marked by the target by utilizing the position of the target point cloud data in the three-dimensional scanning data.
And step 204, acquiring the maximum cuboid in the point cloud data of the room by using the point cloud data of the mark line, wherein each face of the maximum cuboid is perpendicular or parallel to the mark line.
The step 204 specifically includes:
for a wall surface of the room, acquiring the most salient point coordinates along the direction of a target axis perpendicular to the wall surface;
acquiring wall surface plane data perpendicular to the axis of the target according to the coordinates of the most salient points;
and acquiring a plurality of cuboid spaces according to the wall surface plane data of each wall surface, and selecting the cuboid space with the largest volume from all cuboid spaces as the maximum cuboid.
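An illustrative, simplified sketch of this step for an axis-aligned room: each wall's "most salient point" bounds the inscribed cuboid along the axis normal to that wall. The `walls` dictionary keys and the reduction to a single candidate cuboid are assumptions made for clarity, not the disclosed multi-candidate search:

```python
import numpy as np

def max_inscribed_cuboid(walls):
    """`walls` maps each of the six room faces ('x-', 'x+', 'y-', 'y+',
    'z-', 'z+') to its point cloud. For each wall, the most salient point
    is the innermost coordinate along the axis normal to that wall; the
    inscribed cuboid is bounded by these six planes."""
    lo = np.array([walls['x-'][:, 0].max(),   # low-X wall bulges inward up to its max X
                   walls['y-'][:, 1].max(),
                   walls['z-'][:, 2].max()])
    hi = np.array([walls['x+'][:, 0].min(),   # high-X wall bulges inward down to its min X
                   walls['y+'][:, 1].min(),
                   walls['z+'][:, 2].min()])
    return lo, hi, float(np.prod(hi - lo))
```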
In other embodiments step 204 comprises:
projecting wall point cloud data of the room onto a horizontal plane;
for one wall surface data on the horizontal plane, acquiring the minimum distance between the wall surface data and the target axis as the most salient point coordinate;
obtaining wall surface linear data parallel to the target axis according to the most salient point coordinates;
obtaining a plurality of rectangles according to the wall surface linear data of each wall surface, and selecting the rectangle with the largest area from all rectangles;
and obtaining the maximum cuboid by using the rectangle with the largest area.
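The projection-and-extrusion variant can be sketched as follows; both helper names are hypothetical, and the floor and ceiling elevations are assumed to be known (e.g. from the elevation-line data):

```python
import numpy as np

def project_to_plan(wall_points):
    """Drop the Z coordinate: project a wall point cloud onto the horizontal plane."""
    return np.asarray(wall_points, dtype=float)[:, :2]

def extrude_rectangle(rect, floor_z, ceil_z):
    """Turn the largest plan rectangle (x0, y0, x1, y1) into the maximum
    cuboid by extruding it between floor and ceiling elevations."""
    x0, y0, x1, y1 = rect
    lo = (x0, y0, floor_z)
    hi = (x1, y1, ceil_z)
    volume = (x1 - x0) * (y1 - y0) * (ceil_z - floor_z)
    return lo, hi, volume
```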
The maximized square-finding method further comprises the following steps:
acquiring the maximum cuboid in the point cloud data of the room by using the point cloud data of the mark line;
acquiring the four vertex data points of the bottom surface of the maximum cuboid;
for a vertex data point, acquiring the position of the vertex data point in the region to be detected by using the positional relation between the vertex data point and the point cloud data of the mark line;
and acquiring the actual position of the maximum cuboid in the region to be detected by using the positions of the vertex data points in the region to be detected.
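Relating a vertex back to the mark lines can be sketched as simple coordinate offsets from the axis intersection and the elevation line; treating the "positional relation" as plain offsets is an assumption for illustration:

```python
def vertex_site_position(vertex, axis_origin, elevation_z):
    """Express a cuboid vertex as offsets from the site mark lines:
    horizontal offsets from the axis intersection and height above
    the elevation (one-meter) line."""
    vx, vy, vz = vertex
    ax, ay = axis_origin
    return {'dx_from_axis': vx - ax,
            'dy_from_axis': vy - ay,
            'dz_above_elevation': vz - elevation_z}
```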
In this embodiment, the on-site axis is used as a reference to associate the design drawing with the construction-site data, so that forward and reverse data are fused and data consistency is improved. This automated approach replaces the repetitive manual work traditionally done on site, improves data accuracy, reduces problems such as returning materials to the factory caused by on-site manual calculation errors, and saves time and material.
Example 2
This embodiment is substantially the same as embodiment 1, except that:
the present embodiment provides a lidar comprising a processing module for implementing the functions of the processing unit as in embodiment 1.
The laser radar is used to implement the image recognition method and the maximized square-finding method.
In this embodiment, the on-site axis is used as a reference to associate the design drawing with the construction-site data, so that forward and reverse data are fused and data consistency is improved. This automated approach replaces the repetitive manual work traditionally done on site, improves data accuracy, reduces problems such as returning materials to the factory caused by on-site manual calculation errors, and saves time and material.
Example 3
This embodiment is substantially the same as embodiment 1, except that:
the present embodiment provides a processing terminal for realizing the functions of the processing unit as in embodiment 1.
The processing terminal is used to implement the image recognition method and the maximized square-finding method.
Example 4
This embodiment is substantially the same as embodiment 1, except that:
the target is a circular planar target, and its circle center is aligned with a mark line to mark the area to be measured; the processing module is further configured to:
searching target point cloud data in the three-dimensional scanning data;
acquiring target circular plane data by utilizing target point cloud data;
acquiring the cloud coordinates of the circle center point of the target according to the circular plane data of the target;
and acquiring point cloud data of the marked line marked by the target according to the point cloud coordinates of the center point of the target.
Correspondingly, the image recognition method comprises the following steps:
searching target point cloud data in the three-dimensional scanning data;
acquiring target circular plane data by utilizing target point cloud data;
acquiring the cloud coordinates of the circle center point of the target according to the circular plane data of the target;
and acquiring point cloud data of the marked line marked by the target according to the point cloud coordinates of the center point of the target.
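For the circular planar target, the circle-center step can be sketched with a Kasa least-squares circle fit, assuming the target points have already been transformed into 2D coordinates in the target's plane (finding that plane is a separate step not shown here); the function name is illustrative:

```python
import numpy as np

def fit_circle_center_2d(points_xy):
    """Kasa least-squares circle fit in the target's plane.

    Uses the linearized model  x^2 + y^2 = 2ax + 2by + d,
    with center (a, b) and r^2 = d + a^2 + b^2.
    """
    pts = np.asarray(points_xy, dtype=float)
    A = np.column_stack([2.0 * pts, np.ones(len(pts))])  # rows: [2x, 2y, 1]
    f = (pts ** 2).sum(axis=1)                           # x^2 + y^2
    (a, b, d), *_ = np.linalg.lstsq(A, f, rcond=None)
    return np.array([a, b]), np.sqrt(d + a * a + b * b)
```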
The processing module is further configured to:
acquiring the maximum rectangle in the point cloud data of the room floor by using the point cloud data of the mark line;
acquiring the four vertex data points of the maximum rectangle;
for a vertex data point, acquiring the position of the vertex data point in the region to be detected by using the positional relation between the vertex data point and the point cloud data of the mark line;
and acquiring the actual position of the maximum rectangle in the region to be detected by using the positions of the vertex data points in the region to be detected.
Correspondingly, the maximized square-finding method comprises the following steps:
acquiring the maximum rectangle in the point cloud data of the room floor by using the point cloud data of the mark line;
acquiring the four vertex data points of the maximum rectangle;
for a vertex data point, acquiring the position of the vertex data point in the region to be detected by using the positional relation between the vertex data point and the point cloud data of the mark line;
and acquiring the actual position of the maximum rectangle in the region to be detected by using the positions of the vertex data points in the region to be detected.
In this embodiment, the on-site axis is used as a reference to associate the design drawing with the construction-site data, so that forward and reverse data are fused and data consistency is improved. This automated approach replaces the repetitive manual work traditionally done on site, improves data accuracy, reduces problems such as returning materials to the factory caused by on-site manual calculation errors, and saves time and material.
While specific embodiments of the application have been described above, it will be appreciated by those skilled in the art that these are by way of example only, and the scope of the application is defined by the appended claims. Various changes and modifications to these embodiments may be made by those skilled in the art without departing from the principles and spirit of the application, but such changes and modifications fall within the scope of the application.

Claims (11)

1. A target-based image recognition method, the image recognition method comprising:
acquiring three-dimensional scanning data of a region to be detected, wherein the three-dimensional scanning data comprises point cloud data of a target;
searching target point cloud data in the three-dimensional scanning data;
acquiring the position of target point cloud data in three-dimensional scanning data;
and acquiring point cloud data of the marked line marked by the target by utilizing the position of the target point cloud data in the three-dimensional scanning data.
2. The image recognition method according to claim 1, wherein the image recognition method comprises:
marking on a marking line of the area to be measured by using a target, wherein the marking line comprises an elevation line and/or an axis in a house;
scanning the region to be detected by using a laser radar to acquire the three-dimensional scanning data;
and for a target data point, acquiring the position of the target data point in the region to be detected by utilizing the position relation between the target data point and the point cloud data of the mark line.
3. The image recognition method according to claim 2, wherein the target is a spherical target or a hemispherical target, and marking in the area to be measured is achieved by aligning the center of the target with a mark line, the image recognition method comprising:
searching target point cloud data in the three-dimensional scanning data;
acquiring target spherical surface data by utilizing target point cloud data;
acquiring target sphere center point cloud coordinates according to target sphere data;
acquiring point cloud data of a mark line marked by a target according to the point cloud coordinates of the center of the target;
or alternatively,
the target is a circular planar target, and its circle center is aligned with a mark line to mark the area to be measured, the image recognition method comprising:
searching target point cloud data in the three-dimensional scanning data;
acquiring target circular plane data by utilizing target point cloud data;
acquiring the cloud coordinates of the circle center point of the target according to the circular plane data of the target;
and acquiring point cloud data of the marked line marked by the target according to the point cloud coordinates of the center point of the target.
4. The image recognition method of claim 3, wherein the marking line includes an elevation line and an axis in the house, the image recognition method comprising:
and marking on the mark line intersection point of the region to be detected by using the target.
for a target on the elevation line, acquiring the point cloud coordinates of its sphere center, and acquiring the point cloud data of the elevation line according to the Z-axis coordinate of those sphere-center coordinates;
and for a target on the axis, acquiring the point cloud coordinates of its sphere center, and acquiring the point cloud data of the axis according to the abscissa and ordinate of those sphere-center coordinates.
5. A maximized square-finding method using the image recognition method according to any one of claims 1 to 4, wherein the area to be measured is a room, the maximized square-finding method comprising:
acquiring the maximum cuboid in the point cloud data of the room by using the point cloud data of the mark line;
or alternatively,
acquiring the maximum rectangle in the point cloud data of the room floor by using the point cloud data of the mark line.
6. The maximized square-finding method of claim 5, wherein acquiring the maximum cuboid in the point cloud data of the room using the point cloud data of the mark line comprises:
for a wall surface of the room, acquiring the most salient point coordinates along the direction of a target axis perpendicular to the wall surface;
acquiring wall surface plane data perpendicular to the axis of the target according to the coordinates of the most salient points;
and acquiring a plurality of cuboid spaces according to the wall surface plane data of each wall surface, and selecting the cuboid space with the largest volume from all cuboid spaces as the maximum cuboid.
7. The maximized square-finding method of claim 5, wherein acquiring the maximum cuboid in the point cloud data of the room using the point cloud data of the mark line comprises:
projecting wall point cloud data of the room onto a horizontal plane;
for one wall surface data on the horizontal plane, acquiring the minimum distance between the wall surface data and the target axis as the most salient point coordinate;
obtaining wall surface linear data parallel to the target axis according to the most salient point coordinates;
obtaining a plurality of rectangles according to the wall surface linear data of each wall surface, and selecting the rectangle with the largest area from all rectangles;
and obtaining the maximum cuboid by using the rectangle with the largest area.
8. The maximized square-finding method of claim 5, comprising:
acquiring the maximum cuboid in the point cloud data of the room by using the point cloud data of the mark line;
acquiring the four vertex data points of the bottom surface of the maximum cuboid;
for a vertex data point, acquiring the position of the vertex data point in the region to be detected by using the positional relation between the vertex data point and the point cloud data of the mark line;
acquiring the actual position of the maximum cuboid in the region to be detected by using the positions of the vertex data points in the region to be detected;
or alternatively,
the maximized square-finding method comprises:
acquiring the maximum rectangle in the point cloud data of the room floor by using the point cloud data of the mark line;
acquiring the four vertex data points of the maximum rectangle;
for a vertex data point, acquiring the position of the vertex data point in the region to be detected by using the positional relation between the vertex data point and the point cloud data of the mark line;
and acquiring the actual position of the maximum rectangle in the region to be detected by using the positions of the vertex data points in the region to be detected.
9. A lidar, comprising a processing module configured to implement the image recognition method according to any one of claims 1 to 4; and/or the processing module is configured to implement the maximized square-finding method according to any one of claims 5 to 8.
10. A processing terminal, wherein the processing terminal is configured to implement the image recognition method according to any one of claims 1 to 4; and/or the processing terminal is configured to implement the maximized square-finding method according to any one of claims 5 to 8.
11. A lidar system, comprising a lidar and a processing unit, wherein the processing unit is configured to implement the image recognition method according to any one of claims 1 to 4; and/or the processing unit is configured to implement the maximized square-finding method according to any one of claims 5 to 8.
CN202311089331.2A 2023-08-27 2023-08-27 Target-based image identification method, laser radar and system Pending CN117152734A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311089331.2A CN117152734A (en) 2023-08-27 2023-08-27 Target-based image identification method, laser radar and system


Publications (1)

Publication Number Publication Date
CN117152734A true CN117152734A (en) 2023-12-01

Family

ID=88883612

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311089331.2A Pending CN117152734A (en) 2023-08-27 2023-08-27 Target-based image identification method, laser radar and system

Country Status (1)

Country Link
CN (1) CN117152734A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117648750A (en) * 2024-01-25 2024-03-05 上海盎维信息技术有限公司 Automatic regulation method for space decoration finished surface size based on measured data

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007323223A (en) * 2006-05-31 2007-12-13 Toyota Motor Corp Generation device, generation method and generation program of maximum solid model which can be taken in prescribed space
US9053566B1 (en) * 2012-03-29 2015-06-09 Arthur Technologies Llc Real estate blueprint and panoramic video visualization
CN108256417A (en) * 2017-12-01 2018-07-06 西安电子科技大学 Architecture against regulations recognition methods based on outdoor scene Point Cloud Processing
CN113687365A (en) * 2021-06-30 2021-11-23 云南昆钢电子信息科技有限公司 Multi-height layer contour recognition and coordinate calculation method and system based on similar plane
CN113702985A (en) * 2021-06-28 2021-11-26 盎锐(上海)信息科技有限公司 Measuring method for actual measurement and laser radar
CN114627250A (en) * 2022-05-13 2022-06-14 武汉纺织大学 Human body standing posture three-dimensional reconstruction and measurement method based on Kinect


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HUAN, LINXI et al.: "GeoRec: Geometry-enhanced semantic 3D reconstruction of RGB-D indoor scenes", ISPRS Journal of Photogrammetry and Remote Sensing, vol. 186, 25 April 2022 *
NING, Xiaojuan et al.: "Coarse-to-fine indoor scene layout partition and structure reconstruction", Laser & Optoelectronics Progress, vol. 58, no. 22, 30 November 2021 *


Similar Documents

Publication Publication Date Title
CN111473739B (en) Video monitoring-based surrounding rock deformation real-time monitoring method for tunnel collapse area
US7605756B2 (en) System a method and an apparatus for performing wireless measurements, positioning and surface mapping by means of a portable coordinate system
CN108363386A (en) Position Method for Indoor Robot, apparatus and system based on Quick Response Code and laser
CN115560690B (en) Structure integral deformation analysis method based on three-dimensional laser scanning technology
US10997785B2 (en) System and method for collecting geospatial object data with mediated reality
CN117152734A (en) Target-based image identification method, laser radar and system
Mulahusić et al. Comparison and analysis of results of 3D modelling of complex cultural and historical objects using different types of terrestrial laser scanner
CN104766365A (en) Three-dimensional visualization method for engineering structure disease information
Bassier et al. Standalone terrestrial laser scanning for efficiently capturing AEC buildings for as-built BIM
CN111707235A (en) Ground object measuring method based on three-dimensional laser scanning technology
Balado et al. Automatic detection of surface damage in round brick chimneys by finite plane modelling from terrestrial laser scanning point clouds. Case Study of Bragança Dukes’ Palace, Guimarães, Portugal
CN103900535B (en) Towards camera 4 method for relocating that historical relic subtle change detects
CN109035343A (en) A kind of floor relative displacement measurement method based on monitoring camera
Jiang et al. Determination of construction site elevations using drone technology
CN113989447A (en) Three-dimensional model indoor and outdoor integrated construction method and system
CN109472869B (en) Settlement prediction method and system
Kochi et al. Development of 3D image measurement system and stereo‐matching method, and its archaeological measurement
CN113466791B (en) Laser mapping and positioning equipment and method
Baghani et al. Automatic hierarchical registration of aerial and terrestrial image-based point clouds
US11113528B2 (en) System and method for validating geospatial data collection with mediated reality
Deng et al. BIM-based indoor positioning technology using a monocular camera
CN112365369A (en) Method for automatically monitoring construction progress based on machine vision
CN117648750B (en) Automatic regulation method for space decoration finished surface size based on measured data
Wang et al. Example analysis of digital wireless mapping applied to construction engineering measurement
Kyseľ et al. Cadastral Survey of a Fishpond Using UAV Photogrammetry

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination