CN117876555B - Efficient rendering method of three-dimensional model data based on POI retrieval - Google Patents

Efficient rendering method of three-dimensional model data based on POI retrieval

Info

Publication number
CN117876555B
Authority
CN
China
Prior art keywords
determined
pixel points
dimensional
dividing lines
dividing
Prior art date
Legal status
Active
Application number
CN202410278302.9A
Other languages
Chinese (zh)
Other versions
CN117876555A
Inventor
姚胜
马小云
张晓静
杨开泰
Current Assignee
Xi'an Urban Development Resources Information Co ltd
Original Assignee
Xi'an Urban Development Resources Information Co ltd
Priority date
Filing date
Publication date
Application filed by Xi'an Urban Development Resources Information Co ltd
Priority to CN202410278302.9A
Publication of CN117876555A
Application granted
Publication of CN117876555B
Legal status: Active

Landscapes

  • Image Generation (AREA)

Abstract

The invention belongs to the technical field of image processing and specifically relates to a method for efficiently rendering three-dimensional model data based on POI retrieval, comprising: acquiring all three-dimensional pixel points on a three-dimensional model of a planning and design drawing; determining boundary pixel points according to the probability that each three-dimensional pixel point belongs to a region boundary; performing straight-line detection on all boundary pixel points to obtain dividing lines; dividing all three-dimensional pixel points into a number of three-dimensional blocks by the dividing lines; and storing, as preloaded rendering information, the spatial position information of the three-dimensional pixel points corresponding to the two endpoints of every dividing line together with the representative color feature of every three-dimensional block. When the planning and design drawing is displayed, the three-dimensional model is first subjected to primary rendering according to the stored preloaded rendering information, and detail rendering is then performed through the stored complete rendering information. The invention improves the efficiency of the three-dimensional model rendering method, so that the rendering effect of the three-dimensional model is presented more smoothly when the user uses it.

Description

Efficient rendering method of three-dimensional model data based on POI retrieval
Technical Field
The invention relates to the technical field of image processing, and more particularly to a method for efficiently rendering three-dimensional model data based on POI retrieval.
Background
A three-dimensional model of a planning and design drawing constructed with computer technology reproduces actual scenes and details with high fidelity and is therefore widely used in many fields.
Conventionally, color rendering is performed on a three-dimensional model according to stored color information of each three-dimensional pixel point, and for a complex three-dimensional model, the number of three-dimensional pixel points to be rendered may be very large, resulting in a slow rendering speed.
In order to present the rendering effect of the three-dimensional model smoothly while the user is working with it and thus provide a better user experience, the efficiency of the method for rendering the three-dimensional model needs to be improved.
Disclosure of Invention
To solve one or more of the above-described technical problems, the present invention provides aspects as follows.
A method for efficiently rendering three-dimensional model data based on POI (Point of Interest) retrieval comprises the following steps:
acquiring all three-dimensional pixel points on a three-dimensional model of a planning design drawing, wherein the three-dimensional pixel points have space position information and color rendering information;
determining a representative region of each three-dimensional pixel point by a region growing method according to the color rendering information and the spatial position information of each three-dimensional pixel point;
Determining the probability that each three-dimensional pixel belongs to the region boundary according to the overlapping condition between the edge pixel points of the representative region of each three-dimensional pixel;
Taking three-dimensional pixel points with probability of belonging to the regional boundary larger than a preset first threshold value as boundary pixel points; performing straight line detection on all boundary pixel points to obtain a plurality of line segments; determining the preference of each line segment as a dividing line according to the probability that all three-dimensional pixel points on each line segment belong to the region boundary;
Taking a line segment with the preference larger than a preset second threshold value as a dividing line to be determined; obtaining a plurality of dividing lines from all dividing lines to be determined according to the similarity between every two dividing lines to be determined;
Dividing all three-dimensional pixel points into a plurality of three-dimensional blocks through dividing lines, taking the spatial position information of the three-dimensional pixel points corresponding to two endpoints of all dividing lines and the representative color characteristics of all three-dimensional blocks as preloaded rendering information, storing the preloaded rendering information, and taking the color rendering information of all three-dimensional pixel points on a three-dimensional model of a planning design drawing as complete rendering information, and storing the complete rendering information; when the planning and design diagram is displayed, the three-dimensional model is subjected to primary rendering according to the stored preloaded rendering information, and detail rendering is performed through the stored complete rendering information.
In one embodiment, the determining the representative region of each voxel by the region growing method includes:
Combining the three-dimensional pixel points in the neighborhood of the seed point, which accords with the combination condition, into a growth area represented by the seed point, and continuing to combine the three-dimensional pixel points as new seed points until the new three-dimensional pixel points which accord with the combination condition do not exist, so as to obtain the growth area represented by the seed point, and taking the growth area as a representative area of the three-dimensional pixel points;
the size of the neighborhood is 3×3×3;
the merging condition is that the color rendering information of the three-dimensional pixel point lies within the MacAdam ellipse corresponding to the color rendering information of the seed point.
In one embodiment, the probability that each voxel belongs to a region boundary satisfies the relationship:
$P_k = \dfrac{n_k}{\max_i(n_i)}$
where $P_k$ represents the probability that the k-th three-dimensional pixel point belongs to a region boundary; k represents the serial number of the three-dimensional pixel point and takes all integer values in [1, N]; N represents the number of all three-dimensional pixel points; $n_k$ represents the number of three-dimensional pixel points, among all three-dimensional pixel points, whose representative region's edge pixel points include the k-th three-dimensional pixel point; $n_i$ is the corresponding number for the i-th three-dimensional pixel point; i represents the serial number of a three-dimensional pixel point; and max() represents the maximum function.
In one embodiment, the performing straight line detection on all boundary pixel points to obtain a plurality of line segments includes:
Performing line detection on all boundary pixel points through a Hough transformation line detection algorithm to obtain a plurality of lines;
Dividing each straight line into a plurality of adjacent line segments according to boundary pixel points and non-boundary pixel points on the straight line, wherein two endpoints of the line segments are required to be boundary pixel points, the number of continuous non-boundary pixel points on the line segments is smaller than a number threshold value, meanwhile, no boundary pixel points exist between the two adjacent line segments, and the number of non-boundary pixel points between the two adjacent line segments is not smaller than the number threshold value;
The method for acquiring the non-boundary pixel point between the two adjacent line segments comprises the following steps: two endpoints of a first line segment in two adjacent line segments are respectively marked as an endpoint R1 and an endpoint R2, two endpoints of a second line segment are respectively marked as an endpoint W1 and an endpoint W2, the distance between the endpoint R1 and the endpoint W1, the distance between the endpoint R1 and the endpoint W2, the distance between the endpoint R2 and the endpoint W1 and the distance between the endpoint R2 and the endpoint W2 are respectively calculated, and all non-boundary pixel points between two endpoints corresponding to the minimum value in the four distances are used as non-boundary pixel points between the two adjacent line segments;
The non-boundary pixel points refer to three-dimensional pixel points with probability of belonging to the region boundary smaller than or equal to a preset first threshold value.
In one embodiment, the preference of each line segment as a parting line satisfies the expression:
$Y = \sum_{j=1}^{M} P_j$
where Y represents the preference of the line segment as a dividing line; $P_j$ represents the probability that the j-th boundary pixel point constituting the line segment belongs to a region boundary; j represents the serial number of the boundary pixel point; and M represents the number of boundary pixel points constituting the line segment.
In one embodiment, the similarity between the two parting lines to be determined satisfies the expression:
$D = \exp\!\left(-\left(\left|\Delta\alpha\right| + \left|\Delta\beta\right| + \left|\Delta\rho\right| + \dfrac{\left|\Delta x\right|}{L} + \dfrac{\left|\Delta y\right|}{W} + \dfrac{\left|\Delta z\right|}{H}\right)\right)$
where D represents the similarity between the two dividing lines to be determined; $\Delta\alpha$, $\Delta\beta$ and $\Delta\rho$ represent the differences in horizontal angle, vertical angle and distance of the two dividing lines to be determined in the polar coordinate system; $\Delta x$, $\Delta y$ and $\Delta z$ represent the differences in the abscissa, ordinate and vertical coordinate of the midpoints of the two dividing lines to be determined; exp() represents an exponential function with the natural constant as its base; and L, W and H represent, respectively, the length, width and height of the three-dimensional model of the planning and design drawing.
In one embodiment, the obtaining a plurality of dividing lines from all dividing lines to be determined according to the similarity between every two dividing lines to be determined includes:
For any one of all the dividing lines to be determined, the number of the dividing lines to be determined, of which the similarity with the dividing line to be determined is larger than a preset third threshold value, is used as the comprehensive similarity of the dividing lines to be determined; removing the to-be-determined dividing lines with the largest comprehensive similarity in the two to-be-determined dividing lines with the largest similarity in all the to-be-determined dividing lines, wherein the number of all the to-be-determined dividing lines is A;
For any one of the remaining A-1 dividing lines to be determined, taking the number of the dividing lines to be determined, of which the similarity with the dividing line to be determined is larger than a preset third threshold, as the comprehensive similarity of the dividing lines to be determined; removing the to-be-determined dividing line with the largest comprehensive similarity from the two to-be-determined dividing lines with the largest similarity in the remaining A-1 to-be-determined dividing lines, wherein A represents the number of all to-be-determined dividing lines;
for any one of the remaining A-2 dividing lines to be determined, the number of the dividing lines to be determined, of which the similarity with the dividing line to be determined is larger than a preset third threshold, is used as the comprehensive similarity of the dividing lines to be determined; removing the to-be-determined dividing line with the largest comprehensive similarity from the two to-be-determined dividing lines with the largest similarity in the remaining A-2 to-be-determined dividing lines, wherein A represents the number of all to-be-determined dividing lines;
And so on; the iteration stops when the maximum value of the similarity between every two of the remaining A-a dividing lines to be determined is smaller than a preset fourth threshold, and the remaining A-a dividing lines to be determined are taken as the dividing lines, wherein A represents the number of all dividing lines to be determined and a represents the number of dividing lines to be determined that have been removed.
In one embodiment, the representative color feature of each three-dimensional block refers to a mean value of color rendering information of all three-dimensional pixel points in each three-dimensional block.
The invention has the beneficial effects that: dividing all three-dimensional pixel points into a plurality of three-dimensional blocks through dividing lines, taking space position information of the three-dimensional pixel points corresponding to two end points of all dividing lines and representative color features of all three-dimensional blocks as preloading rendering information, storing the preloading rendering information, and performing primary rendering on a three-dimensional model according to the stored preloading rendering information when a planning design diagram is displayed; the method has the advantages that a large number of three-dimensional pixel points needing to be rendered in a complex three-dimensional model can be converted into a small number of three-dimensional blocks, the efficiency of the rendering method of the three-dimensional model is improved, the rendering effect of the three-dimensional model can be more smoothly presented when a user uses the method, and better user experience is provided.
Furthermore, the invention takes the color rendering information of all three-dimensional pixel points on the three-dimensional model of the planning and design drawing as complete rendering information and stores the complete rendering information, when the user searches the interest points, detail rendering is carried out through the stored complete rendering information, the rendering effect of the three-dimensional model can be more smoothly presented when the user uses the three-dimensional model, and the experience of the user is improved.
Further, according to the overlapping condition among edge pixel points of a representative area of each three-dimensional pixel point, the probability that each three-dimensional pixel point belongs to an area boundary is determined, straight line detection is carried out on all boundary pixel points, the preference degree of each line segment as a dividing line is determined according to the obtained probability that all three-dimensional pixel points on each line segment belong to the area boundary, a line segment with the preference degree being larger than a preset second threshold value is used as a dividing line to be determined, a plurality of dividing lines are obtained from all dividing lines to be determined according to the similarity between every two dividing lines to be determined, and all three-dimensional pixel points are divided into a plurality of three-dimensional blocks through the dividing lines; the dividing line is determined based on the similar color distribution conditions of other three-dimensional pixel points around each three-dimensional pixel point, and the three-dimensional pixel points with similar colors can be divided into a three-dimensional block, so that the dividing result of the three-dimensional block is more in line with the actual condition of the planning design drawing, and better user experience is provided.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. In the drawings, embodiments of the invention are illustrated by way of example and not by way of limitation, and like reference numerals refer to similar or corresponding parts and in which:
FIG. 1 schematically illustrates a flow chart of a method for efficiently rendering three-dimensional model data based on POI retrieval in the present invention;
fig. 2 schematically shows a neighborhood diagram in the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.
The embodiment of the invention discloses a three-dimensional model data efficient rendering method based on POI retrieval, which comprises the following steps of S1-S6 with reference to FIG. 1:
S1, acquiring all three-dimensional pixel points on a three-dimensional model of the planning design drawing.
Specifically, the three-dimensional model of the planning and design drawing is provided with a plurality of three-dimensional pixel points, and each three-dimensional pixel point is provided with space position information and color rendering information.
The spatial position information refers to the coordinates of the three-dimensional pixel point, i.e. its coordinates on the x-axis, y-axis and z-axis of a Cartesian coordinate system.
The color rendering information corresponds to a color point in the CIE chromaticity diagram and is used for rendering the three-dimensional pixel point.
It should be noted that, the CIE chromaticity diagram is a color representation, which is a known technology and will not be described herein.
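As an illustration only (the patent does not prescribe any data layout), the per-point information described above can be held in a small record; the sketch below is a minimal one, assuming CIE 1931 xy chromaticity coordinates for the color rendering information:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Voxel:
    """One three-dimensional pixel point: Cartesian position plus a CIE colour point."""
    position: Tuple[int, int, int]    # coordinates on the x-, y- and z-axes
    color_xy: Tuple[float, float]     # colour point in the CIE chromaticity diagram (assumed xy)

# example: a voxel at (10, 4, 7) whose colour point is near the white point (x = y = 1/3)
v = Voxel(position=(10, 4, 7), color_xy=(0.333, 0.333))
print(v)
```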
S2, determining the representative area of each three-dimensional pixel point by an area growth method according to the color rendering information and the spatial position information of each three-dimensional pixel point.
In order to divide all three-dimensional pixel points into a plurality of three-dimensional blocks with similar color distribution, the invention determines the representative area of each three-dimensional pixel point by an area growth method according to the color rendering information and the spatial position information of each three-dimensional pixel point.
Specifically, each three-dimensional pixel point is taken as a seed point, three-dimensional pixel points meeting merging conditions in the neighborhood of the seed point are merged into a growth area represented by the seed point, and the three-dimensional pixel points are taken as new seed points to continue merging until no new three-dimensional pixel points meeting merging conditions exist, so that the growth area represented by the seed point is obtained and is taken as a representative area of the three-dimensional pixel points.
The size of the neighborhood is 3×3×3, which includes the 26 three-dimensional pixel points adjacent to the central three-dimensional pixel point. Fig. 2 shows a neighborhood schematic in which the black dot represents the central three-dimensional pixel point and the white dots represent the three-dimensional pixel points adjacent to it.
The merging condition is that the color rendering information of the three-dimensional pixel point lies within the MacAdam ellipse corresponding to the color rendering information of the seed point.
The MacAdam ellipse corresponding to the color rendering information is the MacAdam ellipse of the color point, in the CIE chromaticity diagram, that corresponds to that color rendering information.
The chromaticity diagram and the MacAdam ellipse are well-known techniques and will not be described in detail herein.
It should be noted that a MacAdam ellipse contains color points that the ordinary human eye cannot distinguish from one another, which is a known technique and will not be described herein.
And taking the three-dimensional pixel points positioned at the boundary of the representative region as edge pixel points of the representative region of each three-dimensional pixel point.
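The region-growing step can be sketched as a breadth-first flood fill over the 26-neighborhood. In the sketch below the MacAdam-ellipse test is replaced by a simple chromaticity-distance threshold, which is only a stand-in for the merging condition named in the text; the data layout (a dict from coordinates to CIE xy colours) is likewise an assumption.

```python
from collections import deque
from typing import Dict, Set, Tuple

Coord = Tuple[int, int, int]

# the 26 neighbour offsets of the 3x3x3 neighbourhood (centre excluded)
NEIGHBOURS = [(dx, dy, dz)
              for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)
              if (dx, dy, dz) != (0, 0, 0)]

def colours_mergeable(c_seed, c_other, tol: float = 0.01) -> bool:
    """Stand-in for 'c_other lies inside the MacAdam ellipse of c_seed'."""
    return (c_seed[0] - c_other[0]) ** 2 + (c_seed[1] - c_other[1]) ** 2 <= tol ** 2

def representative_region(seed: Coord, colours: Dict[Coord, Tuple[float, float]]) -> Set[Coord]:
    """Grow the representative region of one seed voxel."""
    region, frontier = {seed}, deque([seed])
    while frontier:
        p = frontier.popleft()
        for dx, dy, dz in NEIGHBOURS:
            q = (p[0] + dx, p[1] + dy, p[2] + dz)
            if q in colours and q not in region and colours_mergeable(colours[p], colours[q]):
                region.add(q)          # merged voxels act as new seed points
                frontier.append(q)
    return region

def edge_pixels(region: Set[Coord]) -> Set[Coord]:
    """Voxels of the region that have at least one 26-neighbour outside the region."""
    return {p for p in region
            if any((p[0] + dx, p[1] + dy, p[2] + dz) not in region
                   for dx, dy, dz in NEIGHBOURS)}
```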
And S3, determining the probability that each three-dimensional pixel belongs to the boundary of the region according to the overlapping condition of the edge pixel points of the representative region of each three-dimensional pixel.
Specifically, the probability that a voxel belongs to a region boundary satisfies the expression:
$P_k = \dfrac{n_k}{\max_i(n_i)}$
where $P_k$ represents the probability that the k-th three-dimensional pixel point belongs to a region boundary; k represents the serial number of the three-dimensional pixel point and takes all integer values in [1, N]; N represents the number of all three-dimensional pixel points; $n_k$ represents the number of three-dimensional pixel points, among all three-dimensional pixel points, whose representative region's edge pixel points include the k-th three-dimensional pixel point; $n_i$ is the corresponding number for the i-th three-dimensional pixel point; i represents the serial number of a three-dimensional pixel point; and max() represents the maximum function.
The more representative regions whose edge pixel points include the k-th three-dimensional pixel point, the more boundaries of representative regions overlap at the k-th three-dimensional pixel point and the better the k-th three-dimensional pixel point separates the representative regions of different three-dimensional pixel points; hence the greater the probability that the k-th three-dimensional pixel point belongs to a region boundary.
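Under the formula reconstructed above, the boundary probabilities reduce to a normalised occurrence count over all edge-pixel sets; a sketch, assuming each three-dimensional pixel point's representative region has already been grown and its edge-pixel set extracted:

```python
from collections import Counter
from typing import Dict, Set, Tuple

Coord = Tuple[int, int, int]

def boundary_probability(edge_sets: Dict[Coord, Set[Coord]]) -> Dict[Coord, float]:
    """edge_sets maps each voxel to the edge pixels of its representative region."""
    counts: Counter = Counter()
    for edges in edge_sets.values():
        counts.update(edges)                     # n_k: in how many edge sets voxel k appears
    n_max = max(counts.values(), default=1)      # max_i(n_i)
    return {voxel: counts.get(voxel, 0) / n_max for voxel in edge_sets}

# boundary pixels are then the voxels whose probability exceeds the first threshold (0.5)
```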
S4, carrying out straight line detection on all boundary pixel points to obtain a plurality of line segments; and determining the preference of each line segment as a dividing line according to the probability that all the three-dimensional pixel points on each line segment belong to the region boundary.
Taking three-dimensional pixel points with probability of belonging to the regional boundary larger than a preset first threshold value as boundary pixel points; and taking the three-dimensional pixel points with the probability of belonging to the regional boundary smaller than or equal to a preset first threshold value as non-boundary pixel points.
The specific value of the first threshold value can be set according to the actual application scene and the requirement, and the first threshold value is set to be 0.5.
And carrying out straight line detection on all boundary pixel points through a Hough transform straight line detection algorithm to obtain a plurality of straight lines.
It should be noted that, since there may be non-boundary pixels in addition to the boundary pixels in the obtained straight line, the boundary pixels in the straight line are not all continuous, and therefore, the present invention divides the straight line into a plurality of line segments according to the non-boundary pixels in the straight line.
Specifically, each straight line is divided into a plurality of adjacent line segments according to boundary pixel points and non-boundary pixel points on the straight line, two endpoints of the line segments are required to be boundary pixel points, the number of continuous non-boundary pixel points on the line segments is smaller than a number threshold, meanwhile, no boundary pixel points exist between the two adjacent line segments, and the number of non-boundary pixel points between the two adjacent line segments is not smaller than the number threshold; the method for acquiring the non-boundary pixel point between the two adjacent line segments comprises the following steps: two endpoints of a first line segment in two adjacent line segments are respectively marked as an endpoint R1 and an endpoint R2, two endpoints of a second line segment are respectively marked as an endpoint W1 and an endpoint W2, the distance between the endpoint R1 and the endpoint W1, the distance between the endpoint R1 and the endpoint W2, the distance between the endpoint R2 and the endpoint W1 and the distance between the endpoint R2 and the endpoint W2 are respectively calculated, and all non-boundary pixel points between two endpoints corresponding to the minimum value in the four distances are used as non-boundary pixel points between the two adjacent line segments.
The specific value of the quantity threshold value can be set according to the actual application scene and the requirement, and the quantity threshold value is set to be 2.
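The splitting rule above can be sketched as a single pass along the ordered voxels of one detected straight line, cutting wherever a run of consecutive non-boundary pixels reaches the number threshold (2 in this embodiment) and trimming so that both endpoints are boundary pixels. The Hough-based 3D line detection itself is assumed to have been done elsewhere, and the ordering of the voxels along the line is taken as given.

```python
from typing import Callable, List, Sequence, Tuple

Coord = Tuple[int, int, int]

def split_into_segments(line_voxels: Sequence[Coord],
                        is_boundary: Callable[[Coord], bool],
                        gap_threshold: int = 2) -> List[List[Coord]]:
    segments: List[List[Coord]] = []
    current: List[Coord] = []
    gap = 0
    for p in line_voxels:
        if is_boundary(p):
            current.append(p)
            gap = 0
        else:
            gap += 1
            if current and gap >= gap_threshold:
                segments.append(current)      # close the segment before the long gap
                current = []
            elif current:
                current.append(p)             # short gap: keep the non-boundary pixel inside
    if current:
        segments.append(current)
    # trim trailing non-boundary pixels so that both endpoints are boundary pixels
    trimmed = []
    for seg in segments:
        while seg and not is_boundary(seg[-1]):
            seg.pop()
        if seg:
            trimmed.append(seg)
    return trimmed
```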
It should be noted that, in the Hough-transform straight-line detection, the three-dimensional pixel points need to be converted from the three-dimensional rectangular coordinate system to a polar coordinate system; in the polar coordinate system, the spatial position information of each three-dimensional pixel point is $(\alpha, \beta, \rho)$, where $\alpha$ represents the horizontal angle, $\beta$ represents the vertical angle, and $\rho$ represents the distance.
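The patent does not spell out the exact polar parameterisation; as a stand-in, a sketch using the ordinary spherical-coordinate conversion for (horizontal angle, vertical angle, distance):

```python
import math
from typing import Tuple

def to_polar(x: float, y: float, z: float) -> Tuple[float, float, float]:
    rho = math.sqrt(x * x + y * y + z * z)     # distance from the origin
    alpha = math.atan2(y, x)                   # horizontal angle
    beta = math.atan2(z, math.hypot(x, y))     # vertical (elevation) angle
    return alpha, beta, rho

print(to_polar(1.0, 1.0, 1.0))   # roughly (0.785, 0.615, 1.732)
```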
The preference of each line segment as a dividing line satisfies the expression:
$Y = \sum_{j=1}^{M} P_j$
where Y represents the preference of the line segment as a dividing line; $P_j$ represents the probability that the j-th boundary pixel point constituting the line segment belongs to a region boundary; j represents the serial number of the boundary pixel point; and M represents the number of boundary pixel points constituting the line segment.
When judging whether a line segment should serve as a dividing line, the more boundary pixel points constitute the line segment and the greater the probability that those boundary pixel points belong to a region boundary, the more likely the line segment is to be an edge of the representative regions of a plurality of three-dimensional pixel points; therefore, the larger the sum of the probabilities that all boundary pixel points constituting the line segment belong to a region boundary, the larger the preference Y of the line segment as a dividing line.
S5, obtaining dividing lines to be determined from all the line segments according to the preference of each line segment as the dividing line, and obtaining a plurality of dividing lines from all the dividing lines to be determined according to the similarity between every two dividing lines to be determined.
And taking a line segment with the preference larger than a preset second threshold value as a dividing line to be determined.
The specific value of the second threshold value can be set according to the actual application scene and the requirement, and the second threshold value is set to 20.
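A sketch of the preference computation and of the screening against the second threshold, under the reconstructed formula above (Y as the sum of the boundary probabilities of the boundary pixel points on the segment):

```python
from typing import Dict, List, Sequence, Tuple

Coord = Tuple[int, int, int]

def segment_preference(segment: Sequence[Coord], prob: Dict[Coord, float],
                       first_threshold: float = 0.5) -> float:
    # Y sums the probabilities of the boundary pixel points constituting the segment
    return sum(prob[p] for p in segment if prob.get(p, 0.0) > first_threshold)

def candidate_dividing_lines(segments: Sequence[Sequence[Coord]],
                             prob: Dict[Coord, float],
                             second_threshold: float = 20.0) -> List[Sequence[Coord]]:
    # keep only segments whose preference exceeds the second threshold (20 here)
    return [seg for seg in segments if segment_preference(seg, prob) > second_threshold]
```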
Calculating the similarity between every two dividing lines to be determined, wherein the similarity between the two dividing lines to be determined meets the expression:
$D = \exp\!\left(-\left(\left|\Delta\alpha\right| + \left|\Delta\beta\right| + \left|\Delta\rho\right| + \dfrac{\left|\Delta x\right|}{L} + \dfrac{\left|\Delta y\right|}{W} + \dfrac{\left|\Delta z\right|}{H}\right)\right)$
where D represents the similarity between the two dividing lines to be determined; $\Delta\alpha$, $\Delta\beta$ and $\Delta\rho$ represent the differences in horizontal angle, vertical angle and distance of the two dividing lines to be determined in the polar coordinate system; $\Delta x$, $\Delta y$ and $\Delta z$ represent the differences in the abscissa, ordinate and vertical coordinate of the midpoints of the two dividing lines to be determined; exp() represents an exponential function with the natural constant as its base; and L, W and H represent, respectively, the length, width and height of the three-dimensional model of the planning and design drawing.
It should be noted that, for two dividing lines to be determined, the smaller the difference in their spatial position information in the polar coordinate system, the more similar they are; that is, the smaller $\Delta\alpha$, $\Delta\beta$ and $\Delta\rho$, the larger the similarity D between them. The purpose of determining this similarity is to screen, from all dividing lines to be determined, those that differ from the others. Because a dividing line to be determined is in essence a line segment, two dividing lines to be determined may lie on the same straight line; they are then located at different positions in the three-dimensional model and both act as dividing lines of regions, yet the difference of their spatial position information in the polar coordinate system is 0, so a similarity D computed from the polar coordinates alone would be large and one of them would be wrongly screened out, making the division result inaccurate. Therefore, the differences of the midpoint coordinates of the two dividing lines to be determined are also used: the smaller $\Delta x$, $\Delta y$ and $\Delta z$, the more similar the two dividing lines to be determined and the larger the similarity D between them.
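A sketch of the similarity measure in the reconstructed form given above; the way a candidate dividing line carries its polar parameters and midpoint, and the exact normalisation, are assumptions:

```python
import math
from typing import NamedTuple, Tuple

class CandidateLine(NamedTuple):
    alpha: float                      # horizontal angle in the polar parameterisation
    beta: float                       # vertical angle
    rho: float                        # distance parameter
    midpoint: Tuple[float, float, float]

def similarity(a: CandidateLine, b: CandidateLine,
               model_size: Tuple[float, float, float]) -> float:
    L, W, H = model_size              # length, width, height of the 3D model
    d_polar = abs(a.alpha - b.alpha) + abs(a.beta - b.beta) + abs(a.rho - b.rho)
    d_mid = (abs(a.midpoint[0] - b.midpoint[0]) / L
             + abs(a.midpoint[1] - b.midpoint[1]) / W
             + abs(a.midpoint[2] - b.midpoint[2]) / H)
    return math.exp(-(d_polar + d_mid))
```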
According to the similarity between every two dividing lines to be determined, a plurality of dividing lines are obtained from all the dividing lines to be determined, and the method comprises the following steps:
For any one of all the dividing lines to be determined, the number of the dividing lines to be determined, of which the similarity with the dividing line to be determined is larger than a preset third threshold value, is used as the comprehensive similarity of the dividing lines to be determined; removing the to-be-determined dividing lines with the largest comprehensive similarity in the two to-be-determined dividing lines with the largest similarity in all the to-be-determined dividing lines, wherein the number of all the to-be-determined dividing lines is A;
For any one of the remaining A-1 dividing lines to be determined, taking the number of the dividing lines to be determined, of which the similarity with the dividing line to be determined is larger than a preset third threshold, as the comprehensive similarity of the dividing lines to be determined; removing the to-be-determined dividing line with the largest comprehensive similarity from the two to-be-determined dividing lines with the largest similarity in the remaining A-1 to-be-determined dividing lines, wherein A represents the number of all to-be-determined dividing lines;
for any one of the remaining A-2 dividing lines to be determined, the number of the dividing lines to be determined, of which the similarity with the dividing line to be determined is larger than a preset third threshold, is used as the comprehensive similarity of the dividing lines to be determined; removing the to-be-determined dividing line with the largest comprehensive similarity from the two to-be-determined dividing lines with the largest similarity in the remaining A-2 to-be-determined dividing lines, wherein A represents the number of all to-be-determined dividing lines;
And so on; the iteration stops when the maximum value of the similarity between every two of the remaining A-a dividing lines to be determined is smaller than a preset fourth threshold, and the remaining A-a dividing lines to be determined are taken as the dividing lines, wherein A represents the number of all dividing lines to be determined and a represents the number of dividing lines to be determined that have been removed.
Specific values of the third threshold and the fourth threshold can be set according to actual application scenes and requirements, and the invention sets the third threshold to 0.8 and the fourth threshold to 0.4.
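The screening loop described in the preceding paragraphs can be sketched as follows, with the thresholds of this embodiment (0.8 and 0.4); `sim` is any pairwise similarity function such as the one above:

```python
from itertools import combinations
from typing import Callable, List, TypeVar

T = TypeVar("T")

def screen_dividing_lines(candidates: List[T],
                          sim: Callable[[T, T], float],
                          third_threshold: float = 0.8,
                          fourth_threshold: float = 0.4) -> List[T]:
    remaining = list(candidates)
    while len(remaining) > 1:
        # most similar pair among the remaining candidates
        p, q = max(combinations(remaining, 2), key=lambda pair: sim(*pair))
        if sim(p, q) < fourth_threshold:
            break                                   # stop the iteration
        def comprehensive(line: T) -> int:
            # number of other remaining candidates whose similarity exceeds the third threshold
            return sum(1 for other in remaining
                       if other is not line and sim(line, other) > third_threshold)
        remaining.remove(p if comprehensive(p) >= comprehensive(q) else q)
    return remaining
```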
According to the overlapping condition among edge pixel points of a representative area of each three-dimensional pixel point, determining the probability that each three-dimensional pixel point belongs to an area boundary, carrying out straight line detection on all boundary pixel points, determining the preference of each line segment as a dividing line according to the obtained probability that all three-dimensional pixel points on each line segment belong to the area boundary, taking the line segment with the preference larger than a preset second threshold value as a dividing line to be determined, obtaining a plurality of dividing lines from all dividing lines to be determined according to the similarity between every two dividing lines to be determined, and dividing all three-dimensional pixel points into a plurality of three-dimensional blocks through the dividing lines; the dividing line is determined based on the similar color distribution conditions of other three-dimensional pixel points around each three-dimensional pixel point, and the three-dimensional pixel points with similar colors can be divided into a three-dimensional block, so that the dividing result of the three-dimensional block is more in line with the actual condition of the planning design drawing, and better user experience is provided.
S6, dividing all three-dimensional pixel points into a plurality of three-dimensional blocks through dividing lines, taking the space position information of the three-dimensional pixel points corresponding to two end points of all dividing lines and the representative color characteristics of all three-dimensional blocks as preloaded rendering information and storing, and taking the color rendering information of all three-dimensional pixel points on a three-dimensional model of a planning design drawing as complete rendering information and storing; when the planning and design diagram is displayed, the three-dimensional model is subjected to primary rendering according to the stored preloaded rendering information, and detail rendering is performed through the stored complete rendering information.
According to the dividing line, all three-dimensional pixel points are divided into a plurality of areas, and each area is used as a three-dimensional block.
And taking the average value of the color rendering information of all the three-dimensional pixel points in each three-dimensional block as the representative color characteristic of each three-dimensional block.
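The patent does not spell out how the dividing lines carve the voxel set into blocks; one plausible sketch (an implementation choice, not stated in the text) marks the dividing-line voxels, labels the 26-connected components of what remains as three-dimensional blocks, and takes each block's mean colour as its representative colour feature:

```python
import numpy as np
from scipy import ndimage

def block_representative_colors(occupancy: np.ndarray,       # bool, True where a model voxel exists
                                dividing_mask: np.ndarray,   # bool, True on dividing-line voxels
                                colors: np.ndarray):          # float, shape (X, Y, Z, 2), CIE xy
    interior = occupancy & ~dividing_mask
    labels, n_blocks = ndimage.label(interior, structure=np.ones((3, 3, 3)))
    reps = {}
    for block_id in range(1, n_blocks + 1):
        mask = labels == block_id
        reps[block_id] = colors[mask].mean(axis=0)   # mean xy colour of the block
    return labels, reps
```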
Taking the space position information of all three-dimensional pixel points on the three-dimensional model of the planning design drawing as basic information and storing the basic information; three-dimensional pixel points corresponding to two endpoints of all dividing lines and representative color features of all three-dimensional blocks are used as preloaded rendering information and stored; and taking the color rendering information of all three-dimensional pixel points on the three-dimensional model of the planning design drawing as complete rendering information and storing the complete rendering information.
When the planning and design drawing is displayed, a three-dimensional model of the planning and design drawing is first constructed from the stored basic information; the three-dimensional model is then subjected to primary rendering according to the stored preloaded rendering information; and when the user performs a point-of-interest (POI) search, detail rendering is performed on the points of interest through the stored complete rendering information.
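A conceptual sketch of this two-stage flow: the split between basic information, preloaded rendering information and complete rendering information follows the text, while build_model, coarse_render and detail_render are hypothetical placeholders for whatever display backend is actually used:

```python
from typing import Dict, Iterable, Tuple

Coord = Tuple[int, int, int]

# hypothetical hooks into the display side; not defined by the patent
def build_model(positions: Iterable[Coord]) -> None: ...
def coarse_render(dividing_endpoints, block_colors) -> None: ...
def detail_render(voxel_colors: Dict[Coord, Tuple[float, float]]) -> None: ...

class PlanRenderer:
    def __init__(self, base_positions, dividing_endpoints, block_colors, full_colors):
        self.base_positions = base_positions                  # basic information
        self.preload = (dividing_endpoints, block_colors)     # preloaded rendering information
        self.full_colors = full_colors                        # complete rendering information

    def show(self) -> None:
        build_model(self.base_positions)                      # construct the 3D model first
        coarse_render(*self.preload)                          # primary rendering, one colour per block

    def on_poi_search(self, poi_voxels: Iterable[Coord]) -> None:
        # detail rendering of the searched points of interest from the complete information
        detail_render({v: self.full_colors[v] for v in poi_voxels})
```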
Dividing all three-dimensional pixel points into a plurality of three-dimensional blocks through dividing lines, taking space position information of the three-dimensional pixel points corresponding to two end points of all dividing lines and representative color features of all three-dimensional blocks as preloading rendering information, storing the preloading rendering information, and performing primary rendering on a three-dimensional model according to the stored preloading rendering information when a planning design diagram is displayed; the method has the advantages that a large number of three-dimensional pixel points needing to be rendered in a complex three-dimensional model can be converted into a small number of three-dimensional blocks, the efficiency of the rendering method of the three-dimensional model is improved, the rendering effect of the three-dimensional model can be more smoothly presented when a user uses the method, and better user experience is provided.
According to the invention, the color rendering information of all three-dimensional pixel points on the three-dimensional model of the planning design drawing is used as complete rendering information and stored, when the user searches the interest points, detail rendering is carried out through the stored complete rendering information, the rendering effect of the three-dimensional model can be more smoothly presented when the user uses the three-dimensional model, and the experience of the user is improved.
In the description of the present specification, the meaning of "a plurality", "a number" or "a plurality" is at least two, for example, two, three or more, etc., unless explicitly defined otherwise.
While various embodiments of the present invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Many modifications, changes, and substitutions will now occur to those skilled in the art without departing from the spirit and scope of the invention. It should be understood that various alternatives to the embodiments of the invention described herein may be employed in practicing the invention.

Claims (6)

1. A three-dimensional model data efficient rendering method based on POI retrieval is characterized by comprising the following steps:
acquiring all three-dimensional pixel points on a three-dimensional model of a planning design drawing, wherein the three-dimensional pixel points have space position information and color rendering information;
determining a representative region of each three-dimensional pixel point by a region growing method according to the color rendering information and the spatial position information of each three-dimensional pixel point;
Determining the probability that each three-dimensional pixel belongs to the region boundary according to the overlapping condition between the edge pixel points of the representative region of each three-dimensional pixel;
Taking three-dimensional pixel points with probability of belonging to the regional boundary larger than a preset first threshold value as boundary pixel points; performing straight line detection on all boundary pixel points to obtain a plurality of line segments; determining the preference of each line segment as a dividing line according to the probability that all three-dimensional pixel points on each line segment belong to the region boundary;
Taking a line segment with the preference larger than a preset second threshold value as a dividing line to be determined; obtaining a plurality of dividing lines from all dividing lines to be determined according to the similarity between every two dividing lines to be determined;
Dividing all three-dimensional pixel points into a plurality of three-dimensional blocks through dividing lines, taking the spatial position information of the three-dimensional pixel points corresponding to two endpoints of all dividing lines and the representative color characteristics of all three-dimensional blocks as preloaded rendering information, storing the preloaded rendering information, and taking the color rendering information of all three-dimensional pixel points on a three-dimensional model of a planning design drawing as complete rendering information, and storing the complete rendering information; when the planning and design diagram is displayed, performing primary rendering on the three-dimensional model according to the stored preloaded rendering information, and performing detail rendering through the stored complete rendering information;
The similarity between the two division lines to be determined satisfies the expression:
$D = \exp\!\left(-\left(\left|\Delta\alpha\right| + \left|\Delta\beta\right| + \left|\Delta\rho\right| + \dfrac{\left|\Delta x\right|}{L} + \dfrac{\left|\Delta y\right|}{W} + \dfrac{\left|\Delta z\right|}{H}\right)\right)$
where D represents the similarity between the two dividing lines to be determined; $\Delta\alpha$, $\Delta\beta$ and $\Delta\rho$ represent the differences in horizontal angle, vertical angle and distance of the two dividing lines to be determined in the polar coordinate system; $\Delta x$, $\Delta y$ and $\Delta z$ represent the differences in the abscissa, ordinate and vertical coordinate of the midpoints of the two dividing lines to be determined; exp() represents an exponential function with the natural constant as its base; and L, W and H represent, respectively, the length, width and height of the three-dimensional model of the planning and design drawing;
according to the similarity between every two dividing lines to be determined, a plurality of dividing lines are obtained from all the dividing lines to be determined, and the method comprises the following steps:
For any one of all the dividing lines to be determined, the number of the dividing lines to be determined, of which the similarity with the dividing line to be determined is larger than a preset third threshold value, is used as the comprehensive similarity of the dividing lines to be determined; removing the to-be-determined dividing lines with the largest comprehensive similarity in the two to-be-determined dividing lines with the largest similarity in all the to-be-determined dividing lines, wherein the number of all the to-be-determined dividing lines is A;
For any one of the remaining A-1 dividing lines to be determined, taking the number of the dividing lines to be determined, of which the similarity with the dividing line to be determined is larger than a preset third threshold, as the comprehensive similarity of the dividing lines to be determined; removing the to-be-determined dividing line with the largest comprehensive similarity from the two to-be-determined dividing lines with the largest similarity in the remaining A-1 to-be-determined dividing lines, wherein A represents the number of all to-be-determined dividing lines;
for any one of the remaining A-2 dividing lines to be determined, the number of the dividing lines to be determined, of which the similarity with the dividing line to be determined is larger than a preset third threshold, is used as the comprehensive similarity of the dividing lines to be determined; removing the to-be-determined dividing line with the largest comprehensive similarity from the two to-be-determined dividing lines with the largest similarity in the remaining A-2 to-be-determined dividing lines, wherein A represents the number of all to-be-determined dividing lines;
And so on; the iteration stops when the maximum value of the similarity between every two of the remaining A-a dividing lines to be determined is smaller than a preset fourth threshold, and the remaining A-a dividing lines to be determined are taken as the dividing lines, wherein A represents the number of all dividing lines to be determined and a represents the number of dividing lines to be determined that have been removed.
2. The efficient rendering method of three-dimensional model data based on POI retrieval as defined in claim 1, wherein the determining the representative region of each three-dimensional pixel point by the region growing method comprises:
Combining the three-dimensional pixel points in the neighborhood of the seed point, which accords with the combination condition, into a growth area represented by the seed point, and continuing to combine the three-dimensional pixel points as new seed points until the new three-dimensional pixel points which accord with the combination condition do not exist, so as to obtain the growth area represented by the seed point, and taking the growth area as a representative area of the three-dimensional pixel points;
the size of the neighborhood is 3×3×3;
the merging condition is that the color rendering information of the three-dimensional pixel point lies within the MacAdam ellipse corresponding to the color rendering information of the seed point.
3. The efficient rendering method of three-dimensional model data based on POI retrieval as defined in claim 1, wherein the probability that each three-dimensional pixel belongs to a region boundary satisfies a relation:
$P_k = \dfrac{n_k}{\max_i(n_i)}$
where $P_k$ represents the probability that the k-th three-dimensional pixel point belongs to a region boundary; k represents the serial number of the three-dimensional pixel point and takes all integer values in [1, N]; N represents the number of all three-dimensional pixel points; $n_k$ represents the number of three-dimensional pixel points, among all three-dimensional pixel points, whose representative region's edge pixel points include the k-th three-dimensional pixel point; $n_i$ is the corresponding number for the i-th three-dimensional pixel point; i represents the serial number of a three-dimensional pixel point; and max() represents the maximum function.
4. The efficient rendering method of three-dimensional model data based on POI retrieval as defined in claim 1, wherein the performing straight line detection on all boundary pixels to obtain a plurality of line segments comprises:
Performing line detection on all boundary pixel points through a Hough transformation line detection algorithm to obtain a plurality of lines;
Dividing each straight line into a plurality of adjacent line segments according to boundary pixel points and non-boundary pixel points on the straight line, wherein two endpoints of the line segments are required to be boundary pixel points, the number of continuous non-boundary pixel points on the line segments is smaller than a number threshold value, meanwhile, no boundary pixel points exist between the two adjacent line segments, and the number of non-boundary pixel points between the two adjacent line segments is not smaller than the number threshold value;
The method for acquiring the non-boundary pixel point between the two adjacent line segments comprises the following steps: two endpoints of a first line segment in two adjacent line segments are respectively marked as an endpoint R1 and an endpoint R2, two endpoints of a second line segment are respectively marked as an endpoint W1 and an endpoint W2, the distance between the endpoint R1 and the endpoint W1, the distance between the endpoint R1 and the endpoint W2, the distance between the endpoint R2 and the endpoint W1 and the distance between the endpoint R2 and the endpoint W2 are respectively calculated, and all non-boundary pixel points between two endpoints corresponding to the minimum value in the four distances are used as non-boundary pixel points between the two adjacent line segments;
The non-boundary pixel points refer to three-dimensional pixel points with probability of belonging to the region boundary smaller than or equal to a preset first threshold value.
5. The efficient rendering method of three-dimensional model data based on POI retrieval as claimed in claim 1, wherein the preference degree of each line segment as a dividing line satisfies the expression:
$Y = \sum_{j=1}^{M} P_j$
where Y represents the preference of the line segment as a dividing line; $P_j$ represents the probability that the j-th boundary pixel point constituting the line segment belongs to a region boundary; j represents the serial number of the boundary pixel point; and M represents the number of boundary pixel points constituting the line segment.
6. The efficient rendering method of three-dimensional model data based on POI retrieval as defined in claim 1, wherein the representative color characteristic of each three-dimensional block refers to a mean value of color rendering information of all three-dimensional pixel points in each three-dimensional block.
CN202410278302.9A 2024-03-12 2024-03-12 Efficient rendering method of three-dimensional model data based on POI retrieval Active CN117876555B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410278302.9A CN117876555B (en) 2024-03-12 2024-03-12 Efficient rendering method of three-dimensional model data based on POI retrieval

Publications (2)

Publication Number Publication Date
CN117876555A CN117876555A (en) 2024-04-12
CN117876555B 2024-05-31

Family

ID=90595278

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410278302.9A Active CN117876555B (en) 2024-03-12 2024-03-12 Efficient rendering method of three-dimensional model data based on POI retrieval

Country Status (1)

Country Link
CN (1) CN117876555B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005104042A1 (en) * 2004-04-20 2005-11-03 The Chinese University Of Hong Kong Block-based fragment filtration with feasible multi-gpu acceleration for real-time volume rendering on standard pc
WO2022095714A1 (en) * 2020-11-09 2022-05-12 中兴通讯股份有限公司 Image rendering processing method and apparatus, storage medium, and electronic device
CN115358919A (en) * 2022-08-17 2022-11-18 北京字跳网络技术有限公司 Image processing method, device, equipment and storage medium
CN116310056A (en) * 2023-03-02 2023-06-23 网易(杭州)网络有限公司 Rendering method, rendering device, equipment and medium for three-dimensional model
CN116681860A (en) * 2023-06-09 2023-09-01 不鸣科技(杭州)有限公司 Feature line rendering method and device, electronic equipment and storage medium
CN116740249A (en) * 2023-08-15 2023-09-12 湖南马栏山视频先进技术研究院有限公司 Distributed three-dimensional scene rendering system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Three-dimensional model retrieval method based on angular structure features of rendered images; Liu Zhi; Pan Xiaobin; Computer Science; 2018-11-15 (No. S2); full text *

Also Published As

Publication number Publication date
CN117876555A (en) 2024-04-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant