CN113409446A - Blind person assisted vision processing method and device - Google Patents


Info

Publication number
CN113409446A
CN113409446A (application CN202110642244.XA)
Authority
CN
China
Prior art keywords
point cloud
cloud data
neighborhood
data set
axis coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110642244.XA
Other languages
Chinese (zh)
Other versions
CN113409446B (en)
Inventor
周润敏
程莉
陈渊朴
柯泽康
林淑贤
伍飞燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Institute of Technology
Original Assignee
Wuhan Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Institute of Technology filed Critical Wuhan Institute of Technology
Priority to CN202110642244.XA priority Critical patent/CN113409446B/en
Publication of CN113409446A publication Critical patent/CN113409446A/en
Application granted granted Critical
Publication of CN113409446B publication Critical patent/CN113409446B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213 - Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F 18/2135 - Feature extraction based on approximation criteria, e.g. principal component analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a blind person assisted vision processing method and device, wherein the method comprises the following steps: obtaining original point cloud data of a plurality of objects to be detected from a preset laser radar, and collecting all the original point cloud data to obtain an original point cloud data set; performing three-dimensional reconstruction on the original point cloud data set to obtain a point cloud picture, wherein the point cloud picture comprises a plurality of obstacle distances; and when any obstacle distance is smaller than a preset distance value, generating a voice prompt instruction and giving a voice prompt according to the instruction. The invention solves the problem of low ranging accuracy caused by weather with poor light penetration or by the night environment, improves the recognition of obstacles and the accuracy of blind guiding, and safeguards the blind user; the design is simple, the scope of application is wide, and the travel of the blind is not restricted.

Description

Blind person assisted vision processing method and device
Technical Field
The invention mainly relates to the technical field of blind person assistance, in particular to a blind person assisted vision processing method and device.
Background
Professor Thylefors, head of the World Health Organization (WHO) programme for the prevention of blindness and deafness, has pointed out that China is the country with the most blind people in the world, about 5 million, accounting for 18% of the world's blind population; the number of Chinese blind already exceeds the population of countries such as Denmark, Finland or Norway. Thylefors further emphasized that about 450,000 people go blind in China every year, which means that a new blind person appears almost every minute. Thylefors states that if the current trend continues unchanged, the number of blind people in China is expected to quadruple by 2020.
Although science and technology are well developed today and the blind guiding options provided for the blind are diverse, the existing options each have shortcomings: the training period required by a guide dog is long and difficult; the blind guiding robot is technically complex, expensive and hard to popularize; and the traditional walking stick has a single function.
Most existing blind guiding devices adopt a binocular ranging method, which provides no active illumination. Consequently, in weather with poor light penetration such as fog and rain, or in the night environment, the blind guiding accuracy deviates because of the light problem, so the conditions under which the blind can go out are limited.
Disclosure of Invention
The invention aims to solve the technical problem of the prior art and provides a blind auxiliary vision processing method and device.
The technical scheme for solving the technical problems is as follows: a blind auxiliary vision processing method comprises the following steps:
obtaining original point cloud data of a plurality of objects to be detected from a preset laser radar, and collecting all the original point cloud data to obtain an original point cloud data set;
performing three-dimensional reconstruction on the original point cloud data set to obtain a point cloud picture, wherein the point cloud picture comprises a plurality of barrier distances;
and when the distance between any one of the obstacles is smaller than a preset distance value, generating a voice prompt instruction, and carrying out voice prompt according to the voice prompt instruction.
Another technical solution of the present invention for solving the above technical problems is as follows: a blind-aided vision processing apparatus comprising:
a data set obtaining module, used for obtaining original point cloud data of a plurality of objects to be detected from a preset laser radar, and collecting all the original point cloud data to obtain an original point cloud data set;
the three-dimensional reconstruction module is used for performing three-dimensional reconstruction on the original point cloud data set to obtain a point cloud picture, and the point cloud picture comprises a plurality of barrier distances;
and the voice prompt module is used for generating a voice prompt instruction when the distance between any one of the obstacles is smaller than a preset distance value, and carrying out voice prompt according to the voice prompt instruction.
Another technical solution of the present invention for solving the above technical problems is as follows: a blind-aided vision processing device comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, and when the processor executes the computer program, the blind-aided vision processing method is realized.
Another technical solution of the present invention for solving the above technical problems is as follows: a computer-readable storage medium, storing a computer program which, when executed by a processor, implements a blind-aided vision processing method as described above.
The invention has the following beneficial effects: a plurality of obstacle distances are obtained through three-dimensional reconstruction of the original point cloud data set, voice prompt instructions are obtained by judging the obstacle distances, and voice prompts are given; this solves the problem of low ranging accuracy caused by weather with poor light penetration or by the night environment, improves the recognition of obstacles and the accuracy of blind guiding, and safeguards the blind user; the design is simple, the scope of application is wide, and the travel of the blind is not restricted.
Drawings
Fig. 1 is a schematic flow chart of a blind-aided vision processing method according to an embodiment of the present invention;
fig. 2 is a block diagram of a blind-aided vision processing apparatus according to an embodiment of the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
Fig. 1 is a schematic flow chart of a blind-person assisted vision processing method according to an embodiment of the present invention.
As shown in fig. 1, a blind auxiliary vision processing method includes the following steps:
obtaining original point cloud data of a plurality of objects to be detected from a preset laser radar, and collecting all the original point cloud data to obtain an original point cloud data set;
performing three-dimensional reconstruction on the original point cloud data set to obtain a point cloud picture, wherein the point cloud picture comprises a plurality of barrier distances;
and when the distance between any one of the obstacles is smaller than a preset distance value, generating a voice prompt instruction, and carrying out voice prompt according to the voice prompt instruction.
It should be understood that when all the obstacle distances are greater than or equal to the preset distance value, no voice prompt is made.
It should be understood that the preset laser radar performs ranging based on the dToF method.
Specifically, obtaining the original point cloud data of the plurality of objects to be detected from the preset laser radar comprises the following steps:
The dToF (direct time-of-flight) method measures the flight time directly. Its basic principle is that a light emitter inside the laser radar continuously emits light pulses towards the object to be measured, a light detector inside the laser radar then receives the light pulses reflected back from the object, and the distance of the object to be measured is calculated by detecting the round-trip flight time of the light pulses.
The data obtained by the light detector form a point cloud, in which the three-dimensional coordinate information of each point is stored. The points are arranged in a matrix. Expressed mathematically, the acquired three-dimensional coordinate data are:

M = { P(m, n) | m = 0, 1, 2, ..., X−1; n = 0, 1, 2, ..., Y−1 },

where M represents the acquired point cloud data point set (i.e., the plurality of raw point cloud data), X is the number of rows of the scan point set, m is the row index value, Y is the number of columns of the scan point set, n is the column index value, and P(m, n) = (x, y, z) denotes the coordinates of a point in the point cloud collection.
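A small illustrative sketch (names and sizes are hypothetical, not taken from the patent) of how the scan matrix M can be held in memory, with each entry P(m, n) storing one (x, y, z) coordinate:

    import numpy as np

    X, Y = 4, 6                  # rows and columns of the scan grid (example values)
    M = np.zeros((X, Y, 3))      # M[m, n] corresponds to P(m, n) = (x, y, z)
    M[1, 2] = (0.5, -0.2, 3.1)   # coordinates of the point in row 1, column 2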
Specifically, when performing scene understanding or virtual reconstruction of the environment in front of the user, a distance threshold (i.e., the preset distance value) is set. The point cloud data (i.e., the original point cloud data) can be regarded as data on independent objects, and after point cloud segmentation the attribute characteristics such as the shape and size of each target can be conveniently determined. As the user travels, when the distance to the obstacle ahead is larger than the threshold (i.e., the preset distance value), no voice reminder is given; if the distance to the obstacle ahead is smaller than the threshold, the user is reminded that there is an obstacle ahead and asked to pay attention to safety, after which a preferentially planned obstacle-avoidance route is broadcast by voice to continue guiding the user forward.
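A minimal sketch of this prompting rule, assuming a hypothetical speak() interface and an example 1.5 m threshold (the patent does not fix a value):

    PRESET_DISTANCE_M = 1.5  # assumed preset distance value

    def maybe_prompt(obstacle_distances, speak):
        # Speak only when some obstacle is nearer than the preset distance.
        near = [d for d in obstacle_distances if d < PRESET_DISTANCE_M]
        if near:
            speak(f"Obstacle ahead at {min(near):.1f} meters, please take care")
        # otherwise all distances >= threshold: no voice prompt is made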
In the above embodiment, a plurality of obstacle distances are obtained through three-dimensional reconstruction of the original point cloud data set, and voice prompt instructions are generated by judging the obstacle distances so that voice prompts are given. This solves the problem of low ranging accuracy caused by weather with poor light penetration or by the night environment, improves the recognition of obstacles and the accuracy of blind guiding, and safeguards the blind user; the design is comparatively simple, the scope of application is wide, and the travel of the blind is not restricted.
Optionally, as an embodiment of the present invention, the three-dimensional reconstruction of the original point cloud data set to obtain the point cloud image includes:
s1: removing outliers from the original point cloud data set to obtain a target point cloud data set;
s2: carrying out data simplification processing on the target point cloud data set to obtain a point cloud data center of gravity;
s3: smoothing the gravity centers of the point cloud data to obtain a plurality of smoothed point cloud data;
s4: performing feature extraction on the plurality of smoothed point cloud data to obtain a plurality of point cloud feature points;
s5: registering the plurality of point cloud feature points to obtain a plurality of registered point cloud data, and collecting all the registered point cloud data to obtain a point cloud data set to be processed;
s6: if the preset iteration number m is not reached, returning to the step S1 until the preset iteration number m is reached, thereby obtaining a plurality of point cloud data sets to be processed, and performing point cloud compensation processing on the point cloud data set to be processed obtained in the mth time according to the plurality of point cloud data sets to be processed obtained in the previous m-1 times to obtain a final point cloud data set;
s7: performing point cloud segmentation on the final point cloud data set by using a Euclidean algorithm to obtain a plurality of final point cloud data subsets;
s8: and performing surface reconstruction on the plurality of final point cloud data subsets to obtain a reconstructed curved surface, and taking the reconstructed curved surface as a point cloud picture.
It should be understood that, in step S3, the point cloud data (i.e., the point cloud data centers of gravity) are smoothed using a moving least squares method, specifically as follows:
A fitting function is established by calculation and a weight function is then selected. By selecting different weight functions, the point cloud data processed by the moving least squares method (i.e., the smoothed point cloud data) become smoother.
The expression of the fitting function is:

f(x) = Σ_{i=1}^{m} a_i(x) · p_i(x) = p^T(x) · a(x),

where a(x) = [a_1(x), a_2(x), ..., a_m(x)]^T are the coefficients to be solved, themselves functions of the coordinate x; p(x) is the basis function, a complete polynomial of order k; and m is the number of terms of the basis function. In two variables the quadratic basis is usually written [1, u, v, u², v², uv]^T, so the fitting function f(x) can be expressed as:

f(x) = a_0(x) + a_1(x)·u + a_2(x)·v + a_3(x)·u² + a_4(x)·v² + a_5(x)·uv.

Consider the weighted discrete norm at the nodes x_i:

J = Σ_{i=1}^{n} w(x − x_i) · [ p^T(x_i)·a(x) − y_i ]²,

where n denotes the number of nodes in the region of influence, f(x) denotes the fitting function, y_i denotes the node value at x_i, y_i = y(x_i), and w(x − x_i) is the weight function of node x_i.
One of the characteristics of the weight function is smoothness; the fitting function inherits the continuity of the weight function, so that if the weight function is m-order continuous, the fitting function is also m-order continuous, and it can smooth the point cloud data (i.e., the point cloud data centers of gravity).
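A one-dimensional moving-least-squares sketch for orientation (the patent's basis [1, u, v, u², v², uv]^T is the two-variable analogue); the Gaussian weight and its width h are assumed choices, not taken from the patent:

    import numpy as np

    def mls_fit(x_eval, xs, ys, h=0.5):
        p = lambda x: np.array([1.0, x, x * x])   # quadratic basis p(x)
        w = np.exp(-((xs - x_eval) / h) ** 2)     # weight w(x - x_i) of each node
        # Weighted normal equations A a = b give the coefficients a(x_eval).
        A = sum(wi * np.outer(p(xi), p(xi)) for wi, xi in zip(w, xs))
        b = sum(wi * yi * p(xi) for wi, xi, yi in zip(w, xs, ys))
        a = np.linalg.solve(A, b)
        return p(x_eval) @ a                      # f(x) = p^T(x) a(x)

    xs = np.linspace(0.0, 1.0, 20)
    ys = np.sin(2 * np.pi * xs) + 0.05 * np.random.randn(20)
    print(mls_fit(0.3, xs, ys))

A smoother weight function yields a smoother fit, which is exactly why the choice of w matters above.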
It should be understood that, in step S6, taking m = 5 as an example: if 5 iterations have not yet been reached, the process returns to step S1 until 5 iterations are reached, thereby obtaining a plurality of point cloud data sets to be processed; the point cloud data set to be processed obtained in the 5th iteration is then subjected to point cloud compensation processing according to the point cloud data sets obtained in the first 4 iterations, so as to obtain the final point cloud data set.
Specifically, step S7 is specifically:
First, the neighboring points of each point cloud datum (i.e., of each final point cloud datum in the final point cloud data set) are searched, and the Euclidean distances between each point and all its neighboring points are compared; the minimum is grouped into one class, the Euclidean distances among the other classes are then iterated, and whether the next scan point belongs to the same class is judged against a preset threshold. If the distance between the current scan point and the previous scan point lies within the preset threshold range, the current point is gathered into the class of the previous scan point; otherwise, the current scan point is set as a new cluster. These steps are repeated until all points are clustered into different classes. When the Euclidean distance between any two classes is smaller than the set threshold, the point cloud segmentation based on Euclidean distance is finished, achieving the purpose of segmenting and distinguishing different obstacles.
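A hedged sketch of such Euclidean clustering, written as region growing on a kd-tree; the threshold eps is an assumed parameter:

    import numpy as np
    from scipy.spatial import cKDTree

    def euclidean_cluster(points: np.ndarray, eps: float) -> np.ndarray:
        tree = cKDTree(points)
        labels = -np.ones(len(points), dtype=int)   # -1 means "not yet visited"
        current = 0
        for seed in range(len(points)):
            if labels[seed] != -1:
                continue
            stack = [seed]
            labels[seed] = current
            while stack:                             # grow the cluster from the seed
                idx = stack.pop()
                for nb in tree.query_ball_point(points[idx], eps):
                    if labels[nb] == -1:
                        labels[nb] = current
                        stack.append(nb)
            current += 1                             # next obstacle / cluster
        return labels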
In the above embodiment, the three-dimensional reconstruction of the original point cloud data set yields the point cloud picture, which solves the problem of low ranging accuracy caused by weather with poor light penetration or by the night environment, improves the recognition of obstacles and the accuracy of blind guiding, and safeguards the blind user; the design is comparatively simple, the scope of application is wide, the travel of the blind is not restricted, and the device is convenient to use with comprehensive functions.
Optionally, as an embodiment of the present invention, the process of step S1 includes:
removing outliers from the original point cloud data set by a first formula to obtain a target point cloud data set, wherein the first formula is:

P̃_c = P_c if M1 lies within the standard range set on the Gaussian distribution of the mean distances, and P̃_c = P_0 otherwise,

wherein,

M1 = (1/k1) · Σ_{P_i ∈ Ω_c} ‖P_i − P_c‖,

wherein,

S² = (1/k1) · Σ_{P_i ∈ Ω_c} ( ‖P_i − P_c‖ − M1 )²,

wherein Ω_c is the neighborhood of the point P_c in the original point cloud data set, P_c is a point of the original point cloud data set, k1 is the number of point cloud data within the neighborhood Ω_c, P_0 has the spatial coordinates (0, 0, 0), P_i is a point of the original point cloud data set within Ω_c, M1 is the mean, S² is the variance, and the set of all retained points P̃_c is the target point cloud data set.
It should be appreciated that statistical filtering of the raw point cloud data set removes outliers.
Understandably, P_0 has the spatial coordinates (0, 0, 0); that is, points whose distances exceed the threshold are uniformly set to the origin and then filtered out.
Specifically, since the density of the collected point cloud data (i.e., the original point cloud data set) is not uniform, outliers, that is, points deviating from the overall point cloud, may be present in the data. By calculating the average distance from each point to its nearby points, a Gaussian distribution is obtained whose shape is determined by the mean M1 and the variance S²; a standard range is set, and points whose average distance falls outside this range are deleted from the point cloud.
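A minimal sketch of this statistical filter, assuming k nearest neighbors and a standard range of alpha standard deviations (both values are illustrative, not the patent's):

    import numpy as np
    from scipy.spatial import cKDTree

    def remove_outliers(points: np.ndarray, k: int = 20, alpha: float = 2.0):
        tree = cKDTree(points)
        d, _ = tree.query(points, k=k + 1)        # column 0 is the point itself
        mean_d = d[:, 1:].mean(axis=1)            # average neighbor distance per point
        m1, s = mean_d.mean(), mean_d.std()       # Gaussian fit: mean M1 and std S
        keep = np.abs(mean_d - m1) <= alpha * s   # inside the standard range
        return points[keep]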
In this embodiment, the target point cloud data set is obtained by removing outliers from the original point cloud data set with the first formula; removing the outliers provides a basis for subsequent data processing, improves the recognition of obstacles and the accuracy of blind guiding, and safeguards the blind user.
Optionally, as an embodiment of the present invention, the target point cloud data set includes a plurality of target point cloud data, and the process of step S2 includes:
mapping each target point cloud datum to the x coordinate axis, the y coordinate axis and the z coordinate axis respectively, so as to obtain the x-axis coordinate, the y-axis coordinate and the z-axis coordinate corresponding to each target point cloud datum;
respectively screening the maximum values of the x-axis coordinate, the y-axis coordinate and the z-axis coordinate corresponding to the target point cloud data, and obtaining the maximum x-axis coordinate, the maximum y-axis coordinate and the maximum z-axis coordinate after screening;
respectively screening the minimum value of the x-axis coordinate, the y-axis coordinate and the z-axis coordinate corresponding to the target point cloud data, and obtaining the minimum x-axis coordinate, the minimum y-axis coordinate and the minimum z-axis coordinate after screening;
calculating the minimum division side length from the maximum x-axis coordinate, the maximum y-axis coordinate, the maximum z-axis coordinate, the minimum x-axis coordinate, the minimum y-axis coordinate and the minimum z-axis coordinate according to a second formula, so as to obtain the minimum division side length, wherein the second formula is:

L = K · ∛( N / ρ ), with ρ = N / [ (x_max − x_min) · (y_max − y_min) · (z_max − z_min) ],

wherein x_max is the maximum x-axis coordinate, x_min is the minimum x-axis coordinate, y_max is the maximum y-axis coordinate, y_min is the minimum y-axis coordinate, z_max is the maximum z-axis coordinate, z_min is the minimum z-axis coordinate, K is a proportionality coefficient, ρ is the point cloud data density, N is the number of target point cloud data, and L is the minimum division side length;
screening all the target point cloud data according to the minimum partition side length, and obtaining a plurality of screened point cloud data after screening;
counting a plurality of screened point cloud data to obtain the number of the screened point cloud data;
calculating, through a third formula, the center of gravity of the target point cloud data and the screened point cloud data, so as to obtain the point cloud data center of gravity, wherein the third formula is:

O = (1/N_index) · Σ_{i=1}^{N_index} p_i,

wherein O is the point cloud data center of gravity, N_index is the number of screened point cloud data, and p_i is a target point cloud datum.
It should be understood that the amount of point cloud data remaining after outlier removal (i.e., the target point cloud data set) is still large, and not all of it is valid data. The invalid data need to be filtered out to reduce the computational load and the storage space; that is, the point cloud data (the target point cloud data set) are simplified. The point cloud data are sampled with a voxel grid method: a minimum bounding box is calculated for the given point cloud, the cloud is divided into three-dimensional voxel grids by the minimum side length, and the point set contained in each voxel is approximately represented by the point closest to the set's center of gravity, so as to achieve the filtering effect.
Specifically, from the point cloud data with outliers removed, the maximum and minimum values of the point cloud boundary are calculated: x_max, x_min, y_max, y_min, z_max, z_min, giving a minimum bounding box parallel to the coordinate axes; the minimum division side length L is determined according to the point cloud density ρ; within each grid cell, the center of gravity of all points (i.e., the point cloud data center of gravity) is calculated, and the point closest to that center of gravity approximates all points of the point set in the cell.
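A sketch of this voxel-grid reduction (the grid side L is assumed given; the loop is written for clarity, not speed):

    import numpy as np

    def voxel_downsample(points: np.ndarray, L: float) -> np.ndarray:
        keys = np.floor((points - points.min(axis=0)) / L).astype(int)
        kept = []
        for key in np.unique(keys, axis=0):
            cell = points[(keys == key).all(axis=1)]
            centroid = cell.mean(axis=0)          # center of gravity O of the cell
            # keep the real point nearest to the center of gravity
            kept.append(cell[np.linalg.norm(cell - centroid, axis=1).argmin()])
        return np.array(kept)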
In the embodiment, the data of the target point cloud data set is simplified to obtain the gravity center of the point cloud data, so that the operation pressure and the storage space are reduced, a foundation is provided for subsequent data processing, the identification degree of the barrier and the blind guiding precision are improved, and the safety of the blind is guaranteed.
Optionally, as an embodiment of the present invention, the process of step S4 includes:
performing neighborhood search on each smoothed point cloud data by using a kd-tree algorithm to obtain a plurality of neighborhood point cloud data corresponding to each smoothed point cloud data, and counting the number of the plurality of neighborhood point cloud data corresponding to each smoothed point cloud data to obtain the number of the neighborhood point cloud data corresponding to each smoothed point cloud data;
calculating, through a fourth formula, the center of gravity of the plurality of neighborhood point cloud data corresponding to each smoothed point cloud datum, so as to obtain the neighborhood point cloud data center of gravity corresponding to each smoothed point cloud datum, wherein the fourth formula is:

O′ = (1/k) · Σ_{j=1}^{k} p_j,

wherein O′ is the neighborhood point cloud data center of gravity, p_j is the j-th neighborhood point cloud datum, and k is the number of neighborhood point cloud data;

calculating, through a fifth formula, the plurality of neighborhood point cloud data corresponding to each smoothed point cloud datum together with the corresponding neighborhood point cloud data center of gravity, so as to obtain the feature vector matrix corresponding to each smoothed point cloud datum, wherein the fifth formula is:

M = (1/k) · Σ_{j=1}^{k} (p_j − O′)(p_j − O′)^T,

wherein M is the feature vector (covariance) matrix, O′ is the neighborhood point cloud data center of gravity, p_j is the j-th neighborhood point cloud datum, and k is the number of neighborhood point cloud data;
respectively calculating the eigenvector and the eigenvalue of each eigenvector matrix to obtain a plurality of eigenvectors corresponding to each smoothed point cloud data and an eigenvalue corresponding to the eigenvector;
performing dimensionality reduction processing on each characteristic value by using a PCA algorithm to obtain a plurality of dimensionality-reduced characteristic values corresponding to the smoothed point cloud data;
respectively carrying out minimum value screening on a plurality of feature values after dimension reduction corresponding to the point cloud data after smoothing, obtaining a minimum feature value corresponding to the point cloud data after screening, and taking a feature vector corresponding to the minimum feature value as a normal vector corresponding to the point cloud data after smoothing;
calculating, through a sixth formula, the average included angle of each normal vector, so as to obtain the average included angle corresponding to each smoothed point cloud datum, wherein the sixth formula is:

θ_i = (1/k) · Σ_{j=1}^{k} θ_ij,

wherein

θ_ij = arccos( (n_i · n_j) / (|n_i| · |n_j|) ),

wherein θ_i is the average included angle, k is the number of neighborhood point cloud data, θ_ij is the angle between the normal vector of the i-th smoothed point cloud datum and the normal vector of the j-th neighborhood point cloud datum, n_i is the normal vector of the i-th smoothed point cloud datum, and n_j is the normal vector of the j-th neighborhood point cloud datum;
and if the included angle average value is greater than or equal to a preset included angle value and the number of the neighborhood point cloud data is less than or equal to the number of the preset point cloud data, taking the smoothed point cloud data corresponding to the included angle average value as point cloud feature points, thereby obtaining a plurality of point cloud feature points.
It should be understood that the point cloud data (i.e., the smoothed point cloud data) is subjected to feature extraction based on normal vectors.
It should be understood that, because the point cloud data (i.e., the smoothed point cloud data) carry no topological structure, their normals and curvature are used to represent the geometric shape of the object surface, which reflects the geometric feature information of the point cloud well; algorithms such as point cloud registration and three-dimensional reconstruction therefore solve for the normal vectors with the PCA method to capture this geometric feature information. Each point in the point cloud is processed with a kd-tree-based neighborhood search to obtain its normal vector.
Specifically, the set of neighboring points {p_j | j = 1, 2, ..., k} of the point p_i (i.e., the smoothed point cloud datum), that is, the plurality of neighborhood point cloud data, is determined first; p_i together with its neighbor set is denoted Nbhd(p_i), and the mean of these points, namely the center of gravity O′ (i.e., the neighborhood point cloud data center of gravity), is calculated;
The local plane P of p_i and its neighborhood is computed using the total least squares method; the plane can be expressed as

P(n, d) = argmin over (n, d) of Σ_{j=1}^{k} (n · p_j − d)²,

where n represents the normal vector of the plane P (and also of the point p_i), and d represents the distance from P to the origin of coordinates.

Finding the normal vector of the fitting plane translates into finding the minimum of:

Σ_{j=1}^{k} ( n · (p_j − O′) )².

The solution of this minimum can be converted into solving, with the PCA method, the eigenvector corresponding to the minimum eigenvalue of the covariance matrix M in the equation, namely the normal vector of p_i, where the matrix M is a symmetric positive semi-definite matrix having non-negative eigenvalues:

M = Σ_{j=1}^{k} (p_j − O′)(p_j − O′)^T.

The eigenvalues λ_i (i = 1, 2, ..., k) and eigenvectors v_j (j = 1, 2, ..., k) of M are calculated, with λ_k > λ_{k−1} > ··· > λ_1. The normal vector is n = v_1: the eigenvector corresponding to the minimum eigenvalue λ_1 is used in place of the normal vector of p_i.
The normal vectors obtained in this way are non-directional (unoriented), and in Poisson reconstruction the consistency of the point cloud normal vectors directly affects the quality of the reconstruction result, so the normal vectors of all points in the point cloud need to be adjusted.
For a certain point p_i in the point cloud (i.e., a smoothed point cloud datum), the degree of variation of its normal vector, called the feature degree, is defined as the arithmetic mean of the angles between its normal vector and those of its neighborhood points p_j (j = 1, 2, ..., k).
The larger a point's feature degree, the more strongly the local area fluctuates. A suitable threshold is chosen and the flatter parts of the point cloud are removed: when θ is smaller than the threshold (i.e., the preset included-angle value) and the number of points adjacent to p_i is greater than a preset number (i.e., the preset point cloud data count), p_i is deleted, so that the feature points are extracted. If a region is approximately planar, the angle between the normal vector of a point p_i and those of its neighbors is very small, and feature extraction using only the normal-vector angles would lose most points in that area. Therefore a suitable radius r is set and a radius-based neighborhood selection method is used to search for the neighbors of p_i within that radius; the local plane of p_i is fitted, the arithmetic mean θ of p_i's normal-vector angles is calculated, and when θ is smaller than a certain angle and the number of points adjacent to p_i is greater than a certain number, p_i is deleted. In this way the point sets of flat areas are not deleted entirely, while the points of areas where the normal-vector angle changes strongly are retained.
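A combined sketch of this S4 pipeline under stated assumptions (k-nearest PCA normals and a radius-r ball for the feature test; all threshold values are examples, not the patent's):

    import numpy as np
    from scipy.spatial import cKDTree

    def extract_features(points, k=15, r=0.1, angle_thresh=0.3, count_thresh=25):
        tree = cKDTree(points)
        _, nbrs = tree.query(points, k=k + 1)
        normals = np.empty_like(points)
        for i, idx in enumerate(nbrs):
            q = points[idx] - points[idx].mean(axis=0)   # center on O'
            M = q.T @ q / len(idx)                       # covariance matrix M
            w, v = np.linalg.eigh(M)                     # eigenvalues ascending
            normals[i] = v[:, 0]                         # min eigenvalue -> normal n_i
        feature_idx = []
        for i in range(len(points)):
            ball = tree.query_ball_point(points[i], r)   # radius-based neighborhood
            cos = np.clip(normals[ball] @ normals[i], -1.0, 1.0)
            theta = np.arccos(np.abs(cos)).mean()        # mean normal angle (unoriented)
            if theta >= angle_thresh and len(ball) <= count_thresh:
                feature_idx.append(i)                    # strongly curved, sparse area
        return points[feature_idx], normals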
In the above embodiment, the plurality of point cloud feature points are obtained by extracting the features of the plurality of smoothed point cloud data, so that the geometric feature information of the point cloud can be clearly reflected, meanwhile, the point set in the flat area is ensured not to be completely deleted, and the points in the area with large change of the included angle of the normal vector are reserved.
Optionally, as an embodiment of the present invention, in step S5, the process of performing registration processing on the plurality of point cloud feature points to obtain a plurality of registered point cloud data includes:
performing neighborhood search on each point cloud feature point by using a kd-tree algorithm to obtain a plurality of neighborhood feature point cloud data corresponding to the point cloud feature points;
taking the plurality of neighborhood feature point cloud data corresponding to each point cloud feature point as a neighborhood feature point cloud data set, and searching the nearest point pair for each neighborhood feature point cloud data set by means of a point-to-plane distance algorithm, so as to obtain the nearest point pair corresponding to each neighborhood feature point cloud data set;
screening the plurality of nearest point pairs according to a preset threshold value, and obtaining a plurality of screened nearest point pairs after screening;
and respectively carrying out translation transformation on each screened nearest point pair by using a rigid body transformation algorithm to obtain a transformed nearest point pair corresponding to each screened nearest point pair, and taking the transformed nearest point pair as point cloud data after registration.
Specifically, a kd-tree is used to perform an efficient spatial search on the point cloud data (i.e., the point cloud feature points) and to establish topological information among the point cloud data, which accelerates the whole registration process. The nearest point pairs are searched using the point-to-plane distance: a nearest point pair is determined by calculating the distance and relation between a point and the tangent plane, thus completing the search for the nearest point pairs between the source point cloud and the target point cloud. Nearest point pairs with large deviation are eliminated by setting a threshold on the Euclidean distance, and the point pairs within the threshold range are retained, so that the error of the nearest point pairs is minimized and closer to reality, thereby ensuring a more accurate registration.
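One correspondence pass of this point-to-plane search, sketched under the assumption that the target normals are already estimated; max_dist is an illustrative rejection threshold:

    import numpy as np
    from scipy.spatial import cKDTree

    def match_point_to_plane(src, tgt, tgt_normals, max_dist=0.05):
        tree = cKDTree(tgt)
        _, nearest = tree.query(src)              # nearest target point per source point
        # signed distance from each source point to the tangent plane at its match
        d = np.einsum('ij,ij->i', src - tgt[nearest], tgt_normals[nearest])
        keep = np.abs(d) < max_dist               # reject pairs with large deviation
        return np.nonzero(keep)[0], nearest[keep]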
In this embodiment, the plurality of point cloud feature points are registered to obtain a plurality of registered point cloud data; the topological information among the point cloud data is established by an efficient search, and the error of the nearest point pairs can be minimized and brought closer to reality, thereby ensuring a more accurate registration.
Optionally, as an embodiment of the present invention, in step S6, performing point cloud compensation processing on the point cloud data set to be processed obtained in the mth time according to the multiple point cloud data sets to be processed obtained in the first m-1 times, and obtaining a final point cloud data set includes:
performing polynomial fitting, with a preset fitting order, on each of the point cloud data sets to be processed obtained in the first m−1 iterations, so as to obtain the point cloud angular velocity and the corresponding point cloud velocity for each of those point cloud data sets;
calculating the average value of all the point cloud angular velocities to obtain the point cloud average angular velocity;
calculating the average value of all the point cloud speeds to obtain the point cloud average speed;
and carrying out interpolation compensation on the point cloud data set to be processed obtained in the mth time according to the point cloud average angular velocity and the point cloud average velocity to obtain a final point cloud data set.
It should be understood that compensation is not applied to the initially acquired point cloud images; it is introduced once enough angular velocity and velocity data have been calculated, by compensating each point cloud image to the laser exposure time of the image's last valid point. The compensation routine first obtains, from the registration routine, the pose of the measured object at the latest measurement time, estimates the angular velocity and velocity of the measured object independently of the point cloud registration routine, compensates every point immediately each time a point cloud image is obtained, and provides the compensated data to the registration process.
Specifically, the R and T matrices calculated by the previous N registrations are obtained: the poses of the measured object from the previous registrations are acquired, and the current R and T matrices (i.e., those corresponding to the point cloud data sets obtained in the first m−1 iterations) are output. The R matrix corresponds to the 3 rotational degrees of freedom and the T matrix to the 3 translational degrees of freedom; for the convenience of angle interpolation, the R matrix is converted into quaternion form.
The N pairs of R, T matrices are fitted by a polynomial of order K (i.e., the preset fitting order), where K < N must hold. If K = 1, taking the six degrees of freedom (α, β, γ, x, y, z) of the measured object at a given moment as the dependent variables and the acquisition end time t of the point cloud image as the independent variable, the 6 degrees of freedom are each estimated linearly to obtain their first-order derivatives with respect to time. The 3 translational degrees of freedom can be processed directly, but the 3 rotational degrees of freedom cannot, because rotation suffers from uneven interpolation, gimbal lock and similar problems; they are therefore first converted into quaternions, a representation convenient for interpolation, and then processed.
Finally, ω1, ω2 and ω3 are combined into a total angular velocity ω, and v_x, v_y and v_z into a total velocity v;
All points in each newly obtained point cloud image are interpolation-compensated in sequence using the average angular velocity ω and the velocity v, specifically as follows:
If K = 1, suppose the current point cloud image contains NUM points, each with a number id[n] (n = 1, 2, ..., NUM), where 0 < id[n] ≤ NUM; id[n] is the actual transmission sequence number of the point. Of these NUM points, all but the last one need to be compensated. The problem is converted into: knowing the pre-compensation three-dimensional coordinate p[n] of a point n and its transmission number id[n], solve for the compensated three-dimensional coordinate p0[n].
The center position s of the current point cloud is estimated from the measured object positions s′, s″, and so on (i.e., the translation matrices T′, T″, and so on) obtained from the registration of the previous point clouds. The time difference delT[n] between each point and the last valid point is determined from the instrument constant. Each point is compensated, according to p[n], delT[n], the estimated center s of the measured object, and the calculated angular velocity ω and velocity v of the measured object, to obtain its actual position p0[n].
Considering that rotation is not pronounced when a person moves, mainly the error caused by translation is compensated.
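Since mainly translation is compensated, the per-point correction reduces to a shift along the estimated velocity. A minimal sketch (the sign convention depends on whether sensor or scene motion is being removed; it is assumed here that sensor motion is removed):

    import numpy as np

    def compensate_translation(points: np.ndarray, delT: np.ndarray, v: np.ndarray):
        # points: (N, 3) raw frame; delT[n]: time from point n to the last valid point;
        # v: (3,) estimated velocity. p0[n] = p[n] + delT[n] * v.
        return points + delT[:, None] * v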
In the above embodiment, polynomial fitting of the preset order is performed on each of the point cloud data sets to be processed obtained in the first m−1 iterations, yielding the point cloud angular velocity and point cloud velocity corresponding to each of those sets; the average of all the point cloud angular velocities gives the point cloud average angular velocity, the average of all the point cloud velocities gives the point cloud average velocity, and the point cloud data set obtained in the m-th iteration is interpolation-compensated with the point cloud average angular velocity and average velocity to obtain the final point cloud data set.
Optionally, as an embodiment of the present invention, in step S8, the performing surface reconstruction on the plurality of final point cloud data subsets to obtain a reconstructed surface includes:
calculating, through a seventh formula, the scalar function of the normal vectors corresponding to the final point cloud data in the plurality of final point cloud data subsets, so as to obtain the scalar function corresponding to each final point cloud datum, wherein the seventh formula is:

Δχ = ∇ · V,

wherein V is the vector field determined by the normal vectors and Δχ is the Laplacian of the scalar function χ;
and carrying out function discretization processing on all scalar functions by using an octree algorithm to obtain a reconstructed curved surface.
It is understood that the surface reconstruction is achieved by Poisson reconstruction.
It should be understood that function discretization means that the whole surface, originally represented by one continuous function, is divided into a number of local planes (discrete plane functions).
Specifically, the surface reconstruction problem is converted into the problem of solving a surface indicator function, and the surface is reconstructed from an isosurface of that function. The indicator function is defined to be 1 inside the surface model and 0 outside it. The gradient of the indicator function equals the inward normal of the model surface function. The objective of Poisson reconstruction is to make the gradient field of the indicator function approach, as closely as possible, the vector field V determined by the normal vector field of the point cloud data.

The relationship between the indicator function and the gradient field is:

∇χ(p) = V(p),

where p represents any point in the point cloud data (i.e., the final point cloud data subsets).

Introducing the divergence operator describes the problem as a Poisson problem:

Δχ = ∇ · ∇χ = ∇ · V.

Surface reconstruction with the Poisson equation therefore means solving for a scalar function χ such that the divergence of its gradient equals the divergence of the vector field V.
Discretizing the Poisson problem in a function space then allows χ to be solved. The hidden function χ can be expressed in the function space spanned by the node functions F_o:

χ(q) = Σ_{o ∈ O} x_o · F_o(q),

The invention uses an octree to construct the local function space F_o. Assuming that a leaf node of the octree consists of eight points p, with the function at each point represented by F(p), the indicator function χ_0 of the octree subspace is obtained from these eight point functions, and the hidden function χ of the surface can thus be expressed in the function space spanned by {F_o} as above, where o ∈ O denotes a leaf node of the octree.

B(x) is a box (square wave) function:

B(x) = 1 for |x| < 0.5, and B(x) = 0 otherwise,

and the basis function F is formed from its n-fold convolution:

F(x, y, z) = B(x)^(*n) · B(y)^(*n) · B(z)^(*n).

The space-describing function F_o is synthesized by translating and scaling the basis function F:

F_o(q) = F( (q − o.c) / o.w ) · 1/(o.w)³,

where o.c denotes the center of the bounding box corresponding to node o, and o.w denotes the width of the bounding box corresponding to node o.
The vector field V can be defined as:

V(q) = Σ_{o ∈ O} F_o(q) · v_o,

where v_o represents the normal vector of the data in octree node o.

A matrix L is defined to represent the Laplacian Δχ of the indicator function in the function space. Assuming the number of leaf nodes is m, an m × m matrix L_{o,o′} is defined, any entry of which is the dot product of the Laplacian of F_o with the function F_{o′}, as in the formula:

L_{o,o′} = ⟨ ΔF_o, F_{o′} ⟩.
setting vector fields of octree node data
Figure BDA0003108404630000175
Divergence of boA vector of m dimensions, as follows:
Figure BDA0003108404630000176
the solution of the poisson equation can be converted into a linear equation solution, such as the formula:
Lφ=bo
the surface of the Poisson reconstruction is assumed to be
Figure BDA0003108404630000177
Using the mean of the indicator function of the sampling points as the standard value r of the isosurface:
Figure BDA0003108404630000178
thus, a surface is reconstructed
Figure BDA00031084046300001710
Expressed as:
Figure BDA0003108404630000179
and finding out voxels intersected with the isosurface by an octree method, finding out intersecting surfaces intersected with the voxels, and connecting the intersecting surfaces to construct a curved surface so as to realize three-dimensional reconstruction.
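For orientation only, an end-to-end stand-in using a library implementation of Poisson reconstruction (Open3D), rather than the derivation above; all parameter values are assumptions:

    import numpy as np
    import open3d as o3d

    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(np.random.rand(2000, 3))  # stand-in cloud
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))
    # The octree depth plays the role of the basis resolution F_o in the text above.
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=8)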
In this embodiment, the scalar functions of the normal vectors corresponding to the final point cloud data in the final point cloud data subsets are calculated through the seventh formula, and all the scalar functions are discretized with the octree algorithm to obtain the reconstructed surface, so that the gradient field of the indicator function approaches the vector field determined by the normal vector field of the point cloud data as closely as possible, which improves the recognition of obstacles and safeguards the blind user.
Optionally, as an embodiment of the present invention, after the process of generating a voice prompt instruction and giving a voice prompt when the distance of any obstacle is smaller than the preset distance value, the method further includes a fall determination step, whose process includes:
obtaining an angular velocity to be judged from a preset acceleration sensor, if the angular velocity to be judged is greater than a preset tumbling threshold value, generating a tumbling instruction, and carrying out voice prompt;
and obtaining positioning information from preset positioning equipment according to the tumbling instruction, and sending the positioning information to a specified terminal.
It should be understood that the designated terminal may be a mobile phone of the blind relative.
Specifically, a MEMS acceleration sensor is used for attitude detection. Its measurement principle is Newton's second law: when an acceleration a acts on the sensor, the inertial force F = ma produces a resistance change r in the piezoresistors on an elastic beam, and the Wheatstone bridge formed by the piezoresistors outputs an electrical signal V proportional to r.
When the angle sensor is initially in the horizontal position, it outputs a certain analog voltage. The sensor accurately detects angles in the range of 0 to 180 degrees; when the angle to the horizontal plane is more than 170 degrees or less than 10 degrees (a body inclination of less than 10 degrees or more than 170 degrees can be determined as a fall), a larger analog voltage is output. In this way it can be judged in advance whether the blind user has met an emergency; when the blind user is determined to have had an accident such as a fall, the user's position is obtained quickly and an alarm is sent at the first moment, by short message, to the mobile phone of the relevant relative.
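A sketch of this fall check with hypothetical device interfaces (read_tilt_deg, get_position and send_sms are assumed names; the 10/170-degree bands come from the text above):

    def check_fall(read_tilt_deg, get_position, send_sms) -> bool:
        angle = read_tilt_deg()              # 0-180 degrees relative to horizontal
        if angle < 10.0 or angle > 170.0:    # inclination band treated as a fall
            position = get_position()        # e.g. from the Beidou/positioning module
            send_sms(f"Fall detected at {position}")
            return True
        return False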
In the above embodiment, the angular velocity to be judged is obtained from the preset acceleration sensor; if it is greater than the preset fall threshold, a fall instruction is generated and a voice prompt is given, positioning information is obtained from the preset positioning device according to the fall instruction, and the positioning information is sent to the designated terminal, so that a fall of the blind user is known at the first moment and the life safety of the blind user is greatly safeguarded.
Optionally, as an embodiment of the present invention, the method performs real-time navigation and positioning through Beidou two-way communication and monitoring, and handles any detected emergency in a timely manner, specifically:
First, the Beidou short message communication all-in-one unit is given a miniaturized design; the unit communicates with external devices to realize data interaction; the position data are then stored on the concentrator using a Flash memory chip and sent; finally, the information is transmitted to the user side.
Preferably, a 32-bit microcontroller based on an ARM Cortex-M3 processor core is selected as the main control chip of the Beidou short message communication all-in-one unit, and a TD3201 type Beidou RDSS single-mode module is selected to realize the short message communication. The serial port circuit adopts a 3.3 V LVTTL level interface to realize two-way communication between the Beidou RDSS module and the STM32F103VBT6 single-chip microcomputer.
It should be understood that communication and transmission between the Beidou short message communication all-in-one unit and the peripherals realize the data interaction, specifically: differential levels are adopted for the communication transmission, so that small signals are easy to identify and external electromagnetic interference is rejected.
Specifically, a Flash memory chip is used for storing and sending the position data on the concentrator, as follows:
and the position information of the blind guiding system is collected by a collector of the user side and is uploaded to the concentrator. The concentrator gathers a plurality of position information, processes and analyzes the position information, the position information is stored through a Flash storage chip, and related acquired data are transmitted to a 5G cloud platform for map visual supervision.
Fig. 2 is a block diagram of a blind-aided vision processing apparatus according to an embodiment of the present invention.
Alternatively, as another embodiment of the present invention, as shown in fig. 2, a blind-person-aided vision processing apparatus includes:
a data set obtaining module, used for obtaining original point cloud data of a plurality of objects to be detected from a preset laser radar, and collecting all the original point cloud data to obtain an original point cloud data set;
the three-dimensional reconstruction module is used for performing three-dimensional reconstruction on the original point cloud data set to obtain a point cloud picture, and the point cloud picture comprises a plurality of barrier distances;
and the voice prompt module is used for generating a voice prompt instruction when the distance between any one of the obstacles is smaller than a preset distance value, and carrying out voice prompt according to the voice prompt instruction.
Alternatively, another embodiment of the present invention provides a blind-person aided vision processing apparatus, including a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the computer program is executed by the processor, the blind-person aided vision processing method as described above is implemented. The device may be a computer or the like.
Alternatively, another embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the blind-aided vision processing method as described above.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention essentially or partially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description covers only preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalent replacements, improvements, and the like made within the spirit and principle of the present invention are intended to be included within its scope.

Claims (10)

1. A blind-aided vision processing method, characterized by comprising the following steps:
obtaining original point cloud data of a plurality of objects to be detected from a preset laser radar, and collecting all the original point cloud data to obtain an original point cloud data set;
performing three-dimensional reconstruction on the original point cloud data set to obtain a point cloud picture, wherein the point cloud picture comprises a plurality of obstacle distances;
and when the distance between any one of the obstacles is smaller than a preset distance value, generating a voice prompt instruction, and carrying out voice prompt according to the voice prompt instruction.
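By way of illustration only, a minimal Python sketch of the distance-threshold check in the last step; the preset distance value and the speak callback are assumptions of the sketch, not taken from the claim:

    # Sketch of the threshold test behind the voice prompt step (names assumed).
    PRESET_DISTANCE_M = 1.5  # assumed preset distance value, in meters

    def check_obstacles(obstacle_distances, speak):
        # Issue one voice prompt per obstacle closer than the preset value.
        for d in obstacle_distances:
            if d < PRESET_DISTANCE_M:
                speak("Obstacle ahead at %.1f meters" % d)

    # Example: check_obstacles([3.2, 1.1], print) prompts once, for 1.1 m.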
2. The blind-aided vision processing method of claim 1, wherein the process of performing three-dimensional reconstruction on the original point cloud data set to obtain a point cloud picture comprises:
S1: removing outliers from the original point cloud data set to obtain a target point cloud data set;
S2: carrying out data simplification processing on the target point cloud data set to obtain point cloud data gravity centers;
S3: smoothing the point cloud data gravity centers to obtain a plurality of smoothed point cloud data;
S4: performing feature extraction on the plurality of smoothed point cloud data to obtain a plurality of point cloud feature points;
S5: registering the plurality of point cloud feature points to obtain a plurality of registered point cloud data, and collecting all the registered point cloud data to obtain a point cloud data set to be processed;
S6: if a preset iteration number m is not reached, returning to step S1 until the preset iteration number m is reached, thereby obtaining a plurality of point cloud data sets to be processed, and performing point cloud compensation processing on the point cloud data set to be processed obtained in the m-th iteration according to the point cloud data sets to be processed obtained in the previous m-1 iterations, to obtain a final point cloud data set;
S7: performing point cloud segmentation on the final point cloud data set by using a Euclidean clustering algorithm to obtain a plurality of final point cloud data subsets;
S8: performing surface reconstruction on the plurality of final point cloud data subsets to obtain a reconstructed curved surface, and taking the reconstructed curved surface as the point cloud picture.
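The control flow of steps S1-S8 can be sketched in Python as below; each step is passed in as a hypothetical callable, since the claim does not fix concrete implementations:

    # Skeleton of the S1-S8 loop; s1..s5, compensate, cluster and
    # reconstruct_surface are caller-supplied callables (assumed names).
    def reconstruct_point_cloud(raw_points, m, s1, s2, s3, s4, s5,
                                compensate, cluster, reconstruct_surface):
        pending = []
        for _ in range(m):                                  # S6: iterate m times
            pending.append(s5(s4(s3(s2(s1(raw_points))))))  # S1-S5 in order
        # S6: compensate the m-th set using the previous m-1 sets
        final = compensate(pending[:m - 1], pending[m - 1])
        return reconstruct_surface(cluster(final))          # S7, then S8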
3. The blind-aided vision processing method as claimed in claim 2, wherein the process of step S1 includes:
removing outliers from the original point cloud data set by a first formula to obtain a target point cloud data set, wherein the first formula is as follows:
$$P_c' = \left\{ P_i \in \Omega_c : \left| \left\| P_i - P_0 \right\| - m_1 \right| \le S \right\}$$
wherein
$$m_1 = \frac{1}{k_1} \sum_{i=1}^{k_1} \left\| P_i - P_0 \right\|$$
wherein
$$S^2 = \frac{1}{k_1} \sum_{i=1}^{k_1} \left( \left\| P_i - P_0 \right\| - m_1 \right)^2$$
wherein $\Omega_c$ is a neighborhood in the original point cloud data set $P_c$, $P_c$ is the original point cloud data set, $k_1$ is the number of point cloud data in the neighborhood $\Omega_c$, $P_0$ has spatial coordinates $(0, 0, 0)$, $P_i$ is the i-th original point cloud datum in $P_c$, $m_1$ is the mean value, $S^2$ is the variance, and $P_c'$ is the target point cloud data set.
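For concreteness, a minimal numpy sketch of a statistical outlier-removal step of this kind; the kd-tree neighborhood search, the neighborhood size k1 and the rejection factor alpha are assumptions of the sketch:

    import numpy as np
    from scipy.spatial import cKDTree

    def remove_outliers(points, k1=8, alpha=1.0):
        # Mean distance of each point to its k1 nearest neighbours.
        tree = cKDTree(points)
        dists, _ = tree.query(points, k=k1 + 1)   # column 0 is the point itself
        mean_d = dists[:, 1:].mean(axis=1)
        m1 = mean_d.mean()                        # m1: global mean distance
        s = mean_d.std()                          # s**2: variance S^2
        # Keep points whose mean distance lies within alpha deviations of m1.
        return points[np.abs(mean_d - m1) <= alpha * s]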
4. The blind-aided vision processing method as claimed in claim 2, wherein said target point cloud data set includes a plurality of target point cloud data, and the process of said step S2 includes:
mapping each target point cloud data to an x coordinate axis, a y coordinate axis and a z coordinate axis respectively, to obtain an x-axis coordinate, a y-axis coordinate and a z-axis coordinate corresponding to each target point cloud data;
respectively screening the maximum values of the x-axis coordinate, the y-axis coordinate and the z-axis coordinate corresponding to the target point cloud data, and obtaining the maximum x-axis coordinate, the maximum y-axis coordinate and the maximum z-axis coordinate after screening;
respectively screening the minimum value of the x-axis coordinate, the y-axis coordinate and the z-axis coordinate corresponding to the target point cloud data, and obtaining the minimum x-axis coordinate, the minimum y-axis coordinate and the minimum z-axis coordinate after screening;
calculating a minimum partition side length from the maximum x-axis coordinate, the maximum y-axis coordinate, the maximum z-axis coordinate, the minimum x-axis coordinate, the minimum y-axis coordinate and the minimum z-axis coordinate according to a second formula, wherein the second formula is as follows:
$$L = K \sqrt[3]{\frac{1}{\rho}}, \qquad \rho = \frac{N}{(x_{\max}-x_{\min})(y_{\max}-y_{\min})(z_{\max}-z_{\min})}$$
wherein $x_{\max}$ is the maximum x-axis coordinate, $x_{\min}$ is the minimum x-axis coordinate, $y_{\max}$ is the maximum y-axis coordinate, $y_{\min}$ is the minimum y-axis coordinate, $z_{\max}$ is the maximum z-axis coordinate, $z_{\min}$ is the minimum z-axis coordinate, $K$ is a proportionality coefficient, $\rho$ is the point cloud data density, $N$ is the number of target point cloud data, and $L$ is the minimum partition side length;
screening all the target point cloud data according to the minimum partition side length, and obtaining a plurality of screened point cloud data after screening;
counting a plurality of screened point cloud data to obtain the number of the screened point cloud data;
calculating the gravity centers of the target point cloud data and the screened point cloud data through a third formula, so as to obtain the point cloud data gravity centers, wherein the third formula is as follows:
$$O = \frac{1}{N_{index}} \sum_{i=1}^{N_{index}} p_i$$
wherein $O$ is the point cloud data gravity center, $N_{index}$ is the number of screened point cloud data, and $p_i$ is the i-th target point cloud datum.
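A sketch of this grid simplification with numpy; the scale factor K is an assumed parameter, and each occupied cell is replaced by its gravity center per the third formula:

    import numpy as np

    def simplify(points, K=1.0):
        mins, maxs = points.min(axis=0), points.max(axis=0)
        N = len(points)
        L = K * np.cbrt(np.prod(maxs - mins) / N)          # minimum partition side length
        cells = np.floor((points - mins) / L).astype(int)  # cell index per point
        buckets = {}
        for cell, p in zip(map(tuple, cells), points):
            buckets.setdefault(cell, []).append(p)
        # O = (1/N_index) * sum(p_i) inside each occupied cell
        return np.array([np.mean(ps, axis=0) for ps in buckets.values()])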
5. The blind-aided vision processing method as claimed in claim 2, wherein the process of step S4 includes:
performing neighborhood search on each smoothed point cloud data by using a kd-tree algorithm to obtain a plurality of neighborhood point cloud data corresponding to each smoothed point cloud data, and counting the number of the plurality of neighborhood point cloud data corresponding to each smoothed point cloud data to obtain the number of the neighborhood point cloud data corresponding to each smoothed point cloud data;
calculating, through a fourth formula, the gravity center of the plurality of neighborhood point cloud data corresponding to each smoothed point cloud data, so as to obtain the neighborhood point cloud data gravity center corresponding to each smoothed point cloud data, wherein the fourth formula is as follows:
$$O' = \frac{1}{k} \sum_{j=1}^{k} p_j$$
wherein $O'$ is the neighborhood point cloud data gravity center, $p_j$ is the j-th neighborhood point cloud datum, and $k$ is the number of neighborhood point cloud data;
calculating, through a fifth formula, a feature vector matrix from the plurality of neighborhood point cloud data corresponding to each smoothed point cloud data and the corresponding neighborhood point cloud data gravity center, so as to obtain the feature vector matrix corresponding to each smoothed point cloud data, wherein the fifth formula is as follows:
$$M = \frac{1}{k} \sum_{j=1}^{k} \left( p_j - O' \right) \left( p_j - O' \right)^{T}$$
wherein $M$ is the feature vector matrix, $O'$ is the neighborhood point cloud data gravity center, $p_j$ is the j-th neighborhood point cloud datum, and $k$ is the number of neighborhood point cloud data;
respectively calculating the feature vectors and feature values of each feature vector matrix to obtain a plurality of feature vectors corresponding to each smoothed point cloud data and the feature value corresponding to each feature vector;
performing dimensionality reduction processing on the feature values by using a PCA algorithm to obtain a plurality of dimension-reduced feature values corresponding to each smoothed point cloud data;
screening the plurality of dimension-reduced feature values corresponding to each smoothed point cloud data for the minimum value, obtaining the minimum feature value corresponding to each smoothed point cloud data after screening, and taking the feature vector corresponding to the minimum feature value as the normal vector corresponding to the smoothed point cloud data;
respectively calculating the average value of the included angle of each normal vector through a sixth formula to obtain the average value of the included angle corresponding to each smoothed point cloud data, wherein the sixth formula is as follows:
$$\theta = \frac{1}{k} \sum_{j=1}^{k} \theta_{ij}$$
wherein
$$\theta_{ij} = \arccos\left( \frac{n_i \cdot n_j}{\left\| n_i \right\| \left\| n_j \right\|} \right)$$
wherein $\theta$ is the average included angle, $k$ is the number of neighborhood point cloud data, $\theta_{ij}$ is the included angle between the normal vector of the i-th smoothed point cloud data and the normal vector of the j-th neighborhood point cloud data, $n_i$ is the normal vector of the i-th smoothed point cloud data, and $n_j$ is the normal vector of the j-th neighborhood point cloud data;
and if the included angle average value is greater than or equal to a preset included angle value and the number of neighborhood point cloud data is less than or equal to a preset point cloud data number, taking the smoothed point cloud data corresponding to the included angle average value as a point cloud feature point, thereby obtaining a plurality of point cloud feature points.
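The PCA normal estimation and the angle test of this claim can be sketched as below; the search radius and both thresholds are assumed values, and the PCA dimensionality-reduction step is folded into the eigen-decomposition of the covariance matrix:

    import numpy as np
    from scipy.spatial import cKDTree

    def feature_points(points, radius=0.2, angle_thresh=0.5, count_thresh=10):
        tree = cKDTree(points)
        nbrs = tree.query_ball_point(points, radius)   # neighbourhood per point
        normals = np.empty_like(points)
        for i, nb in enumerate(nbrs):
            q = points[nb]                             # neighbourhood points p_j
            o = q.mean(axis=0)                         # O': gravity center
            M = (q - o).T @ (q - o) / len(nb)          # fifth-formula matrix
            _, v = np.linalg.eigh(M)                   # eigenvalues ascending
            normals[i] = v[:, 0]                       # minimum-eigenvalue eigenvector
        feats = []
        for i, nb in enumerate(nbrs):
            cosang = np.clip(np.abs(normals[nb] @ normals[i]), 0.0, 1.0)
            theta = np.arccos(cosang).mean()           # average included angle
            if theta >= angle_thresh and len(nb) <= count_thresh:
                feats.append(i)
        return normals, points[feats]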
6. The blind-aided vision processing method according to claim 5, wherein in step S5, the process of registering the plurality of point cloud feature points to obtain a plurality of registered point cloud data includes:
performing neighborhood search on each point cloud feature point by using a kd-tree algorithm to obtain a plurality of neighborhood feature point cloud data corresponding to each point cloud feature point;
taking the plurality of neighborhood feature point cloud data corresponding to each point cloud feature point as a neighborhood feature point cloud data set, and searching a nearest point pair for each neighborhood feature point cloud data set by adopting a point-to-plane distance algorithm, so as to obtain the nearest point pair corresponding to each neighborhood feature point cloud data set;
screening the plurality of nearest point pairs according to a preset threshold value, and obtaining a plurality of screened nearest point pairs after screening;
and respectively carrying out translation transformation on each screened nearest point pair by using a rigid body transformation algorithm to obtain a transformed nearest point pair corresponding to each screened nearest point pair, and taking the transformed nearest point pairs as the registered point cloud data.
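A point-to-point stand-in for this registration step (the claim itself uses a point-to-plane distance); the rejection threshold is an assumed value and the rigid fit is the standard SVD (Kabsch) solution:

    import numpy as np
    from scipy.spatial import cKDTree

    def rigid_align(src, dst, reject=0.5):
        d, j = cKDTree(dst).query(src)          # nearest point pair per source point
        keep = d < reject                       # preset-threshold screening
        p, q = src[keep], dst[j[keep]]
        cp, cq = p.mean(axis=0), q.mean(axis=0)
        U, _, Vt = np.linalg.svd((p - cp).T @ (q - cq))  # SVD rigid-body fit
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cq - R @ cp
        return src @ R.T + t                    # transformed (registered) points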
7. The blind-aided vision processing method of claim 6, wherein in step S6, the process of performing point cloud compensation processing on the point cloud data set to be processed obtained in the m-th iteration according to the point cloud data sets to be processed obtained in the previous m-1 iterations, to obtain the final point cloud data set, includes:
respectively performing polynomial fitting, according to a preset fitting order, on each of the point cloud data sets to be processed obtained in the previous m-1 iterations, to obtain the point cloud angular velocity and the point cloud velocity corresponding to each of those point cloud data sets;
calculating the average value of all the point cloud angular velocities to obtain the point cloud average angular velocity;
calculating the average value of all the point cloud speeds to obtain the point cloud average speed;
and carrying out interpolation compensation on the point cloud data set to be processed obtained in the m-th iteration according to the point cloud average angular velocity and the point cloud average velocity, to obtain a final point cloud data set.
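A sketch of the velocity estimation behind this compensation; the trajectory representation (per-pass timestamps and pose values) and the fitting order are assumptions of the sketch:

    import numpy as np

    def average_rate(histories, order=2):
        # histories: list of (timestamps, values) pairs, one per earlier pass.
        rates = []
        for t, x in histories:
            coeff = np.polyfit(t, x, order)      # polynomial fitting of the trajectory
            rates.append(np.polyval(np.polyder(coeff), t).mean())  # mean derivative
        return float(np.mean(rates))             # average (angular) velocity over m-1 passes

The m-th data set would then be shifted back along this average rate (the interpolation compensation), e.g. by subtracting rate * dt per point.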
8. The blind-aided vision processing method of claim 7, wherein in step S8, the process of performing surface reconstruction on the plurality of final point cloud data subsets to obtain a reconstructed curved surface includes:
calculating, through a seventh formula, a scalar function from the normal vectors corresponding to the final point cloud data in each final point cloud data subset, so as to obtain the scalar function corresponding to the final point cloud data, wherein the seventh formula is as follows:
$$\Delta \chi = \nabla \cdot \vec{V}$$
wherein $\vec{V}$ is the normal vector field, $\chi$ is the scalar function, and $\Delta$ denotes the Laplacian operator;
and carrying out function discretization processing on all scalar functions by using an octree algorithm to obtain a reconstructed curved surface.
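Reading the seventh formula as a Poisson-style equation Δχ = ∇·V, its right-hand side on a uniform grid (a stand-in for the octree discretization in the claim) can be sketched as:

    import numpy as np

    def divergence(V, spacing=1.0):
        # V has shape (3, nx, ny, nz): one normal-field component per axis.
        return sum(np.gradient(V[i], spacing, axis=i) for i in range(3))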
9. The blind-aided vision processing method according to claim 1, wherein, after the process of generating a voice prompt instruction and performing a voice prompt according to the voice prompt instruction when the distance of any one of the obstacles is smaller than the preset distance value, the method further comprises a fall determination step, the fall determination process comprising:
obtaining an angular velocity to be judged from a preset acceleration sensor, and, if the angular velocity to be judged is greater than a preset fall threshold value, generating a fall instruction and performing a voice prompt;
and obtaining positioning information from a preset positioning device according to the fall instruction, and sending the positioning information to a designated terminal.
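A minimal sketch of the fall test; the threshold value, its units, and all callbacks are assumptions of the sketch:

    FALL_THRESHOLD = 3.0   # assumed preset fall threshold, rad/s

    def check_fall(angular_velocity, speak, get_position, send_location):
        if angular_velocity > FALL_THRESHOLD:   # fall determination
            speak("Fall detected")              # voice prompt
            send_location(get_position())       # positioning info to the terminal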
10. A blind-aided vision processing apparatus, comprising:
a data set obtaining module, configured to obtain original point cloud data of a plurality of objects to be detected from a preset laser radar, and to collect all the original point cloud data to obtain an original point cloud data set;
a three-dimensional reconstruction module, configured to perform three-dimensional reconstruction on the original point cloud data set to obtain a point cloud picture, wherein the point cloud picture comprises a plurality of obstacle distances;
and a voice prompt module, configured to generate a voice prompt instruction when the distance of any one of the obstacles is smaller than a preset distance value, and to perform a voice prompt according to the voice prompt instruction.
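The three modules of claim 10 might be wired together as below; every name in this sketch is illustrative, not part of the claim:

    class BlindAidDevice:
        def __init__(self, lidar, reconstruct, speak, preset_distance=1.5):
            self.lidar = lidar                  # feeds the data set obtaining module
            self.reconstruct = reconstruct      # three-dimensional reconstruction module
            self.speak = speak                  # voice prompt module output
            self.preset_distance = preset_distance

        def step(self):
            cloud = self.lidar()                       # original point cloud data set
            distances = self.reconstruct(cloud)        # obstacle distances from the map
            if any(d < self.preset_distance for d in distances):
                self.speak("Obstacle ahead")           # voice prompt instruction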
CN202110642244.XA 2021-06-09 2021-06-09 Blind person assisted vision processing method and device Active CN113409446B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110642244.XA CN113409446B (en) 2021-06-09 2021-06-09 Blind person assisted vision processing method and device

Publications (2)

Publication Number Publication Date
CN113409446A (en) 2021-09-17
CN113409446B CN113409446B (en) 2022-07-29

Family

ID=77683279

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110642244.XA Active CN113409446B (en) 2021-06-09 2021-06-09 Blind person assisted vision processing method and device

Country Status (1)

Country Link
CN (1) CN113409446B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200209871A1 (en) * 2017-09-12 2020-07-02 Huawei Technologies Co., Ltd. Method and Apparatus for Analyzing Driving Risk and Sending Risk Data
CN109993060A (en) * 2019-03-01 2019-07-09 长安大学 The vehicle omnidirectional obstacle detection method of depth camera
CN112402197A (en) * 2020-11-19 2021-02-26 武汉工程大学 Intelligent obstacle detection method and device based on mobile terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
NI Zhipeng: "Design and Implementation of an IMU- and Vision-Based Anti-collision System Algorithm for the Blind", China Master's Theses Full-text Database, Information Science and Technology, No. 3, 15 March 2021 (2021-03-15), pages 138-325 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113917452A (en) * 2021-09-30 2022-01-11 北京理工大学 Blind road detection device and method combining vision and radar
CN113917452B (en) * 2021-09-30 2022-05-24 北京理工大学 Blind road detection device and method combining vision and radar

Also Published As

Publication number Publication date
CN113409446B (en) 2022-07-29

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant