CN111238470B - Intelligent glasses road planning method, medium and equipment under artificial intelligence big data - Google Patents

Intelligent glasses road planning method, medium and equipment under artificial intelligence big data

Info

Publication number
CN111238470B
CN111238470B (application CN202010023509.3A)
Authority
CN
China
Prior art keywords
intelligent glasses
glasses
intelligent
matrix
value
Prior art date
Legal status
Active
Application number
CN202010023509.3A
Other languages
Chinese (zh)
Other versions
CN111238470A
Inventor
石帅
Current Assignee
Zhongyun Guangdong Information Technology Co., Ltd.
Original Assignee
Zhongyun Guangdong Information Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Zhongyun Guangdong Information Technology Co., Ltd.
Priority to CN202010023509.3A
Publication of CN111238470A
Application granted
Publication of CN111238470B
Legal status: Active

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C 21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C 21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C 21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C 21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C 21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C 21/18 Stabilised platforms, e.g. by gyroscope
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention belongs to the technical field of intelligent glasses algorithms, and specifically relates to a road planning method, medium and equipment for intelligent glasses under artificial intelligence big data, based on a particlized fractional-order approach. The invention comprises the following steps: (1) the camera system of the intelligent glasses collects images, denoises them, and then plans an optimal path through the path edge points; (2) the intelligent glasses perform a travel operation to guide the intelligent glasses user along the planned path channel; (3) the attitude of the intelligent glasses is monitored during the journey. The invention builds on a fractional-order intelligent glasses control strategy to realize the design and stability analysis of an intelligent glasses control system, and optimizes the parameters of the control law so that manual parameter selection is no longer needed. This simplifies road planning for the intelligent glasses group and realizes stable use of the intelligent glasses.

Description

Intelligent glasses road planning method, medium and equipment under artificial intelligence big data
Technical Field
The invention belongs to the technical field of intelligent glasses algorithms, and specifically relates to a road planning method, medium and equipment for intelligent glasses under artificial intelligence big data, based on a particlized fractional-order approach.
Background
Intelligent glasses are among the most promising wearable smart devices proposed in recent years: they free both hands and provide a variety of functions that make daily life more convenient. Once a technology has been proposed, what matters more is applying it in practice and turning it into results. As a representative wearable electronic device in the visual field, intelligent glasses have a large application market and great potential in artificial intelligence research. At present, however, no mature product in China applies the related technologies of the artificial intelligence field, and no mature guiding literature exists either.
In daily life, a user can locate the glasses through a mobile phone application and can ring the mobile phone through buttons on the glasses, so that the phone can be found. In summary, the design and research of intelligent glasses meet people's needs in real life and have a solid theoretical basis, so they carry great research significance. Intelligent glasses have shown great application value and development potential in many fields. With the diversification of human-computer interaction technology, face recognition has become an important embodiment of human-computer interaction and has been widely applied in access control systems, monitoring systems and identity verification. Applying it to wearable electronic equipment such as intelligent glasses lets users operate smart devices more conveniently; however, the hardware performance of such devices is low, and under limited hardware a traditional face recognition algorithm either cannot run quickly or cannot realize the preset functions at all. It is therefore important to study path planning algorithms with low network complexity and high processing speed. The invention is suitable for the path recognition technology of intelligent glasses and can meet the requirements of various application occasions.
Disclosure of Invention
The invention aims to provide a road planning method for intelligent glasses under artificial intelligence big data that has higher control precision and stronger obstacle avoidance performance.
The invention also aims to provide a road planning medium for intelligent glasses under artificial intelligence big data.
The invention further aims to provide road planning equipment for intelligent glasses under artificial intelligence big data.
The purpose of the invention is realized as follows:
A road planning method for intelligent glasses under artificial intelligence big data: the intelligent glasses comprise a glasses assembly and a frame, wherein the glasses assembly comprises two lenses, and the frame comprises a glasses rim and two intelligent glasses legs respectively connected to the two ends of the rim; each intelligent glasses leg comprises a leg body provided with a containing cavity, a power module in the containing cavity, and a camera system, a distance sensor, a height sensor, a processor, a voice recognition module, a voice playing module and a wireless communication module, which are electrically connected with the power module and arranged in either containing cavity; the camera system, the distance sensor, the height sensor, the voice recognition module and the voice playing module are all signal-connected to the processor, and can be connected wirelessly with artificial intelligence interaction equipment and/or with the cloud control system through the wireless communication module; the power module is a rechargeable battery, the voice recognition module is a microphone, and the voice playing module is a receiver; path planning and travel control are carried out on the intelligent glasses group by the intelligent glasses cloud control system and the control module on the intelligent glasses, and the planning method comprises the following steps:
(1) The camera system of the intelligent glasses collects images, denoises them, and then plans an optimal path through the path edge points;
(2) The intelligent glasses perform a travel operation to guide the intelligent glasses user along the planned path channel;
(3) The attitude of the intelligent glasses is monitored during the journey;
(4) Steps (2)-(3) are repeated until the intelligent glasses user stops using them;
the step (1) comprises the following steps:
(1.1) image acquisition and denoising:
the intelligent glasses acquire three-dimensional images of the environment of the path through the camera system, and Gaussian filtering is applied to the acquired images;
(1.2) edge detection of the road in the image:
a Sobel operator is applied to the gray values of the neighboring points around each pixel in the image, and a threshold τ is selected from the brightness of the acquired image, with
So(a,b) = √(f_x² + f_y²),
f_x = (f(a-1,b-1) + 2f(a-1,b) + f(a-1,b+1)) − (f(a+1,b-1) + 2f(a+1,b) + f(a+1,b+1)),
f_y = (f(a-1,b-1) + 2f(a,b-1) + f(a+1,b-1)) − (f(a-1,b+1) + 2f(a,b+1) + f(a+1,b+1));
when So(a,b) > τ, the point (a,b) in the image is an edge point, where a, b are the coordinates of the point and f is the gray value;
(1.3) path planning according to edge points:
the cloud control system of the intelligent glasses collects the edge point information and constructs a path channel through the edge points; with the included angle between the camera system and distance sensor of the intelligent glasses and the horizontal direction denoted θ_a, the data returned by the distance sensor denoted l_a, and the measured value of the height sensor of the intelligent glasses denoted z′, the coordinate in the horizontal reference frame is
D_n0 = (x_a, y_b, z′) = (l_a·cosθ_a, l_a·sinθ_a, z′);
the planned path is determined according to the iterative evaluation value of the nodes in the path channel; the distance between two adjacent nodes is l_n0 = D_n0 − D_(n0−1), where n0 is the index of the current node, and the cost function G(n0) from the starting point I_0 to the current node I_n0 is given by an equation shown as an image in the original;
the two neighbor nodes with the largest G(n0) near each node are scanned and connected, forming the planned path channel.
The step (2) comprises:
(2.1) initializing an intelligent glasses group of size N, including a random velocity and a random position of the region of each pair of intelligent glasses, and determining the attitude matrix, the attitude matrix angular rate and the attitude angle of each pair of intelligent glasses;
(2.2) evaluating the fitness value of the intelligent glasses in each group;
(2.3) evaluating the best position in the current search space of the intelligent glasses in each group;
(2.4) updating the inertia weight of the intelligent glasses, and updating the velocity and position of each pair of intelligent glasses;
(2.5) re-executing step (2.2) while the intelligent glasses are in the working state; the method ends when the work of the intelligent glasses is finished;
the determining step of the gesture matrix of the intelligent glasses comprises the following steps:
(2.1.1) assuming the Earth coordinate System as the earth System, the geographic coordinate SystemThe carrier coordinate system is carrier system, the navigation coordinate system is pilot system, and the axes of the coordinate systems are respectively X in turn g 、Y g 、Z g ;X c 、Y c 、Z c ;X p 、Y p 、Z p
(2.1.2) calculating a directional cosine matrix of a geographic coordinate system of the intelligent glasses;
Figure BDA0002361629800000032
wherein the longitude of the intelligent glasses is alpha e And a latitude of delta e ;α e The range of the values of (a) is (-180 DEG, 180 DEG); delta e The range of the values of (a) is (-90 DEG, 90 DEG);
(2.1.3) calculating a direction cosine matrix of a carrier coordinate system of the intelligent glasses;
Figure BDA0002361629800000033
wherein gamma is c For roll angle of the carrier coordinate system relative to the geographical coordinate system, i.e. X c Relative to X g Is included in the plane of the first part;
wherein θ is c For pitch angle of the carrier coordinate system relative to the geographical coordinate system, i.e. Y c With respect to Y g Is included in the plane of the first part;
wherein the method comprises the steps of
Figure BDA0002361629800000035
For heading angle of the carrier coordinate system relative to the geographical coordinate system, i.e. Z c Relative to Z g Is included in the plane of the first part;
(2.1.4) calculating an attitude matrix of a navigation coordinate system of the intelligent glasses;
Figure BDA0002361629800000034
(2.1.5) calculating the attitude matrix angular rate of the navigation coordinate system of the intelligent glasses;
Figure BDA0002361629800000041
ω e omega is the projection of the earth angular velocity in the navigation coordinate system a Is a measured value of the intelligent glasses gyroscope.
The fitness value of the intelligent glasses in each group is evaluated as
fitness(x) = Σ_(j=1..m) α_j·f_j(x),
where f_j(x) is the branching function on the j-th branch of the target path of the intelligent glasses, α_j is the weight corresponding to the j-th branch, m is the total number of branches of the target path, and x is the estimated intelligent glasses label;
f_j(x) = (β_j ∩ γ_j)(β_j ∪ γ_j)^(−2),
where β_j is the set of intelligent glasses whose executed target path passes through the j-th branch, and γ_j is the set of intelligent glasses inputs that pass through the j-th branch after the tested program is executed.
The evaluation of the best position in the current search space of the intelligent glasses in each group comprises:
(2.3.1) comparing the fitness value of each pair of intelligent glasses with its estimated current local best known position pbest_i,n; if the fitness value of the intelligent glasses is smaller, it replaces the current local best known position pbest_i,n;
(2.3.2) comparing the current local best known position pbest of each pair of intelligent glasses with the best position gbest in the current search space of the intelligent glasses; if the current local best known position pbest of the intelligent glasses is smaller, it replaces the best position gbest in the current search space.
The estimated current local best known position pbest_i,n of the intelligent glasses involves: N pairs of intelligent glasses, each located in an S-dimensional search space with S ≤ 3; the position of the i-th pair of intelligent glasses in the n-th iteration of the method is
x_i,n = (x_i,n^1, x_i,n^2, …, x_i,n^S),
the corresponding velocity of the intelligent glasses is
v_i,n = (v_i,n^1, v_i,n^2, …, v_i,n^S),
and the local best known position is pbest_i,n = (pbest_i,n^1, …, pbest_i,n^S).
the updating of the speed and the position of each intelligent glasses comprises;
(2.4.1) update G n The inertial weight is:
Figure BDA0002361629800000046
G nmax g is the maximum value of the inertial weight of the intelligent glasses nmin As a minimum value of the inertial weight,
N max is the maximum value of the iteration times;
(2.4.2) update the speed of the smart glasses:
v i,n =p s [v i,n-1 +G n ·(pbest-x i,n-1 )+G n ·(gbest-x i,n-1 )];
p s in order for the contraction factor to be a factor,
Figure BDA0002361629800000051
k is the upper limit speed of network searching;
(2.4.3) updating the position of the smart glasses;
x i,n =x i,n-1 +v i,n
said alpha j Indicating the weight corresponding to the jth branch,
Figure BDA0002361629800000052
γ n representing a value of the number of path consecutive matches greater than 1 in the iterative process.
The step (3) comprises the following steps:
(3.1) extracting, through the cloud data system, the point cloud data M_W of the environment reference target of the intelligent glasses at the previous time point; taking a set of points m_i ∈ M_W in the point cloud data M_W, and simultaneously extracting the real set m̄_i corresponding to m_i;
(3.2) extracting, with the camera system, the point cloud data Q_W of the environment reference target of the intelligent glasses at the current time point; taking a set of points q_i ∈ Q_W in the point cloud data Q_W such that |q_i − m_i| attains a minimum;
(3.3) calculating the rotation matrix R_W and the translation matrix T_W in the camera system;
(3.4) obtaining, through the rotation matrix R_W and the translation matrix T_W, the set m′_i, i.e. the real set m̄_i after its position and attitude have changed:
m′_i = R_W·m̄_i + T_W;
(3.5) calculating the average distance d̄ between m′_i and q_i:
d̄ = (1/N)·Σ_i ‖m′_i − q_i‖;
(3.6) if the average distance d̄ is less than or equal to the pose early-warning threshold ε, the target pose is normal; otherwise the communication target is adjusted.
The calculation of the rotation matrix R_W and the translation matrix T_W in the camera system comprises the following steps:
(3.3.1) calculating the centroids of the two sets,
μ_m = (1/N_m)·Σ_i m_i and μ_q = (1/N_q)·Σ_i q_i,
where N_m is the number of points in the set m_i and N_q is the number of points in the set q_i;
(3.3.2) calculating the covariance matrix of the two point sets,
Σ_mq = (1/N_m)·Σ_i (m_i − μ_m)(q_i − μ_q)^T,
where T is the transpose symbol;
(3.3.3) calculating the antisymmetric matrix
A = Σ_mq − Σ_mq^T;
(3.3.4) constructing the 4×4 matrix
Q = [ tr(Σ_mq), Δ^T ; Δ, Σ_mq + Σ_mq^T − tr(Σ_mq)·I_3×3 ],
where I_3×3 is the 3×3 identity matrix, tr(Σ_mq) is the trace of the matrix Σ_mq, and Δ = [A_23, A_31, A_12]^T is the column vector formed from the components of the antisymmetric matrix; the eigenvector corresponding to the maximum eigenvalue of Q is r = [r_0, r_1, r_2, r_3];
(3.3.5) the rotation matrix R_W is then
R_W = [ r_0²+r_1²−r_2²−r_3², 2(r_1r_2−r_0r_3), 2(r_1r_3+r_0r_2) ; 2(r_1r_2+r_0r_3), r_0²−r_1²+r_2²−r_3², 2(r_2r_3−r_0r_1) ; 2(r_1r_3−r_0r_2), 2(r_2r_3+r_0r_1), r_0²−r_1²−r_2²+r_3² ];
(3.3.6) the translation matrix T_W is:
T_W = μ_q − R_W·μ_m.
the medium adopts the road planning method of the intelligent glasses under the artificial intelligent big data to carry out road planning.
The road planning equipment of the intelligent glasses under the artificial intelligent big data adopts the road planning medium of the intelligent glasses under the artificial intelligent big data to carry out intelligent glasses road planning.
The beneficial effects of the invention are as follows:
The invention builds on a fractional-order intelligent glasses control strategy to realize the design and stability analysis of an intelligent glasses control system, and optimizes the parameters of the control law so that manual parameter selection is no longer needed. This simplifies road planning for the intelligent glasses group and realizes stable use of the intelligent glasses.
Drawings
FIG. 1 is a flow chart of the method of the invention.
FIG. 2 is a flow chart of the travel control method of the invention.
FIG. 3 is a schematic diagram of a preset obstacle image.
FIG. 4 is a feature extraction schematic.
FIG. 5 is an edge extraction schematic.
FIG. 6 is a schematic diagram of a path channel.
FIG. 7 is intelligent glasses travel test chart 1.
FIG. 8 is intelligent glasses travel test chart 2.
FIG. 9 is a schematic diagram of a simulation of the attitude monitoring method.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
The invention relates to a road planning method for intelligent glasses under artificial intelligence big data. The intelligent glasses comprise a glasses assembly and a frame; the glasses assembly comprises two lenses, and the frame comprises a glasses rim and two intelligent glasses legs respectively connected to the two ends of the rim. Each intelligent glasses leg comprises a leg body provided with a containing cavity; a power module in the containing cavity; and a camera system, a distance sensor, a height sensor, a processor, a voice recognition module, a voice playing module and a wireless communication module, which are electrically connected with the power module and arranged in either containing cavity. The camera system, distance sensor, height sensor, voice recognition module and voice playing module are all signal-connected to the processor, and can be connected wirelessly to artificial intelligence interaction equipment and/or to the cloud control system through the wireless communication module. The power module is a rechargeable battery; the voice recognition module is a microphone and the voice playing module is a receiver. The invention performs route planning and travel control on the intelligent glasses through the intelligent glasses cloud control system and the control module on the glasses: signals are collected by the sensors of the intelligent glasses and transmitted to the cloud control system and the control module for processing; the cloud control system transmits the processed signals to the control module, and the control module finally executes the control method uniformly. The cloud control system of the invention is a cloud (control) system currently available on the market, which can be queried and used through the network; its main purpose here is to increase the operation speed while reducing the computational load of the control module. The intelligent glasses cloud control system and the control module on the intelligent glasses perform path planning and travel control on the intelligent glasses group, as shown in Fig. 1, through the following steps:
(1) The camera system of the intelligent glasses collects images, denoises them, and then plans an optimal path through the path edge points;
(2) The intelligent glasses perform a travel operation to guide the intelligent glasses user along the planned path channel;
(3) The attitude of the intelligent glasses is monitored during the journey;
(4) Steps (2)-(3) are repeated until the intelligent glasses user stops using them;
the step (1) comprises the following steps:
(1.1) image acquisition denoising:
the intelligent glasses acquire three-dimensional images of the environment where the path is located through the camera system, and Gaussian filtering processing is carried out on the acquired images;
(1.2) edge detection of the road in the image:
a Sobel operator is applied to the gray values of the neighboring points around each pixel in the image, and a threshold τ is selected from the brightness of the acquired image, with
So(a,b) = √(f_x² + f_y²),
f_x = (f(a-1,b-1) + 2f(a-1,b) + f(a-1,b+1)) − (f(a+1,b-1) + 2f(a+1,b) + f(a+1,b+1)),
f_y = (f(a-1,b-1) + 2f(a,b-1) + f(a+1,b-1)) − (f(a-1,b+1) + 2f(a,b+1) + f(a+1,b+1));
when So(a,b) > τ, the point (a,b) in the image is an edge point, where a, b are the coordinates of the point and f is the gray value;
(1.3) path planning according to edge points:
the cloud control system of the intelligent glasses collects the edge point information and constructs a path channel through the edge points; with the included angle between the camera system and distance sensor of the intelligent glasses and the horizontal direction denoted θ_a, the data returned by the distance sensor denoted l_a, and the measured value of the height sensor of the intelligent glasses denoted z′, the coordinate in the horizontal reference frame is
D_n0 = (x_a, y_b, z′) = (l_a·cosθ_a, l_a·sinθ_a, z′);
the planned path is determined according to the iterative evaluation value of the nodes in the path channel; the distance between two adjacent nodes is l_n0 = D_n0 − D_(n0−1), where n0 is the index of the current node, and the cost function G(n0) from the starting point I_0 to the current node I_n0 is given by an equation shown as an image in the original;
the two neighbor nodes with the largest G(n0) near each node are scanned and connected, forming the planned path channel.
Figs. 3-6 illustrate the simulation process of forming the path channel. With the above method, the shape of the obstacle in Fig. 3 is fitted from the image; feature extraction is performed on the obstacle in Figs. 4-5, followed by edge extraction; finally, one layer of the path channel in Fig. 6 is formed, in which the white area represents the path channel. The layers of channels are then superimposed to form a three-dimensional path channel. The image recognition and planning of the invention are based on local recognition between two targets, and the recognition of the images is the result of image discrimination and matching. The invention discriminates and matches images by dividing regions according to the gray values of the acquired images and connecting regions with the same characteristics into a collection of connected regions, which forms the path channel of the invention. The invention separates the path channel from the background with a cost-function-based image analysis method: images of the static path are acquired, and the cost function is continuously updated according to the segmentation effect by an iterative method, so this online determination of the cost function greatly improves the accuracy of the image processing and the real-time performance of the invention. Compared with the prior art, the path channel is obtained through operations on the gray values, while the optimal planning value of the path is given by the cost function method, which effectively improves the traveling efficiency of the intelligent glasses.
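As an illustration of steps (1.1)-(1.3), the sketch below shows how the edge-point test So(a,b) > τ and the sensor-to-coordinate mapping D_n0 = (l_a·cosθ_a, l_a·sinθ_a, z′) could be realized. It is a minimal Python reconstruction for reference only, not the patented implementation; the function names, the Gaussian filter width and the sample sensor readings are assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel

    def edge_points(image, tau):
        # Steps (1.1)-(1.2): Gaussian denoising, then Sobel magnitude So(a, b).
        smoothed = gaussian_filter(image.astype(float), sigma=1.0)
        f_x = sobel(smoothed, axis=0)   # gradient along the first image axis
        f_y = sobel(smoothed, axis=1)   # gradient along the second image axis
        so = np.sqrt(f_x**2 + f_y**2)
        return np.argwhere(so > tau)    # coordinates (a, b) of the edge points

    def node_coordinate(theta_a, l_a, z_prime):
        # Step (1.3): distance-sensor reading -> coordinate D_n0 in the horizontal frame.
        return np.array([l_a * np.cos(theta_a), l_a * np.sin(theta_a), z_prime])

    # Distance between two adjacent nodes, l_n0 = D_n0 - D_(n0-1):
    d_prev = node_coordinate(np.radians(30.0), 2.0, 1.6)   # sample values (assumed)
    d_curr = node_coordinate(np.radians(32.0), 2.1, 1.6)
    l_n0 = np.linalg.norm(d_curr - d_prev)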
As shown in Fig. 2, the step (2) comprises:
(2.1) initializing an intelligent glasses group of size N, including a random velocity and a random position of the region of each pair of intelligent glasses, and determining the attitude matrix, the attitude matrix angular rate and the attitude angle of each pair of intelligent glasses;
(2.2) evaluating the fitness value of the intelligent glasses in each group;
(2.3) evaluating the best position in the current search space of the intelligent glasses in each group;
(2.4) updating the inertia weight of the intelligent glasses, and updating the velocity and position of each pair of intelligent glasses;
(2.5) re-executing step (2.2) while the intelligent glasses are in the working state; the method ends when the work of the intelligent glasses is finished;
the determining step of the gesture matrix of the intelligent glasses comprises the following steps:
(2.1.1) assuming that the earth coordinate system is the earth system, the geographic coordinate system is the geo system, the carrier coordinate system is the carrier system, the navigation coordinate system is the pilot system, and the axes of the coordinate systems are respectively X in turn g 、Y g 、Z g ;X c 、Y c 、Z c ;X p 、Y p 、Z p
(2.1.2) calculating a directional cosine matrix of a geographic coordinate system of the intelligent glasses;
Figure BDA0002361629800000091
wherein the longitude of the intelligent glasses is alpha e And a latitude of delta e ;α e The range of the values of (a) is (-180 DEG, 180 DEG); delta e The range of the values of (a) is (-90 DEG, 90 DEG);
(2.1.3) calculating a direction cosine matrix of a carrier coordinate system of the intelligent glasses;
Figure BDA0002361629800000092
wherein gamma is c For roll angle of the carrier coordinate system relative to the geographical coordinate system, i.e. X c Relative to X g Is included in the plane of the first part;
wherein θ is c For pitch angle of the carrier coordinate system relative to the geographical coordinate system, i.e. Y c With respect to Y g Is included in the plane of the first part;
wherein the method comprises the steps of
Figure BDA0002361629800000093
For heading angle of the carrier coordinate system relative to the geographical coordinate system, i.e. Z c Relative to Z g Is included in the plane of the first part;
(2.1.4) calculating an attitude matrix of a navigation coordinate system of the intelligent glasses;
Figure BDA0002361629800000101
(2.1.5) calculating the attitude matrix angular rate of the navigation coordinate system of the intelligent glasses;
Figure BDA0002361629800000102
ω e omega is the projection of the earth angular velocity in the navigation coordinate system a Is a measured value of the intelligent glasses gyroscope.
The coordinate calculation of the intelligent glasses computes the real-time matrix from the angular velocity measured by the gyroscope. Because the attitude of the intelligent glasses changes very quickly while computing the attitude matrix takes on the order of milliseconds, real-time requirements cannot be met for the intelligent glasses group, and computing the matrix from non-real-time data necessarily introduces a certain error. Computing the attitude of the intelligent glasses in real time is therefore a key technique and one of the important factors affecting the accuracy of the intelligent glasses attitude calculation algorithm. The invention solves several simultaneous equations, has no singularity, is convenient to compute, and can be executed stably in a closed-loop switching control system. Compared with the prior art, the invention gives an accurate angular rate of the intelligent glasses through the coordinate system conversion, and its main advantage lies in the accuracy of the attitude control of the intelligent glasses.
Therefore, the real-time values of the intelligent glasses attitude angles are extracted from the attitude matrix (the three extraction equations are given as images in the original), where γ_p is the roll angle of the intelligent glasses navigation coordinate system, ψ_p is the heading angle of the intelligent glasses navigation coordinate system, and θ_p is the pitch angle of the intelligent glasses navigation coordinate system.
The attitude matrix and the attitude matrix angular rate are key parameters of the intelligent glasses; with these two parameters, the position of the target point can be calculated more simply in the subsequent adjustment process.
Aiming at the inherent characteristics of micro inertial devices, the invention obtains the optimal attitude angle estimate through a cloud data fusion algorithm. The attitude angle can be obtained by integrating the angular velocity, but the gyroscope of a micro inertial device has a random drift error that accumulates over time; in a dynamic environment the carrier introduces errors caused by linear acceleration into information such as the pitch angle, and electromagnetic interference causes errors in the yaw angle data of the triaxial magnetometer. The data fusion algorithm of the invention therefore takes the measured data as the measurement values and obtains the optimal attitude angle estimate through matrix operations.
The fitness value of the intelligent glasses in each group is evaluated as
fitness(x) = Σ_(j=1..m) α_j·f_j(x),
where f_j(x) is the branching function on the j-th branch of the target path of the intelligent glasses, α_j is the weight corresponding to the j-th branch, m is the total number of branches of the target path, and x is the estimated intelligent glasses label;
f_j(x) = (β_j ∩ γ_j)(β_j ∪ γ_j)^(−2),
where β_j is the set of intelligent glasses whose executed target path passes through the j-th branch, and γ_j is the set of intelligent glasses inputs that pass through the j-th branch after the tested program is executed.
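Reading the set product in f_j(x) = (β_j ∩ γ_j)(β_j ∪ γ_j)^(−2) as cardinalities, a minimal Python sketch of the fitness evaluation could look as follows; the cardinality interpretation, the function names and the sample weights are assumptions for illustration.

    def branch_function(beta_j, gamma_j):
        # f_j = |intersection of beta_j and gamma_j| / |union|**2 (cardinality reading assumed).
        union = len(beta_j | gamma_j)
        return len(beta_j & gamma_j) / union**2 if union else 0.0

    def fitness(branches, weights):
        # Weighted sum over the m branches of the target path; 'branches' is a
        # hypothetical list of (beta_j, gamma_j) set pairs for the estimated label x.
        return sum(a_j * branch_function(b, g) for a_j, (b, g) in zip(weights, branches))

    # usage with two branches:
    print(fitness([({1, 2, 3}, {2, 3}), ({4}, {4, 5})], [0.6, 0.4]))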
The evaluation of the best position in the current search space of the intelligent glasses in each group comprises:
(2.3.1) comparing the fitness value of each pair of intelligent glasses with its estimated current local best known position pbest_i,n; if the fitness value of the intelligent glasses is smaller, it replaces the current local best known position pbest_i,n;
(2.3.2) comparing the current local best known position pbest of each pair of intelligent glasses with the best position gbest in the current search space of the intelligent glasses; if the current local best known position pbest of the intelligent glasses is smaller, it replaces the best position gbest in the current search space.
The estimated current local best known position pbest_i,n of the intelligent glasses involves: N pairs of intelligent glasses, each located in an S-dimensional search space with S ≤ 3; the position of the i-th pair of intelligent glasses in the n-th iteration of the method is
x_i,n = (x_i,n^1, x_i,n^2, …, x_i,n^S),
the corresponding velocity of the intelligent glasses is
v_i,n = (v_i,n^1, v_i,n^2, …, v_i,n^S),
and the local best known position is pbest_i,n = (pbest_i,n^1, …, pbest_i,n^S).
the updating of the speed and the position of each intelligent glasses comprises;
(2.4.1) update G n The inertial weight is:
Figure BDA0002361629800000114
G nmax g is the maximum value of the inertial weight of the intelligent glasses nmin As a minimum value of the inertial weight,
N max is the maximum value of the iteration times;
(2.4.2) update the speed of the smart glasses:
v i,n =p s [v i,n-1 +G n ·(pbest-x i,n-1 )+G n ·(gbest-x i,n-1 )];
p s in order for the contraction factor to be a factor,
Figure BDA0002361629800000115
k is the upper limit speed of network searching;
(2.4.3) updating the position of the smart glasses;
x i,n =x i,n-1 +v i,n
said alpha j Indicating the weight corresponding to the jth branch,
Figure BDA0002361629800000121
γ n representing a value of the number of path consecutive matches greater than 1 in the iterative process.
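The sketch below gives a compact Python rendering of one iteration of steps (2.3)-(2.4). The linearly decreasing inertia weight and the Clerc-style contraction factor reproduce the standard particle swarm forms, which is an assumption since the patent's G_n and p_s equations survive only as images (K must be at least 4 for the square root to be real).

    import numpy as np

    def update_bests(fit, x, pbest_fit, pbest, gbest_fit, gbest):
        # Steps (2.3.1)-(2.3.2): a smaller fitness replaces the local best pbest,
        # and a smaller local best replaces the global best gbest.
        if fit < pbest_fit:
            pbest_fit, pbest = fit, x.copy()
        if pbest_fit < gbest_fit:
            gbest_fit, gbest = pbest_fit, pbest.copy()
        return pbest_fit, pbest, gbest_fit, gbest

    def pso_step(x, v, pbest, gbest, n, n_max, g_max=0.9, g_min=0.4, K=4.1):
        # Steps (2.4.1)-(2.4.3): inertia weight G_n, contraction factor p_s,
        # then the velocity and position updates.
        G_n = g_max - (g_max - g_min) * n / n_max
        p_s = 2.0 / abs(2.0 - K - np.sqrt(K**2 - 4.0 * K))
        v_new = p_s * (v + G_n * (pbest - x) + G_n * (gbest - x))
        return x + v_new, v_new    # x_i,n and v_i,n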
To further refine the position control of the intelligent glasses, the velocity of the intelligent glasses is optimized. The specific steps are as follows; most of the equations in these steps are given as images in the original, and a structural sketch follows step (2.5) below:
(2.4.2.1) input the intelligent glasses velocity function v_i,n, the probability utility function, the mean function, the step function, and the test duration;
(2.4.2.2) initialize the directivity statistical model, which involves the search direction of the intelligent glasses, a normalization coefficient, and an n-th order modified Bessel function;
(2.4.2.3) calculate the current step function value;
(2.4.2.4) evaluate the utility value of the intelligent glasses through the probability utility function, where Z is an intermediate variable and φ_1(Z), φ_2(Z) are the standard normal distribution function and probability density function;
(2.4.2.5) if the utility value of a pair of intelligent glasses is below the threshold P, select that pair and record its velocity value;
(2.4.2.6) calculate the corrected effective amount, which serves as a utility metering function of the global search of the intelligent glasses and measures the expected effective amount that the global search strategy may bring to a given pair of intelligent glasses;
(2.4.2.7) update the velocity of the intelligent glasses selected in (2.4.2.5);
(2.4.2.8) re-execute step (2.4.2.1) until all intelligent glasses utility values remain above the threshold P.
(2.5) Step (2.2) is re-executed while the intelligent glasses group is in the working state; the method ends when the work of the intelligent glasses group is finished.
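Because most of the equations of steps (2.4.2.1)-(2.4.2.8) survive only as images, only the structure of the velocity refinement can be sketched. The version below assumes an expected-improvement-style utility built from the standard normal distribution function φ_1 and density φ_2, a caller-supplied step-function evaluator, and a simple random nudge for the corrected velocity; all three are assumptions, not the patented formulas.

    import numpy as np
    from scipy.stats import norm

    def refine_velocities(v, evaluate_step, P, sigma=1.0, max_rounds=100, seed=0):
        # Steps (2.4.2.3)-(2.4.2.8): score each pair of glasses with a probability
        # utility; nudge any velocity whose utility falls below the threshold P and
        # re-evaluate until all utility values remain above P.
        rng = np.random.default_rng(seed)
        v = np.asarray(v, dtype=float)
        for _ in range(max_rounds):
            s = evaluate_step(v)          # step function values (hypothetical callback)
            imp = s.min() - s             # improvement relative to the best pair
            Z = imp / sigma               # intermediate variable Z
            u = imp * norm.cdf(Z) + sigma * norm.pdf(Z)   # phi_1 = CDF, phi_2 = PDF (assumed form)
            low = u < P
            if not low.any():
                break                     # every utility value stays above P
            v[low] += rng.normal(scale=0.1, size=low.sum())   # corrected update (assumed)
        return v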
As shown in Figs. 7 and 8, after the route is preset, the intelligent glasses carry out the control shown in the test charts. In the figures, the ordinate is the position along one (manually set) axis of the planar coordinate system, and the abscissa is time in seconds. Fig. 7 shows that in linear movement the error grows as time increases, so by setting sufficient iteration conditions (such as time or a threshold) the intelligent glasses can form more accurate route control in real time. As shown in Fig. 8, for curved motion the error of the invention is small, because the threshold iteration is continuously adjusted; and since the motion of intelligent glasses is mainly curved, the invention is well suited to practical use. From the above data, one value is extracted per 100 time points, as shown in the following table:
Time    Preset value    Measured value
0       0               0
5       0.1275          0.1303
10      0.255           0.2602
15      0.3825          0.3806
20      0.51            0.5005
25      0.6375          0.6404
30      0.765           0.7703
35      0.8925          0.8903
40      1.02            0.986
45      1.1475          1.1504
50      1.275           1.303
55      1.4025          1.4035
60      1.53            1.5026
Table 1. Linear trace measurement data.
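The reported accuracy of the linear trace can be checked directly from Table 1; the short Python computation below reproduces the per-sample deviation between the preset and measured values.

    preset   = [0, 0.1275, 0.255, 0.3825, 0.51, 0.6375, 0.765,
                0.8925, 1.02, 1.1475, 1.275, 1.4025, 1.53]
    measured = [0, 0.1303, 0.2602, 0.3806, 0.5005, 0.6404, 0.7703,
                0.8903, 0.986, 1.1504, 1.303, 1.4035, 1.5026]
    errors = [abs(p - m) for p, m in zip(preset, measured)]
    print(max(errors))                  # largest deviation, 0.034 at time 40
    print(sum(errors) / len(errors))    # mean absolute deviation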
Table 2, the curve trace measurement data, is given as an image in the original document.
The invention builds on a fractional-order intelligent glasses control strategy to realize the design and stability analysis of an intelligent glasses control system, and optimizes the parameters of the control law so that manual parameter selection is no longer needed. This simplifies the control of the intelligent glasses and realizes stable control of them. Compared with the prior art, an accurate route is obtained through the particlized fractional-order algorithm, and compared with the traditional particlized fractional-order algorithm, the accuracy of the intelligent glasses control is further improved by introducing the corrected effective amount.
Further, the step (3) includes the following steps:
(3.1) extracting, through the camera system, the point cloud data M_W of the outline of the intelligent glasses at the previous time point; taking a set of points m_i ∈ M_W in the point cloud data M_W, and simultaneously extracting the real set m̄_i corresponding to m_i;
(3.2) extracting, with the camera system, the point cloud data Q_W of the outline of the intelligent glasses at the current time point; taking a set of points q_i ∈ Q_W in the point cloud data Q_W such that |q_i − m_i| attains a minimum;
(3.3) calculating the rotation matrix R_W and the translation matrix T_W in the camera system;
(3.4) obtaining, through the rotation matrix R_W and the translation matrix T_W, the set m′_i, i.e. the real set m̄_i after its position and attitude have changed:
m′_i = R_W·m̄_i + T_W;
(3.5) calculating the average distance d̄ between m′_i and q_i:
d̄ = (1/N)·Σ_i ‖m′_i − q_i‖;
(3.6) if the average distance d̄ is less than or equal to the pose early-warning threshold ε, the target pose is normal; otherwise the communication target is adjusted.
The calculation of the rotation matrix R_W and the translation matrix T_W in the camera system comprises the following steps:
(3.3.1) calculating the centroids of the two sets,
μ_m = (1/N_m)·Σ_i m_i and μ_q = (1/N_q)·Σ_i q_i,
where N_m is the number of points in the set m_i and N_q is the number of points in the set q_i;
(3.3.2) calculating the covariance matrix of the two point sets,
Σ_mq = (1/N_m)·Σ_i (m_i − μ_m)(q_i − μ_q)^T,
where T is the transpose symbol;
(3.3.3) calculating the antisymmetric matrix
A = Σ_mq − Σ_mq^T;
(3.3.4) constructing the 4×4 matrix
Q = [ tr(Σ_mq), Δ^T ; Δ, Σ_mq + Σ_mq^T − tr(Σ_mq)·I_3×3 ],
where I_3×3 is the 3×3 identity matrix, tr(Σ_mq) is the trace of the matrix Σ_mq, and Δ = [A_23, A_31, A_12]^T is the column vector formed from the components of the antisymmetric matrix; the eigenvector corresponding to the maximum eigenvalue of Q is r = [r_0, r_1, r_2, r_3];
(3.3.5) the rotation matrix R_W is then
R_W = [ r_0²+r_1²−r_2²−r_3², 2(r_1r_2−r_0r_3), 2(r_1r_3+r_0r_2) ; 2(r_1r_2+r_0r_3), r_0²−r_1²+r_2²−r_3², 2(r_2r_3−r_0r_1) ; 2(r_1r_3−r_0r_2), 2(r_2r_3+r_0r_1), r_0²−r_1²−r_2²+r_3² ];
(3.3.6) the translation matrix T_W is:
T_W = μ_q − R_W·μ_m.
as shown in FIG. 9, the target tracking and gesture monitoring effect graph is obtained through the experiment of the invention, the average error of the method of the invention is 3.25, the times of calculation are 1350 times, and the success rate is 100 percent according to the test simulation
In step (3), the video images of the intelligent glasses are used to monitor their pose state; once the threshold is exceeded, an alert can be raised directly, which effectively improves the stability of the intelligent glasses. Step (3) coordinates with the method of step (2), so that the path planning, traveling and obstacle avoidance of the intelligent glasses are effectively guaranteed and pose disturbances caused by changes in the external environment are effectively avoided.
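Steps (3.1)-(3.6) and (3.3.1)-(3.3.6) follow the classical quaternion-based point-set registration, so the pose monitor can be sketched as below. The layout of the 4×4 matrix and the quaternion-to-rotation formula are the standard forms of that method, used here as an assumption since the patent's own matrices survive only as images.

    import numpy as np

    def register(m, q):
        # m, q: (N, 3) arrays of corresponding points m_i and q_i.
        mu_m, mu_q = m.mean(axis=0), q.mean(axis=0)        # centroids (3.3.1)
        cov = (m - mu_m).T @ (q - mu_q) / len(m)           # covariance matrix (3.3.2)
        A = cov - cov.T                                    # antisymmetric matrix (3.3.3)
        delta = np.array([A[1, 2], A[2, 0], A[0, 1]])      # column vector of its components
        Q = np.zeros((4, 4))                               # 4x4 matrix (3.3.4)
        Q[0, 0] = np.trace(cov)
        Q[0, 1:] = Q[1:, 0] = delta
        Q[1:, 1:] = cov + cov.T - np.trace(cov) * np.eye(3)
        w, vec = np.linalg.eigh(Q)
        r0, r1, r2, r3 = vec[:, np.argmax(w)]              # eigenvector r of the max eigenvalue
        R_W = np.array([                                   # rotation matrix (3.3.5)
            [r0**2 + r1**2 - r2**2 - r3**2, 2*(r1*r2 - r0*r3), 2*(r1*r3 + r0*r2)],
            [2*(r1*r2 + r0*r3), r0**2 - r1**2 + r2**2 - r3**2, 2*(r2*r3 - r0*r1)],
            [2*(r1*r3 - r0*r2), 2*(r2*r3 + r0*r1), r0**2 - r1**2 - r2**2 + r3**2]])
        T_W = mu_q - R_W @ mu_m                            # translation matrix (3.3.6)
        m_prime = m @ R_W.T + T_W                          # transformed set (3.4)
        d_bar = np.linalg.norm(m_prime - q, axis=1).mean() # average distance (3.5)
        return R_W, T_W, d_bar

    # Step (3.6): the pose is normal when d_bar <= epsilon, the early-warning threshold.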
The medium performs road planning by using the above road planning method for intelligent glasses under artificial intelligence big data. The road planning equipment for intelligent glasses under artificial intelligence big data performs road planning by using the above road planning medium for intelligent glasses under artificial intelligence big data.
It should be noted that all symbols and special definitions have been explained in the invention; other technical features, such as simple hardware and some simple parameters, are common general knowledge in the art.

Claims (7)

1. A road planning method for intelligent glasses under artificial intelligence big data, the intelligent glasses comprising a glasses assembly and a frame, wherein the glasses assembly comprises two lenses, and the frame comprises a glasses rim and two intelligent glasses legs respectively connected to the two ends of the rim; each intelligent glasses leg comprises a leg body provided with a containing cavity, a power module in the containing cavity, and a camera system, a distance sensor, a height sensor, a processor, a voice recognition module, a voice playing module and a wireless communication module, which are electrically connected with the power module and arranged in either containing cavity; the camera system, the distance sensor, the height sensor, the voice recognition module and the voice playing module are all signal-connected with the processor, and can be connected wirelessly with artificial intelligence interaction equipment and/or with a cloud control system through the wireless communication module; the power module is a rechargeable battery, the voice recognition module is a microphone, and the voice playing module is a receiver; path planning and travel control are carried out on the intelligent glasses group by the intelligent glasses cloud control system and the control module on the intelligent glasses, the method comprising the following steps:
(1) the camera system of the intelligent glasses collects images, denoises them, and then plans an optimal path through the path edge points;
(2) the intelligent glasses perform a travel operation to guide the intelligent glasses user along the planned path channel;
(3) the attitude of the intelligent glasses is monitored during the journey;
(4) steps (2)-(3) are repeated until the intelligent glasses user stops using them;
the step (1) comprises the following steps:
(1.1) image acquisition denoising:
the intelligent glasses acquire three-dimensional images of the environment where the path is located through the camera system, and Gaussian filtering processing is carried out on the acquired images;
(1.2) edge detection of the road in the image:
a Sobel operator is applied to the gray values of the neighboring points around each pixel in the image, and a threshold τ is selected from the brightness of the acquired image, with
So(a,b) = √(f_x² + f_y²),
f_x = (f(a-1,b-1) + 2f(a-1,b) + f(a-1,b+1)) − (f(a+1,b-1) + 2f(a+1,b) + f(a+1,b+1)),
f_y = (f(a-1,b-1) + 2f(a,b-1) + f(a+1,b-1)) − (f(a-1,b+1) + 2f(a,b+1) + f(a+1,b+1));
when So(a,b) > τ, the point (a,b) in the image is an edge point, where a, b are the coordinates of the point and f is the gray value;
(1.3) path planning according to edge points:
the cloud control system of the intelligent glasses collects the edge point information and constructs a path channel through the edge points; with the included angle between the camera system and distance sensor of the intelligent glasses and the horizontal direction denoted θ_a, the data returned by the distance sensor denoted l_a, and the measured value of the height sensor of the intelligent glasses denoted z′, the coordinate in the horizontal reference frame is
D_n0 = (x_a, y_b, z′) = (l_a·cosθ_a, l_a·sinθ_a, z′);
the planned path is determined according to the iterative evaluation value of the nodes in the path channel; the distance between two adjacent nodes is l_n0 = D_n0 − D_(n0−1), where n0 is the index of the current node, and the cost function G(n0) from the starting point I_0 to the current node I_n0 is given by an equation shown as an image in the original;
the two neighbor nodes with the largest G(n0) near each node are scanned and connected, forming the planned path channel;
the step (2) comprises:
(2.1) initializing an intelligent glasses group of size N, including a random velocity and a random position of the region of each pair of intelligent glasses, and determining the attitude matrix, the attitude matrix angular rate and the attitude angle of each pair of intelligent glasses;
(2.2) evaluating the fitness value of the intelligent glasses in each group;
(2.3) evaluating the best position in the current search space of the intelligent glasses in each group;
(2.4) updating the inertia weight of the intelligent glasses, and updating the velocity and position of each pair of intelligent glasses;
(2.5) re-executing step (2.2) while the intelligent glasses are in the working state; the method ends when the work of the intelligent glasses is finished;
the determining step of the gesture matrix of the intelligent glasses comprises the following steps:
(2.1.1) assuming that the earth coordinate system is the earth system, the geographic coordinate system is the geo system, the carrier coordinate system is the carrier system, the navigation coordinate system is the pilot system, and the axes of the coordinate systems are respectively X in turn g 、Y g 、Z g ;X c 、Y c 、Z c ;X p 、Y p 、Z p
(2.1.2) calculating a directional cosine matrix of a geographic coordinate system of the intelligent glasses;
Figure FDA0003708950430000022
wherein the longitude of the intelligent glasses is alpha e And a latitude of delta e ;α e The range of the values of (a) is (-180 DEG, 180 DEG); delta e The range of the values of (a) is (-90 DEG, 90 DEG);
(2.1.3) calculating a direction cosine matrix of a carrier coordinate system of the intelligent glasses;
Figure FDA0003708950430000023
wherein gamma is c For roll angle of the carrier coordinate system relative to the geographical coordinate system, i.e. X c Relative to X g Is included in the plane of the first part;
wherein θ is c For pitch angle of the carrier coordinate system relative to the geographical coordinate system, i.e. Y c With respect to Y g Is included in the plane of the first part;
wherein the method comprises the steps of
Figure FDA0003708950430000031
Is the coordinates of the carrierHeading angle relative to the geographic coordinate system, i.e. Z c Relative to Z g Is included in the plane of the first part;
(2.1.4) calculating an attitude matrix of a navigation coordinate system of the intelligent glasses;
Figure FDA0003708950430000032
(2.1.5) calculating the attitude matrix angular rate of the navigation coordinate system of the intelligent glasses;
Figure FDA0003708950430000033
ω e omega is the projection of the earth angular velocity in the navigation coordinate system a Is a measured value of an intelligent glasses gyroscope;
the real-time values of the intelligent glasses attitude angles are extracted from the attitude matrix (the extraction equations are given as images in the original), where γ_p is the roll angle of the intelligent glasses navigation coordinate system, ψ_p is its heading angle, and θ_p is its pitch angle.
2. The road planning method for intelligent glasses under artificial intelligence big data according to claim 1, wherein the fitness value of the intelligent glasses in each group is evaluated as
fitness(x) = Σ_(j=1..m) α_j·f_j(x),
where f_j(x) is the branching function on the j-th branch of the target path of the intelligent glasses, α_j is the weight corresponding to the j-th branch, m is the total number of branches of the target path, and x is the estimated intelligent glasses label;
f_j(x) = (β_j ∩ γ_j)(β_j ∪ γ_j)^(−2),
where β_j is the set of intelligent glasses whose executed target path passes through the j-th branch, and γ_j is the set of intelligent glasses inputs that pass through the j-th branch after the tested program is executed.
3. The road planning method for intelligent glasses under artificial intelligence big data according to claim 2, wherein the evaluation of the best position in the current search space of the intelligent glasses in each group comprises:
(2.3.1) comparing the fitness value of each pair of intelligent glasses with its estimated current local best known position pbest_i,n; if the fitness value of the intelligent glasses is smaller, it replaces the current local best known position pbest_i,n;
(2.3.2) comparing the current local best known position pbest of each pair of intelligent glasses with the best position gbest in the current search space of the intelligent glasses; if the current local best known position pbest of the intelligent glasses is smaller, it replaces the best position gbest in the current search space.
4. The road planning method for intelligent glasses under artificial intelligence big data according to claim 3, wherein the estimated current local best known position pbest_i,n of the intelligent glasses involves: N pairs of intelligent glasses, each located in an S-dimensional search space with S ≤ 3; the position of the i-th pair of intelligent glasses in the n-th iteration of the method is
x_i,n = (x_i,n^1, x_i,n^2, …, x_i,n^S),
the corresponding velocity of the intelligent glasses is
v_i,n = (v_i,n^1, v_i,n^2, …, v_i,n^S),
and the local best known position is pbest_i,n = (pbest_i,n^1, …, pbest_i,n^S).
5. The method for road planning for intelligent glasses under artificial intelligence big data according to claim 4, wherein updating the speed and position of each intelligent glasses comprises;
(2.4.1) update G n The inertial weight is:
Figure FDA0003708950430000044
G nmax g is the maximum value of the inertial weight of the intelligent glasses nmin As a minimum value of the inertial weight,
N max is the maximum value of the iteration times;
(2.4.2) updating the speed of the intelligent glasses:
v_{i,n} = p_s·[v_{i,n−1} + G_n·(pbest − x_{i,n−1}) + G_n·(gbest − x_{i,n−1})];
p_s is the contraction factor (the formula is given as image FDA0003708950430000045 in the original); K is the upper-limit speed of the network search;
the speed of the intelligent glasses is then optimized (a runnable sketch of the overall update follows this claim); the specific steps are as follows:
(2.4.2.1) inputting the intelligent glasses speed function v_{i,n}, the probability utility function, the mean function, the step function (each given as images FDA0003708950430000046 to FDA0003708950430000048 in the original), and the test duration;
(2.4.2.2) initializing the directional statistical model (given as images in the original); the model is parameterized by the search direction of the intelligent glasses and a normalization coefficient, the latter expressed through the n-th order modified Bessel function;
(2.4.2.3) calculating the current step function value (formulas given as images FDA0003708950430000053 and FDA0003708950430000054 in the original);
(2.4.2.4) evaluating the intelligent glasses utility value by the probability utility function (image FDA0003708950430000055 in the original); Z is an intermediate variable (image FDA0003708950430000056); φ_1(Z) and φ_2(Z) are the standard normal distribution function and the probability density function, respectively;
(2.4.2.5) if the smart glasses utility value is below the threshold P, selecting the smart glasses and recording the speed value of the smart glasses;
(2.4.2.6) calculating the corrected effective amount (formula given as image FDA0003708950430000057 in the original), which serves as the utility measurement function of the intelligent glasses global search and measures the expected effective amount that the global search strategy may bring to a given intelligent glasses;
(2.4.2.7) updating the speed of the intelligent glasses selected in (2.4.2.5) to the value given as image FDA0003708950430000058 in the original;
(2.4.2.8) re-executing from step (2.4.2.1) until the utility values of all intelligent glasses remain above the threshold P;
(2.4.3) updating the position of the intelligent glasses:
x_{i,n} = x_{i,n−1} + v_{i,n};
the α_j, the weight corresponding to the j-th branch, is given by the formula shown as image FDA0003708950430000059 in the original;
γ_n represents the number of consecutive path matches greater than 1 in the iterative process.
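Combining steps (2.4.1) through (2.4.3), the core update of claim 5 can be sketched in Python as follows. The velocity and position equations are taken literally from the claim; the linearly decreasing inertia-weight schedule and the numeric defaults are assumptions (the patent gives G_n and p_s only as images), and the utility-based reselection loop of steps (2.4.2.1) to (2.4.2.8) is omitted:

```python
import numpy as np

def pso_step(x, v, pbest_pos, gbest_pos, n, n_max,
             g_max=0.9, g_min=0.4, p_s=0.73):
    """One swarm update for the intelligent glasses:

        v_{i,n} = p_s * [v_{i,n-1} + G_n*(pbest - x_{i,n-1})
                                   + G_n*(gbest - x_{i,n-1})]
        x_{i,n} = x_{i,n-1} + v_{i,n}

    x, v: (N, S) arrays of positions and speeds; pbest_pos: (N, S) local
    bests; gbest_pos: (S,) global best. The G_n schedule and the numeric
    constants are assumed, not taken from the patent.
    """
    G_n = g_max - (g_max - g_min) * n / n_max        # assumed inertia-weight schedule
    v = p_s * (v + G_n * (pbest_pos - x) + G_n * (gbest_pos - x))
    x = x + v
    return x, v
```

Note that, unlike textbook PSO, the claim's velocity update applies the same coefficient G_n to both the pbest and gbest terms and uses no random factors; the sketch follows the claim.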
6. The method for road planning for intelligent glasses under artificial intelligence big data according to claim 5, wherein the step (3) comprises the steps of:
(3.1) extracting, through the cloud data system, the point cloud data M_W of the intelligent glasses environment reference target at the previous time point; taking a point set m_i ∈ M_W from the point cloud data M_W; simultaneously extracting the real set corresponding to m_i (denoted by the symbol given as image FDA00037089504300000510 in the original);
(3.2) extracting, by the camera system, the point cloud data Q_W of the intelligent glasses environment reference target at the current time point; taking a point set q_i ∈ Q_W from the point cloud data Q_W such that |q_i − m_i| attains its minimum value;
(3.3) calculating the rotation matrix R_W and the translation matrix T_W in the camera system;
(3.4) obtaining, through the rotation matrix R_W and the translation matrix T_W, the set m'_i that results after the position and attitude of the real set are changed (the formula is given as image FDA0003708950430000062 in the original);
(3.5) calculating the average distance between m'_i and q_i (the formula is given as image FDA0003708950430000063 in the original);
(3.6) if the average distance is less than or equal to the pose early-warning threshold, the target pose is normal; otherwise, the communication target is adjusted.
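Steps (3.4) to (3.6) amount to an ICP-style residual check. A minimal Python sketch, assuming the point sets are N×3 arrays and that the average distance is the mean Euclidean point-to-point distance (the patent's distance formula is given only as an image):

```python
import numpy as np

def pose_is_normal(m, q, R_W, T_W, threshold):
    """Transform the reference points by the estimated pose, compare them
    with the current camera points, and test the mean distance against
    the pose early-warning threshold."""
    m_prime = (R_W @ m.T).T + T_W                     # step (3.4): apply R_W, T_W
    d = np.linalg.norm(m_prime - q, axis=1).mean()    # step (3.5): average distance
    return d <= threshold                             # step (3.6): pose check
```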
7. The method for road planning for intelligent glasses under artificial intelligence big data according to claim 6, wherein calculating the rotation matrix R_W and the translation matrix T_W in the camera system comprises:
(3.3.1) calculating the centroids of the two point sets (formulas given as image FDA0003708950430000066 in the original), together with the number of points in the set m_i and the number of points in the set q_i (symbols given as images FDA0003708950430000067 and FDA0003708950430000068);
(3.3.2) calculating the covariance matrix of the two point sets (formula given as image FDA0003708950430000069 in the original), where T denotes the transpose;
(3.3.3) calculating the antisymmetric matrix (formula given as image FDA00037089504300000610 in the original);
(3.3.4) constructing a 4×4 matrix (given as images FDA00037089504300000611 to FDA00037089504300000616 in the original), in which I_{3×3} is the 3×3 identity matrix, one image symbol denotes the trace of the matrix, and another denotes the column vector formed from the components of the antisymmetric matrix; computing the eigenvector r = [r_0, r_1, r_2, r_3] corresponding to the maximum eigenvalue of this 4×4 matrix;
(3.3.5) the rotation matrix R_W is given by the formula shown as image FDA00037089504300000617 in the original;
(3.3.6) the translation matrix T_W is given by the formula shown as image FDA0003708950430000071 in the original.
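The construction in steps (3.3.1) to (3.3.6) mirrors the classical quaternion-based point-set registration of Besl and McKay, which is the natural reading of the 4×4 matrix and its maximum-eigenvalue eigenvector. A Python sketch under that reading; since the patent's formulas are images, treat this as an illustration rather than the patent's literal equations:

```python
import numpy as np

def estimate_pose(m, q):
    """Estimate R_W, T_W from corresponding point sets m, q (both (N, 3))."""
    mu_m, mu_q = m.mean(axis=0), q.mean(axis=0)        # (3.3.1) centroids
    S = (m - mu_m).T @ (q - mu_q) / len(m)             # (3.3.2) cross-covariance
    A = S - S.T                                        # (3.3.3) antisymmetric matrix
    delta = np.array([A[1, 2], A[2, 0], A[0, 1]])      # its component column vector
    N = np.zeros((4, 4))                               # (3.3.4) the 4x4 matrix
    N[0, 0] = np.trace(S)
    N[0, 1:] = delta
    N[1:, 0] = delta
    N[1:, 1:] = S + S.T - np.trace(S) * np.eye(3)
    r = np.linalg.eigh(N)[1][:, -1]                    # eigenvector of the max eigenvalue
    r0, r1, r2, r3 = r
    R_W = np.array([                                   # (3.3.5) quaternion -> rotation matrix
        [r0**2 + r1**2 - r2**2 - r3**2, 2*(r1*r2 - r0*r3), 2*(r1*r3 + r0*r2)],
        [2*(r1*r2 + r0*r3), r0**2 - r1**2 + r2**2 - r3**2, 2*(r2*r3 - r0*r1)],
        [2*(r1*r3 - r0*r2), 2*(r2*r3 + r0*r1), r0**2 - r1**2 - r2**2 + r3**2]])
    T_W = mu_q - R_W @ mu_m                            # (3.3.6) translation
    return R_W, T_W
```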
CN202010023509.3A 2020-01-09 2020-01-09 Intelligent glasses road planning method, medium and equipment under artificial intelligent big data Active CN111238470B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010023509.3A CN111238470B (en) 2020-01-09 2020-01-09 Intelligent glasses road planning method, medium and equipment under artificial intelligent big data

Publications (2)

Publication Number Publication Date
CN111238470A CN111238470A (en) 2020-06-05
CN111238470B true CN111238470B (en) 2023-05-02

Family

ID=70872506

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010023509.3A Active CN111238470B (en) 2020-01-09 2020-01-09 Intelligent glasses road planning method, medium and equipment under artificial intelligent big data

Country Status (1)

Country Link
CN (1) CN111238470B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108645413A (en) * 2018-06-06 2018-10-12 江苏海事职业技术学院 The dynamic correcting method of positioning and map building while a kind of mobile robot
CN110084825A (en) * 2019-04-16 2019-08-02 上海岚豹智能科技有限公司 A kind of method and system based on image edge information navigation

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5658963B2 (en) * 2010-09-29 2015-01-28 オリンパス株式会社 Image processing apparatus, image processing method, and image processing program
CN102175251A (en) * 2011-03-25 2011-09-07 江南大学 Binocular intelligent navigation system
CN105760813A (en) * 2016-01-19 2016-07-13 北京航空航天大学 Unmanned aerial vehicle target detection method based on plant branch and root evolution behaviors
CN106214437B (en) * 2016-07-22 2018-05-29 杭州视氪科技有限公司 A kind of intelligent blind auxiliary eyeglasses
TW201833701A (en) * 2017-03-14 2018-09-16 聯潤科技股份有限公司 Self-propelling cleansing device and method thereof for establishing indoor map comprising a device body driving by wheels at a bottom thereof, a distance detection unit, an along-edge detector, a cleansing unit, a dust collection unit, a dynamic detection unit, a map establishing unit, and a control unit
CN107450576B (en) * 2017-07-24 2020-06-16 哈尔滨工程大学 Method for planning paths of bridge detection unmanned aerial vehicle
US20180150081A1 (en) * 2018-01-24 2018-05-31 GM Global Technology Operations LLC Systems and methods for path planning in autonomous vehicles



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230331

Address after: 510000 Building 1, No. 106 Fengze East Road, Nansha District, Guangzhou City, Guangdong Province X1301-B8630 (Cluster Registration) (JM)

Applicant after: Zhongyun (Guangdong) Information Technology Co.,Ltd.

Address before: 150001 No. 145-1, Nantong Avenue, Nangang District, Heilongjiang, Harbin

Applicant before: HARBIN ENGINEERING University

GR01 Patent grant