CN114260890B - Method and device for determining state of robot, robot and storage medium - Google Patents


Info

Publication number
CN114260890B
CN114260890B (application CN202111515984.3A; publication of application CN114260890A)
Authority
CN
China
Prior art keywords
robot
state
information
noise
actual
Prior art date
Legal status
Active
Application number
CN202111515984.3A
Other languages
Chinese (zh)
Other versions
CN114260890A (en)
Inventor
姚达琛
何悦
李�诚
Current Assignee
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd
Priority to CN202111515984.3A
Publication of CN114260890A
Application granted
Publication of CN114260890B
Legal status: Active


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 5/007 Manipulators mounted on wheels or on carriages mounted on wheels
    • B25J 9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J 9/1628 Programme controls characterised by the control loop
    • B25J 9/1653 Programme controls characterised by the control loop parameters identification, estimation, stiffness, accuracy, error analysis
    • B25J 9/1664 Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J 9/1674 Programme controls characterised by safety, monitoring, diagnostic
    • B25J 9/1679 Programme controls characterised by the tasks executed
    • B25J 9/1692 Calibration of manipulator
    • B25J 13/088 Controls for manipulators by means of sensing devices with position, velocity or acceleration sensors
    • B25J 19/04 Viewing devices

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

The application discloses a method and device for determining the state of a robot, a robot, and a storage medium. The method comprises: acquiring reference information of the robot, where the reference information includes at least one of measured state information of the robot at a plurality of times and actual running information of the robot at the current time; determining state noise of the robot based on the reference information; and obtaining actual state information of the robot at the current time by using the state noise. With this scheme, the accuracy of robot state determination can be improved.

Description

Method and device for determining state of robot, robot and storage medium
Technical Field
This application is a divisional application of the patent application with application number 2020108726623, filed by the applicant on August 26, 2020, entitled "Method and device for determining the state of a robot, robot and storage medium". The application relates to the technical field of robots, and in particular to a method and device for determining the state of a robot, a robot, and a storage medium.
Background
With the development of electronic and computer technology, robots are being applied to express delivery, service guidance, hotel meal delivery, and the like, and have received widespread attention.
However, a robot is inevitably disturbed while running, for example by white noise that is widespread in free space, or even by interfering signals. Such disturbances affect the robot's normal running and, in severe cases, can cause it to lose control or slip. In view of this, improving the accuracy of robot state determination is a problem to be solved.
Disclosure of Invention
The application provides a state determining method and device of a robot, the robot and a storage medium.
A first aspect of the present application provides a method for determining the state of a robot, comprising: acquiring reference information of the robot, where the reference information includes at least one of: measured state information of the robot at a plurality of times, and actual running information of the robot at the current time; determining state noise of the robot based on the reference information; and obtaining actual state information of the robot at the current time by using the state noise. The actual running information includes running angle information, motor driving information, and running speed information of the robot. Obtaining the actual state information of the robot at the current time by using the state noise includes: obtaining state transition noise of the robot by using at least one of a first state noise and a second state noise, where the first state noise is determined by using the running angle information and the running speed information, and the second state noise is determined by using the motor driving information and the running speed information.
Thus, by obtaining reference information of the robot, which includes at least one of the measured state information at a plurality of times and the actual running information at the current time, and determining the state noise based on this reference information, the actual state information of the robot at the current time can be obtained by using the state noise. No large number of particles needs to be simulated during state determination, which helps improve the speed of state determination. In addition, because the state noise is determined from measured state information at a plurality of times and/or actual running information at the current time, it can be gauged from the external-measurement perspective of the robot and/or from the robot's own state perspective, so the state noise is closer to the actual situation, which in turn improves the accuracy of the subsequently determined actual state information. Further, the actual running information is set to include the running angle information, motor driving information, and running speed information of the robot, so that the state transition noise is obtained from at least one of the first state noise (determined from the running angle information and running speed information) and the second state noise (determined from the motor driving information and running speed information), which helps improve the accuracy of the state transition noise.
Determining the state noise of the robot based on the reference information includes: determining the measurement interference noise of the robot by using the measured state information corresponding to the current time and a plurality of times before it.
Therefore, by determining the measurement interference noise of the robot from the measured state information at the current time and a plurality of times before it, the noise of the robot can be determined from an external-measurement perspective, gauging the external interference the robot experiences while running.
Determining the measurement interference noise of the robot by using the measured state information corresponding to the current time and a plurality of times before it includes: acquiring the degree of dispersion of the measured state information at the current time and the plurality of times before it, and determining the measurement interference noise from that degree of dispersion.
Therefore, by using the degree of dispersion of the measured state information at the current time and a plurality of times before it to determine the measurement interference noise, the external interference the robot experiences while running can be gauged accurately.
The degree of dispersion of the measured state information at the current time and the plurality of times before it is the standard deviation of that measured state information; and/or determining the measurement interference noise from the degree of dispersion includes: taking the product of the degree of dispersion and a preset gain parameter as the measurement interference noise.
Therefore, taking the standard deviation of the measured state information at the current time and the times before it as the degree of dispersion reduces the complexity and computation of determining the dispersion, which improves the speed of state determination; taking the product of the degree of dispersion and a preset gain parameter as the measurement interference noise improves the accuracy of the measurement interference noise, and thus the accuracy of state determination.
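The dispersion-based noise estimate above can be sketched in a few lines. The function name, the choice of the sample standard deviation, and the default gain value are illustrative assumptions; the text only specifies "standard deviation times a preset gain parameter".

```python
import statistics

def measurement_interference_noise(measurements, gain=1.0):
    # Degree of dispersion: standard deviation of the measured state
    # values at the current time and several preceding times.
    # (The sample form is an assumption; the text does not say whether
    # the sample or population standard deviation is used.)
    dispersion = statistics.stdev(measurements)
    # Measurement interference noise: dispersion times a preset gain.
    return gain * dispersion
```

For example, a window of identical measurements yields zero interference noise, while a widely scattered window yields a large value, signalling that the state measurement is being disturbed.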
The robot includes driving wheels and steering wheels, where the driving wheels drive the robot and the steering wheels change its running direction; the running speed information includes the actual speed difference between the driving wheels of the robot, and the running angle information includes the actual steering angle of the steering wheels. Before obtaining the state transition noise of the robot by using at least one of the first state noise and the second state noise, the method further includes: mapping the actual steering angle with a first mapping relation between speed difference and steering angle to obtain the theoretical speed difference corresponding to the actual steering angle, and determining the first state noise from the difference between the actual speed difference and the theoretical speed difference. And/or: the robot includes driving wheels that drive the robot; the running speed information includes the actual average speed of the driving wheels, and the motor driving information includes the actual average driving signal value of the robot's motor. Before obtaining the state transition noise, the method further includes: mapping the actual average driving signal value with a second mapping relation between average speed and average driving signal value to obtain the theoretical average speed corresponding to the actual average driving signal value, and determining the second state noise from the difference between the actual average speed and the theoretical average speed.
Therefore, on the one hand, by mapping the actual steering angle through the first mapping relation between speed difference and steering angle to obtain a theoretical speed difference, and determining the first state noise from the difference between the actual and theoretical speed differences, the first state noise can be determined from the perspective of the robot's steering wheels. On the other hand, by mapping the actual average driving signal value through the second mapping relation between average speed and average driving signal value to obtain a theoretical average speed, and determining the second state noise from the difference between the actual and theoretical average speeds, the second state noise can be determined from the perspective of the robot's driving wheels.
Determining the first state noise from the difference between the actual speed difference and the theoretical speed difference includes: taking the square of that difference as the first state noise. Determining the second state noise from the difference between the actual average speed and the theoretical average speed includes: taking the square of that difference as the second state noise.
Therefore, taking the square of the difference between the actual and theoretical speed differences as the first state noise, and the square of the difference between the actual and theoretical average speeds as the second state noise, reduces the complexity and computation of calculating the two noises, which improves the speed of state determination.
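A minimal sketch of the two squared-difference noises, assuming the two mapping relations are supplied as callables (their exact form, for example a lookup table or a fitted curve, is not specified in the text, and all names here are illustrative):

```python
def first_state_noise(actual_speed_diff, actual_steer_angle, angle_to_speed_diff):
    # Theoretical speed difference mapped from the actual steering
    # angle via the first mapping relation.
    theoretical_diff = angle_to_speed_diff(actual_steer_angle)
    # First state noise: square of the actual-vs-theoretical difference.
    return (actual_speed_diff - theoretical_diff) ** 2

def second_state_noise(actual_avg_speed, actual_avg_drive, drive_to_avg_speed):
    # Theoretical average speed mapped from the actual average driving
    # signal value via the second mapping relation.
    theoretical_speed = drive_to_avg_speed(actual_avg_drive)
    # Second state noise: square of the actual-vs-theoretical difference.
    return (actual_avg_speed - theoretical_speed) ** 2
```

The state transition noise can then be taken from either noise alone or from a combination of the two, as the method allows.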
Obtaining the actual state information of the robot at the current time by using the state noise includes: processing the actual state information of the robot at the previous time and the measured state information at the current time with the state noise to obtain the actual state information of the robot at the current time.
Therefore, by processing the measured state information at the current time and the actual state information at the previous time with the state noise, the robot can balance the two, so that the determined actual state information is corrected relative to the measured state information, which improves the accuracy of state determination.
Processing the actual state information of the robot at the previous time and the measured state information at the current time with the state noise includes: determining a filter gain based on the state noise; predicting from the actual state information and the actual running information at the previous time, using Kalman filtering with that filter gain, to obtain predicted state information for the current time; and fusing the predicted state information with the measured state information at the current time to obtain the actual state information of the robot at the current time.
Therefore, by determining the filter gain based on the state noise, predicting from the actual state information and actual running information at the previous time with Kalman filtering using that gain to obtain the predicted state information at the current time, and fusing the predicted state information with the measured state information at the current time, robustness to external signals is enhanced and the actual state information at the current time can be determined accurately.
The state noise includes state transition noise and measurement interference noise, and obtaining the actual state information of the robot at the current time by using the state noise includes: processing the posterior estimate covariance at the previous time with the state transition parameter and the state transition noise of the robot to obtain the prior estimate covariance at the current time; processing the prior estimate covariance at the current time with the transformation parameter between state information and measured state information, together with the measurement interference noise, to obtain the filter gain at the current time; processing the actual state information of the robot at the previous time and the actual running information at the previous time with the state transition parameter and the input transition parameter, respectively, to obtain the predicted state information at the current time; and fusing the predicted state information with the measured state information at the current time to obtain the actual state information of the robot at the current time.
After fusing the predicted state information at the current time with the measured state information at the current time to obtain the actual state information of the robot at the current time, the method further includes: updating the prior estimate covariance at the current time with the filter gain and the transformation parameter to obtain the posterior estimate covariance at the current time, and re-executing, for the next time, the step of processing the posterior estimate covariance at the previous time with the state transition parameter and the state transition noise to obtain the prior estimate covariance, together with the subsequent steps, so as to determine the actual state information at the next time.
Therefore, by updating the prior estimate covariance at the current time to obtain the posterior estimate covariance at the current time, the above steps can be repeated to determine the actual state information at the next time; by cycling through these steps, the actual state information at every time can be determined while the robot is running.
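The covariance and gain updates above follow the standard Kalman filter recursion. A scalar sketch follows (the method works with vector states and matrix parameters; the scalar form and all symbol names here are illustrative):

```python
def kalman_step(x_prev, P_prev, u_prev, z_k, A, B, H, Q, R):
    # A: state transition parameter, B: input transition parameter,
    # H: transformation parameter between state and measurement,
    # Q: state transition noise, R: measurement interference noise.

    # Prior estimate covariance at the current time, from the
    # posterior estimate covariance at the previous time.
    P_prior = A * P_prev * A + Q
    # Filter gain at the current time.
    K = P_prior * H / (H * P_prior * H + R)
    # Predicted state from the previous actual state and the
    # previous actual running information.
    x_pred = A * x_prev + B * u_prev
    # Fuse the prediction with the current measurement.
    x_k = x_pred + K * (z_k - H * x_pred)
    # Posterior estimate covariance, reused at the next time step.
    P_k = (1 - K * H) * P_prior
    return x_k, P_k
```

Repeating this step with each new measurement implements the loop described above: the posterior covariance produced at one time becomes the previous-time covariance at the next.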
After determining the state noise of the robot based on the reference information, the method further includes: issuing a preset prompt if the state noise does not satisfy a preset noise condition.
Therefore, issuing a preset prompt when the state noise does not satisfy the preset noise condition lets the user perceive abnormal state noise, which improves the user experience.
The state noise includes the measurement interference noise obtained from the measured state information at the current time and a plurality of times before it, and the preset noise condition includes: the measurement interference noise is smaller than a first noise threshold. Issuing a preset prompt if the state noise does not satisfy the preset noise condition includes: outputting a first early-warning message to prompt that the state measurement is being interfered with, if the measurement interference noise does not satisfy the condition. And/or, the state noise includes the state transition noise obtained from the actual running information at the current time, and the preset noise condition includes: the state transition noise is smaller than a second noise threshold. If the state transition noise does not satisfy the condition, a second early-warning message is output to prompt that the robot is at risk of body slip.
Therefore, outputting a first early-warning message when the measurement interference noise does not satisfy the preset condition lets the user perceive in time that the state measurement is being interfered with; outputting a second early-warning message when the state transition noise does not satisfy the preset condition lets the user perceive in time that the robot is at risk of body slip. Both improve the user experience.
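A small sketch of the two threshold checks; the message wording and function name are illustrative placeholders, and the thresholds are preset values not specified in the text:

```python
def noise_warnings(interference_noise, transition_noise,
                   first_threshold, second_threshold):
    # Collect early-warning messages for noises that violate their
    # preset conditions (each noise must stay below its threshold).
    warnings = []
    if not interference_noise < first_threshold:
        warnings.append("state measurement is being interfered with")
    if not transition_noise < second_threshold:
        warnings.append("robot body is at risk of slipping")
    return warnings
```

When both noises satisfy their conditions the list is empty and no prompt is issued.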
Obtaining the reference information of the robot includes: capturing images of the robot's surroundings to obtain environmental image data for the current time, and determining the measured state information of the robot at the current time based on that environmental image data. The measured state information and the actual state information each include at least one of: the position of the robot, the pose of the robot, and the speed of the robot.
Therefore, by capturing images of the robot's surroundings to obtain environmental image data for the current time, determining the measured state information at the current time from that data, and setting both the measured and actual state information to include at least one of the robot's position, pose, and speed, the measured state information at the current time can be obtained quickly, which improves the speed of robot state determination.
A second aspect of the present application provides a state determining apparatus for a robot, including a measured state acquisition module, a state noise determination module, and an actual state acquisition module. The measured state acquisition module is configured to acquire the reference information of the robot, where the reference information includes at least one of: measured state information of the robot at a plurality of times, and actual running information of the robot at the current time. The state noise determination module is configured to determine the state noise of the robot based on the reference information. The actual state acquisition module is configured to obtain the actual state information of the robot at the current time by using the state noise. The state noise determination module includes a state transition determination sub-module, and the actual running information includes running angle information, motor driving information, and running speed information of the robot; the state transition determination sub-module is configured to obtain the state transition noise of the robot by using at least one of the first state noise and the second state noise, where the first state noise is determined from the running angle information and the running speed information, and the second state noise is determined from the motor driving information and the running speed information.
A third aspect of the present application provides a robot including a robot body, and a memory and a processor provided on the robot body, the memory and the processor being coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the state determining method in the first aspect.
A fourth aspect of the present application provides a computer readable storage medium having stored thereon program instructions which, when executed by a processor, implement the state determining method of the first aspect described above.
According to the above scheme, reference information of the robot is acquired, where the reference information includes at least one of: measured state information of the robot at a plurality of times, and actual running information at the current time; the state noise of the robot is determined based on the reference information, and the actual state information at the current time is obtained by using the state noise. No large number of particles needs to be simulated during state determination, which helps improve the speed of state determination. Moreover, because the state noise is determined from the measured state information and/or the current actual running information, it can be gauged from the external-measurement perspective of the robot and/or from the robot's own state perspective, so the state noise is closer to the actual situation, improving the accuracy of the subsequently determined actual state information. Further, the actual running information is set to include the running angle information, motor driving information, and running speed information of the robot, so that the state transition noise is obtained from at least one of the first state noise (determined from the running angle information and running speed information) and the second state noise (determined from the motor driving information and running speed information), which helps improve the accuracy of the state transition noise.
Drawings
FIG. 1 is a flow chart of an embodiment of a method for determining a state of a robot according to the present application;
FIG. 2 is a flow chart for determining actual state information of a robot using Kalman filtering;
FIG. 3 is a schematic framework diagram of an embodiment of the state determining apparatus of the robot of the present application;
FIG. 4 is a schematic framework diagram of an embodiment of a robot of the present application;
FIG. 5 is a schematic framework diagram of an embodiment of a computer readable storage medium of the present application.
Detailed Description
The following describes embodiments of the present application in detail with reference to the drawings.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, interfaces, techniques, etc., in order to provide a thorough understanding of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship. Further, "a plurality" herein means two or more.
Referring to fig. 1, fig. 1 is a flowchart illustrating an embodiment of a method for determining a state of a robot according to the present application. Specifically, the method may include the steps of:
step S11: and acquiring reference information of the robot.
In an embodiment of the present disclosure, the reference information of the robot may include at least one of: the measuring state information of the robot corresponding to a plurality of moments and the actual running information of the robot corresponding to the current moment.
It should be noted that the state of the robot may change between times; for example, the robot may have moved at the current time relative to the previous time. Of course, in other application scenarios the state of the robot may not change; this depends on the actual operation of the robot. For this reason, the robot needs to determine its actual state at each time in order to perform subsequent operations.
In the embodiment of the present disclosure, in order to determine the actual state information of the robot corresponding to the current time, the measurement state information corresponding to the current time may be obtained first, so that based on the steps in the embodiment of the present disclosure, the actual state information corresponding to the current time is obtained by using the measurement state information corresponding to the current time. It will be appreciated that the information described herein corresponding to a time is not necessarily obtained at that time, but may be obtained near that time. For example, the measurement state information corresponding to the current time may be acquired at the current time, and in consideration of the communication time delay, the measurement state information corresponding to the current time may be acquired at a plurality of times (for example, the first 0.5 seconds, the first 1 second, etc.) before the current time, which is not limited herein.
In one implementation scenario, the measurement state information is obtained by performing a state measurement on the robot. For example, the surrounding environment of the robot can be imaged to obtain environment image data corresponding to the current time, and the measurement state information of the robot corresponding to the current time can be determined based on that image data. The images may be captured by an imaging device installed in the robot's operating environment, or by an imaging device mounted on the robot itself, which is not limited herein. To describe the state of the robot accurately, the measurement state information and the actual state information of the robot may each include at least one of: the position of the robot, the state of the robot, and the speed of the robot. For example, the position of the robot may include the position coordinates (e.g., longitude and latitude) where the robot is located, and the state of the robot may include its travel state (e.g., acceleration). Taking the case where the measurement state information includes the position and the speed of the robot, for convenience of description the measurement state information corresponding to the current time may be expressed as:
z_k = (p, v)

where z_k denotes the measurement state information of the robot corresponding to the current time k, p denotes the position of the robot in the measurement state information, and v denotes the speed of the robot in the measurement state information.
Similarly, taking the case where the actual state information includes the position and the speed of the robot, for convenience of description the actual state information corresponding to the current time may be expressed as:
x̂_k = (p′, v′)

where x̂_k denotes the actual state information of the robot corresponding to the current time k, p′ denotes the position of the robot in the actual state information, and v′ denotes the speed of the robot in the actual state information.
In addition, the measurement state information of the robot corresponding to a plurality of moments may specifically include the measurement state information corresponding to the current time and to a plurality of times before it. Taking the current time as time k, the plurality of times before the current time may be expressed as the n times before time k, where the value of n can be set according to the actual application requirements; for example, n may be 5, 10, 15, etc., which is not limited herein.
In another implementation scenario, the actual travel information may include travel angle information, motor drive information, and travel speed information of the robot. Specifically, the travel angle information may be obtained from the steering engine control record of the robot: the robot may include a steering wheel, and the steering engine of the robot drives the steering wheel to turn by a certain angle. The motor drive information may be obtained from the motor control record of the robot: the robot may further include driving wheels, and the motors of the robot drive the driving wheels to move at a certain speed. The travel speed information may be obtained from the robot's encoders.
Step S12: based on the reference information, a state noise of the robot is determined.
The state noise of the robot refers to noise that affects the state of the robot while it travels, for example, the state transition noise generated when the robot transitions from one state to another, or the measurement interference noise generated in the process of obtaining the measurement state information of the robot, which is not limited herein.
In one implementation scenario, the measurement interference noise of the robot may be determined using the measurement state information corresponding to the current time and to several times before it; for these times, reference may be made to the foregoing description. In this way, the noise of the robot can be determined from the angle of external measurement, and the external interference experienced by the robot while traveling can be quantified.
In a specific implementation scenario, the degree of dispersion of the measurement state information at the current time and the several times before it may be obtained, and the measurement interference noise may be determined from this degree of dispersion. Specifically, the degree of dispersion may be the standard deviation or the variance of that measurement state information, which is not limited herein. This keeps the complexity and the amount of computation needed to determine the degree of dispersion low, which helps to improve the speed of state determination.
In another specific implementation scenario, the product of the degree of dispersion and a preset gain parameter may be used as the measurement interference noise. The preset gain parameter may be set according to the actual situation, which is not limited herein. Specifically, the measurement interference noise can be expressed as:
R = K_R · σ(z_{k-n:k})

where R denotes the measurement interference noise, z_{k-n:k} denotes the measurement state information corresponding to the current time k and the n times before it, σ(z_{k-n:k}) denotes the standard deviation of that measurement state information, and K_R denotes the preset gain parameter, which may be any value greater than 0 (e.g., 0.5, 1, 1.5), which is not limited herein.
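As a concrete illustration, the window-based computation of the measurement interference noise can be sketched in Python as follows. This is a minimal sketch under assumptions: the function name is invented, R is taken as a diagonal matrix with one noise level per measured component, and the standard deviation is computed per component, since the formula above only fixes R as a gain times the dispersion of z_{k-n:k}.

```python
import numpy as np

def measurement_interference_noise(z_window, gain=1.0):
    # z_window: (n+1, d) array of measurement state vectors z_{k-n}..z_k,
    # e.g. d = 2 for [position, speed]; gain is the preset parameter K_R.
    z_window = np.asarray(z_window, dtype=float)
    sigma = z_window.std(axis=0)     # per-component standard deviation
    return gain * np.diag(sigma)     # diagonal R, one entry per component

# Example: a window of 6 measurements (n = 5) of [position, speed].
z = [[0.0, 1.0], [0.1, 1.1], [0.2, 0.9],
     [0.3, 1.0], [0.4, 1.2], [0.5, 1.0]]
R = measurement_interference_noise(z, gain=0.5)
```

Under this sketch, a larger K_R makes the filter trust the measurements less whenever they scatter widely over the window.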
In another implementation scenario, the state transition noise of the robot may be determined using the actual travel information at the current time. Taking the current time as time k, the actual travel information at time k can be used to determine the state transition noise of the robot, so that the noise of the robot is determined from the angle of the robot's own state and the internal interference experienced by the robot while traveling can be quantified. Specifically, the state transition noise of the robot may be obtained from at least one of a first state noise, obtained using the travel angle information and the travel speed information, and a second state noise, determined using the motor drive information and the travel speed information.
In a specific implementation scenario, the steering engine angle may be taken into account: the travel angle information and the travel speed information may be used to determine the first state noise of the robot, and the state transition noise of the robot determined from the first state noise. For example, only the travel angle information and the travel speed information may be used to determine the first state noise, which is then taken directly as the state transition noise of the robot.
In another specific implementation scenario, the second state noise of the robot may be determined from the motor's perspective using the motor drive information and the travel speed information, and the state transition noise of the robot determined from the second state noise. For example, only the motor drive information and the travel speed information may be used to determine the second state noise, which is then taken directly as the state transition noise of the robot.
In still another specific implementation scenario, the first state noise of the robot may be determined using the travel angle information and the travel speed information, and the second state noise of the robot determined using the motor drive information and the travel speed information, so that the state transition noise of the robot is obtained from both the first state noise and the second state noise. Considering the steering engine and the motor at the same time in this way is beneficial to improving the accuracy of the state transition noise.
Specifically, when the state transition noise is obtained from both the first state noise and the second state noise, the two may be weighted to obtain the state transition noise. The weights corresponding to the first and second state noises may be set according to the actual situation: when the first state noise is more important than the second, its weight may be set larger than that of the second state noise; when the second state noise is more important than the first, its weight may be set larger than that of the first state noise; and the two weights may also be set equal, for example, both to 0.5.
Specifically, the robot may include driving wheels for driving the robot to travel and a steering wheel for changing the travel direction of the robot. The travel speed information may include an actual speed difference between the driving wheels of the robot; for example, if the robot includes two driving wheels, the speed difference between the two driving wheels is the actual speed difference, denoted e_w for convenience of description. The travel angle information may include an actual steering angle of the steering wheel of the robot, denoted α. A first mapping relationship between speed difference and steering angle (denoted f_1) may be used to map the actual steering angle α to the corresponding theoretical speed difference f_1(α), so that the difference between the actual speed difference e_w and the theoretical speed difference f_1(α) can be used to determine the first state noise; for example, the square of the difference between e_w and f_1(α) may be taken as the first state noise. The first mapping relationship may be obtained by statistical analysis of a plurality of pairs of speed differences and steering angles collected in advance; for example, during normal travel of the robot, M pairs of speed differences and steering angles are collected and fitted to obtain the first mapping relationship, where the specific value of M may be set according to actual conditions and is not limited herein.
Specifically, the travel speed information may further include an actual average speed of the driving wheels, i.e., the mean of the speeds of the driving wheels of the robot. For example, if the robot includes two driving wheels, the average of their speeds is the actual average speed, denoted v_w for convenience of description. The motor drive information may include an actual average drive signal value of the motors of the robot, i.e., the mean of the signals of the motors corresponding to the driving wheels; for example, if the robot includes two driving wheels and the drive signals are pulse width modulation signals, the actual average drive signal value may be the mean of the pulse width modulation signals of the two motors, denoted p_w. A second mapping relationship between average speed and average drive signal value (denoted f_2) may be used to map the actual average drive signal value to the corresponding theoretical average speed f_2(p_w), so that the difference between the actual average speed and the theoretical average speed can be used to determine the second state noise; for example, the square of the difference between v_w and f_2(p_w) may be taken as the second state noise. The second mapping relationship may be obtained by statistical analysis of a plurality of pairs of average speeds and average drive signal values collected in advance.
For example, in the normal running process of the robot, N pairs of average speed and average driving signal value are collected, and N pairs of average speed and average driving signal value are fitted to obtain a second mapping relationship between average speed and average driving signal value, where the specific value of N may be set according to the actual situation, and is not limited herein.
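The fitting of the two mappings can be sketched as follows. The calibration data and the choice of a first-order polynomial fit are illustrative assumptions, since the patent only states that the collected pairs are fitted without fixing the model.

```python
import numpy as np

# Hypothetical calibration pairs collected during normal travel.
angles      = np.array([-0.4, -0.2, 0.0, 0.2, 0.4])       # steering angle (rad)
speed_diffs = np.array([-0.21, -0.10, 0.0, 0.11, 0.20])   # wheel speed diff (m/s)
pwm_values  = np.array([20.0, 40.0, 60.0, 80.0])          # average drive signal
avg_speeds  = np.array([0.5, 1.0, 1.5, 2.0])              # average speed (m/s)

# f1: steering angle -> theoretical speed difference
f1 = np.poly1d(np.polyfit(angles, speed_diffs, deg=1))
# f2: average drive signal value -> theoretical average speed
f2 = np.poly1d(np.polyfit(pwm_values, avg_speeds, deg=1))

theoretical_diff  = f1(0.3)   # theoretical speed difference at alpha = 0.3
theoretical_speed = f2(50.0)  # theoretical average speed at p_w = 50
```

Once fitted, f_1 and f_2 stay fixed; at run time the robot only evaluates them at the current steering angle and drive signal value.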
Through the above steps, the state transition noise of the robot can be obtained, which can specifically be expressed as:
Q = k_1·(f_1(α) − e_w)² + k_2·(f_2(p_w) − v_w)²

where Q denotes the state transition noise of the robot, k_1 denotes the weight corresponding to the first state noise, k_2 denotes the weight corresponding to the second state noise, (f_1(α) − e_w)² denotes the first state noise, (f_2(p_w) − v_w)² denotes the second state noise, e_w denotes the actual speed difference, f_1 denotes the first mapping relationship, α denotes the actual steering angle, v_w denotes the actual average speed, f_2 denotes the second mapping relationship, and p_w denotes the actual average drive signal value.
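The weighted combination above can be sketched directly. The toy mapping functions, input values, and equal weights below are illustrative assumptions, not values from the patent.

```python
def state_transition_noise(alpha, e_w, p_w, v_w, f1, f2, k1=0.5, k2=0.5):
    # Q = k1*(f1(alpha) - e_w)^2 + k2*(f2(p_w) - v_w)^2
    first_noise = (f1(alpha) - e_w) ** 2    # steering-engine consistency term
    second_noise = (f2(p_w) - v_w) ** 2     # motor-drive consistency term
    return k1 * first_noise + k2 * second_noise

# Toy linear mappings standing in for the fitted f1 and f2 (assumptions):
f1 = lambda a: 0.5 * a       # steering angle -> theoretical speed difference
f2 = lambda p: 0.025 * p     # drive signal value -> theoretical average speed
Q = state_transition_noise(alpha=0.2, e_w=0.12, p_w=60.0, v_w=1.4, f1=f1, f2=f2)
```

When the measured speed difference and average speed agree with what the steering and motor commands predict, Q is small and the filter trusts its motion model; slipping or disturbed wheels drive Q up.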
In one implementation scenario, both the state transition noise and the measurement interference noise may be obtained through the above steps. Alternatively, in practical applications, the state transition noise may be obtained through the above steps while the measurement interference noise is set to a fixed value: under ideal conditions the measurement interference noise may be set to 0 (i.e., the state transition noise is used directly as the state noise of the robot), or it may be set to a non-zero value such as 1, 2 or 3, or to white noise, which is not limited herein. Conversely, the measurement interference noise may be obtained through the above steps while the state transition noise is set to a fixed value: under ideal conditions the state transition noise may be set to 0 (i.e., the measurement interference noise is used directly as the state noise of the robot), or it may be set to a non-zero value such as 1, 2 or 3, or to white noise, which is not limited herein.
Step S13: obtain the actual state information of the robot corresponding to the current time using the state noise.
Specifically, the actual state information of the robot at the previous time and the measurement state information at the current time can be processed using the state noise to obtain the actual state information of the robot at the current time. For example, they may be processed by Kalman filtering combined with the state noise to obtain the actual state information of the robot corresponding to the current time.
In one implementation scenario, the filter gain may be determined based on the state noise; the actual state information of the robot corresponding to the previous time and the actual travel information at the previous time are then used, via Kalman filtering with this filter gain, to predict the state information corresponding to the current time, and the predicted state information is fused with the measurement state information at the current time to obtain the actual state information of the robot corresponding to the current time.
In a specific implementation scenario, referring to fig. 2, fig. 2 is a schematic flow chart of determining the actual state information of the robot using Kalman filtering. Specifically, the actual state information of the robot may be determined by Kalman filtering through the following steps:
Step S21: process the posterior estimated covariance corresponding to the previous time using the state transition parameter and the state transition noise of the robot to obtain the prior estimated covariance corresponding to the current time.
Specifically, taking the current time as time k and the previous time as time k−1, this can be expressed as:
P_k⁻ = A·P_{k-1}·Aᵀ + Q

where P_k⁻ denotes the prior estimated covariance corresponding to the current time, and P_{k-1} denotes the posterior estimated covariance corresponding to the previous time, i.e., the covariance of the actual state information x̂_{k-1} at the previous time, representing the uncertainty of that actual state information. It should be noted that the embodiments of the present disclosure describe how the actual state information corresponding to the current time is acquired; the actual state information corresponding to the previous time may be obtained by the same steps and is not described again here. The specific manner of obtaining the posterior estimated covariance is described later in the embodiments of the disclosure and is not repeated here. Further, A denotes the state transition parameter of the robot in matrix form; the state transition parameter A is used to represent the motion model of the robot. For example, A may indicate that the robot accelerates at a certain acceleration or moves at a certain speed, and may be set by the user. Aᵀ denotes the transpose of the state transition parameter, and Q denotes the state transition noise, whose calculation is described above and not repeated here.
Step S22: process the prior estimated covariance corresponding to the current time using the transformation parameter from state information to measurement information and the measurement interference noise to obtain the filter gain corresponding to the current time.
Specifically, taking the current time as time k and the previous time as time k−1, this can be expressed as:
K_k = P_k⁻·Hᵀ·(H·P_k⁻·Hᵀ + R)⁻¹

where K_k denotes the filter gain corresponding to the current time; H denotes the transformation parameter in matrix form, which describes the transformation relationship between the actual state information and the measurement state information, for example that the two are linearly related. The transformation parameter H may be set by the user, e.g., as an identity matrix, which is not limited herein. Hᵀ denotes the transpose of the transformation parameter; R denotes the measurement interference noise, whose calculation is described above and not repeated here. P_k⁻ denotes the prior estimated covariance corresponding to the current time, i.e., the covariance of the predicted state information x̂_k⁻ corresponding to the current time, representing the uncertainty of the predicted state information; its calculation is described above and not repeated here.
Thus, the filter gain corresponding to the current time can be determined from the measurement interference noise and the state transition noise. Specifically, at least one of the two is calculated through the foregoing steps: the measurement interference noise alone, the state transition noise alone, or both, which is not limited herein.
Step S23: process the actual state information of the robot corresponding to the previous time and the actual travel information of the robot at the previous time, using the state transition parameter of the robot and the input state transition parameter respectively, to obtain the predicted state information corresponding to the current time.
Specifically, taking the current time as time k and the previous time as time k−1, this can be expressed as:
x̂_k⁻ = A·x̂_{k-1} + B·u_{k-1}

where x̂_k⁻ denotes the predicted state information corresponding to the current time and x̂_{k-1} denotes the actual state information corresponding to the previous time. As noted above, the embodiments of the present disclosure describe how the actual state information corresponding to the current time is acquired; the actual state information corresponding to the previous time may be obtained by the same steps and is not described again here. In particular, when k equals 0, the actual state information x̂_{k-1} may be initialized to 0. u_{k-1} denotes the actual travel information corresponding to the previous time, which may include the travel angle information, motor drive information and travel speed information of the robot; for the specific acquisition manner, reference may be made to the related description in the foregoing embodiments. A denotes the state transition parameter of the robot, as described above. B denotes the input state transition parameter, which describes the conversion relationship between the input actual travel information and the state information; through B, the input actual travel information can be converted into state information and combined with the actual state information of the robot at the previous time to obtain the predicted state information of the robot corresponding to the current time, i.e., the theoretical state information of the robot at the current time.
Step S24: fuse the predicted state information at the current time with the measurement state information at the current time to obtain the actual state information of the robot corresponding to the current time.
Specifically, taking the current time as time k and the previous time as time k−1, this can be expressed as:
x̂_k = x̂_k⁻ + K_k·(z_k − H·x̂_k⁻)

where x̂_k denotes the actual state information corresponding to the current time, x̂_k⁻ denotes the predicted state information corresponding to the current time, K_k denotes the filter gain corresponding to the current time, z_k denotes the measurement state information corresponding to the current time, and H denotes the transformation parameter from state information to measurement information, as described above. That is, z_k − H·x̂_k⁻ denotes the residual between the measurement state information and the predicted state information; the residual is weighted by the filter gain K_k and used to correct the predicted state information, yielding the actual state information x̂_k of the robot corresponding to the current time.
Step S25: update the prior estimated covariance corresponding to the current time using the filter gain and the transformation parameter to obtain the posterior estimated covariance corresponding to the current time.
Specifically, taking the current time as time k and the previous time as time k−1, this can be expressed as:
P_k = (I − K_k·H)·P_k⁻

where P_k denotes the posterior estimated covariance corresponding to the current time, I denotes the identity matrix, K_k denotes the filter gain in matrix form, H denotes the transformation parameter in matrix form, and P_k⁻ denotes the prior estimated covariance corresponding to the current time in matrix form. In particular, when k equals 0, the posterior estimated covariance P_k may be initialized to an all-zero matrix.
After the prior estimated covariance corresponding to the current time is updated to the posterior estimated covariance, the steps of the embodiments of the present disclosure can be repeated to determine the actual state information corresponding to the next time (i.e., time k+1); cycling in this way, the actual state information of the robot corresponding to each time can be determined throughout the travel of the robot, which is not described in detail here.
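Steps S21 through S25 can be collected into one Kalman update cycle. The sketch below assumes a toy 2-D state [position, speed] with a constant-velocity motion model and an identity observation matrix; all matrices and numeric values are illustrative assumptions, not values from the patent.

```python
import numpy as np

def kalman_step(x_prev, P_prev, u_prev, z_k, A, B, H, Q, R):
    P_prior = A @ P_prev @ A.T + Q                              # S21
    K = P_prior @ H.T @ np.linalg.inv(H @ P_prior @ H.T + R)    # S22
    x_prior = A @ x_prev + B @ u_prev                           # S23
    x_post = x_prior + K @ (z_k - H @ x_prior)                  # S24
    P_post = (np.eye(len(x_prev)) - K @ H) @ P_prior            # S25
    return x_post, P_post

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])      # constant-velocity motion model, dt = 1
B = np.array([[0.0],
              [0.1]])           # scalar travel input nudges the speed
H = np.eye(2)                   # measurements observe the state directly
Q = 0.01 * np.eye(2)            # state transition noise (illustrative)
R = 0.05 * np.eye(2)            # measurement interference noise (illustrative)

x0 = np.zeros(2)                # k = 0: actual state initialized to 0
P0 = np.zeros((2, 2))           # k = 0: posterior covariance all zeros
u0 = np.array([1.0])            # travel input at the previous time
z1 = np.array([0.2, 0.15])      # measurement at the current time
x1, P1 = kalman_step(x0, P0, u0, z1, A, B, H, Q, R)
```

Each cycle consumes the previous posterior (x̂_{k-1}, P_{k-1}), the previous travel input u_{k-1} and the current measurement z_k, and returns the posterior for time k, ready for the next cycle.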
According to the above scheme, the reference information of the robot is acquired, the reference information including at least one of: measurement state information of the robot corresponding to a plurality of moments, and actual travel information of the robot corresponding to the current moment. The state noise of the robot is determined based on the reference information, and the state noise is then used to obtain the actual state information of the robot corresponding to the current time. Since a large number of particles need not be simulated in the state determination process, the speed of state determination is improved. In addition, because the state noise is determined from the measurement state information at a plurality of moments and/or the actual travel information at the current moment, it can be measured from the angle of external measurement of the robot and/or the angle of the robot's own state, so that the state noise is closer to the actual situation, which in turn improves the accuracy of the subsequently determined actual state information.
In some disclosed embodiments, in order to enable the user to perceive abnormal state noise in time and improve the user experience, a preset prompt may be issued when the state noise does not satisfy a preset noise condition. The preset prompt may be implemented as at least one of sound, light, and text, for example, playing a prompt voice, lighting a prompt lamp, or outputting a prompt text, which is not limited herein.
In a specific implementation scenario, the state noise may include the measurement interference noise obtained from the plurality of pieces of measurement state information; for the specific acquisition manner, reference may be made to the relevant steps in the foregoing embodiments, which are not repeated here. The preset noise condition may include that the measurement interference noise is smaller than a first noise threshold, whose specific value may be set according to the actual situation. If the measurement interference noise does not satisfy the preset noise condition, a first early-warning message can be output to indicate that the state measurement is being interfered with, so that the user perceives the interference in time, improving the user experience.
In another specific implementation scenario, the state noise may include the state transition noise obtained from the actual travel information at the current time; for the specific acquisition manner, reference may be made to the relevant steps in the foregoing embodiments, which are not repeated here. The preset noise condition may include that the state transition noise is smaller than a second noise threshold, whose specific value may be set according to the actual situation, which is not limited herein. If the state transition noise does not satisfy the preset noise condition, a second early-warning message can be output to indicate that the robot body is at risk of slipping, so that the user perceives the risk in time, improving the user experience.
Both the first early-warning message and the second early-warning message may be implemented as at least one of sound, light, and text, for example, playing a prompt voice, lighting a prompt lamp, or outputting a prompt text, which is not limited herein.
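The early-warning checks can be sketched as a simple threshold test; the threshold values, function name, and message texts below are illustrative assumptions, as the patent leaves the thresholds to the implementer.

```python
def check_state_noise(meas_noise, trans_noise,
                      first_threshold=0.5, second_threshold=0.5):
    # Return the warning messages triggered by the preset noise conditions:
    # the noise must stay below its threshold, otherwise a warning is raised.
    warnings = []
    if meas_noise >= first_threshold:       # condition: R < first threshold
        warnings.append("warning: state measurement is being interfered with")
    if trans_noise >= second_threshold:     # condition: Q < second threshold
        warnings.append("warning: robot body is at risk of slipping")
    return warnings

msgs = check_state_noise(meas_noise=0.8, trans_noise=0.1)
```

In practice the returned messages would be routed to the sound, light, or text prompt described above.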
Referring to fig. 3, fig. 3 is a schematic diagram illustrating an embodiment of a state determining apparatus 30 of a robot according to the present application. The state determining apparatus 30 of the robot specifically includes a measurement state acquiring module 31, a state noise determining module 32, and an actual state acquiring module 33. The measurement state acquiring module 31 is configured to acquire reference information of the robot, the reference information including at least one of: measurement state information of the robot corresponding to a plurality of moments, and actual travel information of the robot corresponding to the current moment; the state noise determining module 32 is configured to determine the state noise of the robot based on the reference information; and the actual state acquiring module 33 is configured to obtain the actual state information of the robot corresponding to the current time using the state noise.
According to the above scheme, the reference information of the robot is acquired, the reference information including at least one of: measurement state information of the robot corresponding to a plurality of moments, and actual travel information of the robot corresponding to the current moment. The state noise of the robot is determined based on the reference information, and the state noise is then used to obtain the actual state information of the robot corresponding to the current time. Since a large number of particles need not be simulated in the state determination process, the speed of state determination is improved. In addition, because the state noise is determined from the measurement state information at a plurality of moments and/or the actual travel information at the current moment, it can be measured from the angle of external measurement of the robot and/or the angle of the robot's own state, so that the state noise is closer to the actual situation, which in turn improves the accuracy of the subsequently determined actual state information.
In some disclosed embodiments, the state noise determination module 32 includes a measurement interference determination sub-module for determining a measurement interference noise of the robot using measurement state information corresponding to a current time and several times before the current time, and the state noise determination module 32 includes a state transition determination sub-module for determining a state transition noise of the robot using actual travel information at the current time.
Different from the embodiment disclosed above, the measurement interference noise of the robot is determined by using the measurement state information corresponding to the current time and a plurality of times before the current time, so that the noise of the robot can be determined from an external measurement angle, and the external interference of the robot in the running process can be measured; the state transition noise of the robot is determined by using the actual running information at the current moment, so that the noise of the robot can be determined from the angle of the state of the robot, and the internal interference of the robot in the running process can be measured.
In some disclosed embodiments, the measurement interference determination submodule includes a discrete acquisition unit for acquiring a degree of dispersion of measurement state information at a current time and a plurality of times before the current time, and the measurement interference determination submodule includes a noise determination unit for determining measurement interference noise using the degree of dispersion.
Different from the above disclosed embodiments, the degree of dispersion of the measurement state information at the current moment and several moments before it is acquired, and the measurement interference noise is determined by using the degree of dispersion, so that the external interference on the robot during running can be accurately measured.
In some disclosed embodiments, the degree of dispersion of the measured state information at the current time and several times before it is the standard deviation of the measured state information at the current time and several times before it.
Unlike the previously disclosed embodiments, setting the degree of dispersion of the measurement state information at the current moment and several moments before it to be the standard deviation of that measurement state information helps reduce the complexity and the amount of calculation of determining the degree of dispersion, and helps improve the speed of state determination.
In some disclosed embodiments, the noise determination unit is specifically configured to take the product between the degree of dispersion and a preset gain parameter as the measured interference noise.
Different from the above disclosed embodiments, the product between the discrete degree and the preset gain parameter is used as the measurement interference noise, which can be beneficial to improving the accuracy of measuring the interference noise and the accuracy of determining the state.
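The dispersion-based measurement interference noise described above can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the function and parameter names (`window`, `gain`) are invented here, and the embodiment's stated choice of standard deviation is used as the degree of dispersion.

```python
import statistics

def measurement_interference_noise(window, gain=1.0):
    """Estimate measurement interference noise as the degree of dispersion
    (here: standard deviation) of the measured states over the current
    moment and several moments before it, scaled by a preset gain parameter.
    """
    dispersion = statistics.pstdev(window)  # standard deviation of the window
    return gain * dispersion

# A steady measurement sequence yields zero interference noise,
# while a jittery sequence yields a higher value.
steady = measurement_interference_noise([1.0, 1.0, 1.0, 1.0], gain=2.0)
noisy = measurement_interference_noise([1.0, 3.0, 0.0, 2.5], gain=2.0)
```

The gain parameter lets the same dispersion statistic be rescaled to match the units and magnitude expected by the downstream filter.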
In some disclosed embodiments, the actual driving information includes driving angle information, motor driving information, and driving speed information of the robot, and the state transition determination submodule is specifically configured to obtain state transition noise of the robot by using at least one of the first state noise and the second state noise; the first state noise is determined by using the driving angle information and the driving speed information, and the second state noise is determined by using the motor driving information and the driving speed information.
Unlike the previously disclosed embodiments, the actual driving information is set to include the driving angle information, the motor driving information, and the driving speed information of the robot, so that the state transition noise of the robot is obtained using at least one of the first state noise and the second state noise, the first state noise being determined using the driving angle information and the driving speed information, and the second state noise being determined using the motor driving information and the driving speed information, it is possible to facilitate improvement of the accuracy of the state transition noise.
In some disclosed embodiments, the robot includes driving wheels and steering wheels, the driving wheels are used for driving the robot to travel, the steering wheels are used for changing the traveling direction of the robot, the traveling speed information includes an actual speed difference between the driving wheels of the robot, the traveling angle information includes an actual steering angle of the steering wheels of the robot, the first state noise determining unit includes a first mapping subunit for mapping the actual steering angle by using a first mapping relationship between the speed difference and the steering angle to obtain a theoretical speed difference corresponding to the actual steering angle, and the first state noise determining unit includes a first state noise determining subunit for determining the first state noise by using a difference between the actual speed difference and the theoretical speed difference.
Unlike the previously disclosed embodiments, the travel speed information is set to include an actual speed difference between the driving wheels of the robot, and the travel angle information is set to include an actual steering angle of the steering wheel of the robot, so that the actual steering angle is mapped using a first mapping relationship between the speed difference and the steering angle to obtain a theoretical speed difference corresponding to the actual steering angle, and the first state noise is determined using a difference between the actual speed difference and the theoretical speed difference, so that the first state noise of the robot can be determined from the angle of the steering wheel of the robot.
In some disclosed embodiments, the robot comprises a driving wheel, the driving wheel is used for driving the robot to run, the running speed information comprises an actual average speed of the driving wheel of the robot, the motor driving information comprises an actual average driving signal value of a motor of the robot, the second state noise determining unit comprises a second mapping subunit for mapping the actual average driving signal value by using a second mapping relation between the average speed and the average driving signal value to obtain a theoretical average speed corresponding to the actual average driving signal value, and the second state noise determining unit comprises a second state noise determining subunit for determining the second state noise by using a difference between the actual average speed and the theoretical average speed.
Unlike the previously disclosed embodiments, the travel speed information is set to include the actual average speed of the robot driving wheel, the motor driving information is set to include the actual average driving signal value of the robot motor, so that the actual average driving signal value is mapped using the second mapping relationship between the average speed and the average driving signal value to obtain the theoretical average speed corresponding to the actual average driving signal value, and the second state noise is determined using the difference between the actual average speed and the theoretical average speed, so that the second state noise of the robot can be determined from the angle of the robot driving wheel.
In some disclosed embodiments, the first state noise determination subunit is specifically configured to take as the first state noise the square of the difference between the actual speed difference and the theoretical speed difference, and the second state noise determination subunit is specifically configured to take as the second state noise the square of the difference between the actual average speed and the theoretical average speed.
Unlike the foregoing disclosed embodiments, the square of the difference between the actual speed difference and the theoretical speed difference is taken as the first state noise, and the square of the difference between the actual average speed and the theoretical average speed is taken as the second state noise, so that the complexity and the calculation amount of the calculation of the first state noise and the second state noise can be reduced, and the speed of state determination can be improved.
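The two squared-residual noises above can be illustrated as follows. This is a hedged sketch: the linear mappings stand in for the first and second mapping relationships, whose actual calibrated forms are not given in the text, and all names and constants are hypothetical.

```python
def first_state_noise(actual_speed_diff, actual_steer_angle, angle_to_diff):
    # Map the actual steering angle to a theoretical wheel-speed difference
    # via the first mapping relationship, then square the residual between
    # the actual and theoretical speed differences.
    theoretical_diff = angle_to_diff(actual_steer_angle)
    return (actual_speed_diff - theoretical_diff) ** 2

def second_state_noise(actual_avg_speed, actual_avg_signal, signal_to_speed):
    # Map the actual average drive-signal value to a theoretical average
    # speed via the second mapping relationship, then square the residual.
    theoretical_speed = signal_to_speed(actual_avg_signal)
    return (actual_avg_speed - theoretical_speed) ** 2

# Hypothetical linear mappings standing in for the calibrated relationships.
angle_to_diff = lambda angle_rad: 0.8 * angle_rad   # speed diff per radian
signal_to_speed = lambda pwm: 0.01 * pwm            # speed per signal count

q1 = first_state_noise(0.5, 0.5, angle_to_diff)     # residual 0.1, squared
q2 = second_state_noise(1.2, 100, signal_to_speed)  # residual 0.2, squared
```

A large residual means the wheels are not moving as the steering or drive commands predict, which is exactly the slipping condition the state transition noise is meant to capture.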
In some disclosed embodiments, the actual state obtaining module 33 is specifically configured to obtain, using state noise, actual state information of the robot corresponding to the current time, where the obtaining includes: and processing the actual state information of the robot at the previous moment and the measured state information at the current moment by using the state noise to obtain the actual state information of the robot at the current moment.
Different from the foregoing embodiment, the state noise is utilized to process the measurement state information of the robot corresponding to the current time and the actual state information corresponding to the previous time, which is favorable for balancing the measurement state information of the robot at the current time and the actual state information at the previous time, so that the actual state information obtained by determination is corrected relative to the measurement state information, and further, the accuracy of determining the state of the robot can be favorable for improving.
In some disclosed embodiments, the actual state obtaining module 33 is specifically configured to determine a filtering gain based on the state noise, predict, by using kalman filtering of the filtering gain, the actual state information of the robot corresponding to the previous time and the actual running information of the previous time, obtain the predicted state information corresponding to the current time, and fuse the predicted state information of the current time with the measured state information of the current time to obtain the actual state information of the robot corresponding to the current time.
Different from the above disclosed embodiment, the filtering gain is determined based on the state noise, and the actual state information of the robot corresponding to the previous time and the actual running information of the previous time are predicted by using the kalman filtering of the filtering gain, so as to obtain the predicted state information corresponding to the current time, and the predicted state information of the current time and the measured state information of the current time are fused, so that the robustness to the external signal can be enhanced, and the actual state information corresponding to the current time can be accurately determined.
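A scalar Kalman-filter step shows how the adaptively determined noises feed the filtering gain. This is a one-dimensional textbook sketch under assumed names (`a`, `b`, `h`, `q`, `r`), not the multi-dimensional implementation of the embodiment.

```python
def kalman_step(x_prev, p_prev, u_prev, z_now,
                a=1.0, b=1.0, h=1.0, q=0.0, r=1.0):
    """One scalar Kalman step: predict the current state from the previous
    actual state x_prev and previous running (control) input u_prev, then
    fuse the prediction with the current measurement z_now.

    q is the state transition noise and r the measurement interference
    noise; the filtering gain k is recomputed from them at every step.
    """
    # Prediction using state transition parameter a and input parameter b
    x_pred = a * x_prev + b * u_prev
    p_pred = a * p_prev * a + q            # prior estimate covariance
    # Filtering gain determined by the noise terms
    k = p_pred * h / (h * p_pred * h + r)
    # Fuse predicted state information with the measured state information
    x_now = x_pred + k * (z_now - h * x_pred)
    p_now = (1.0 - k * h) * p_pred         # posterior estimate covariance
    return x_now, p_now

# With large measurement interference noise r the fused state stays near
# the prediction; with small r it moves toward the measurement.
trust_pred, _ = kalman_step(0.0, 1.0, 0.0, 10.0, r=1e6)
trust_meas, _ = kalman_step(0.0, 1.0, 0.0, 10.0, r=1e-6)
```

Because r and q are recomputed from the dispersion and residual statistics each moment, the gain automatically shifts trust between the measurement and the motion model as external or internal interference grows.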
In some disclosed embodiments, the state determining device 30 of the robot further includes a prompt module, configured to perform a preset prompt when the state noise does not meet the preset noise condition.
Different from the embodiment disclosed in the foregoing, when the state noise does not meet the preset noise condition, the preset prompt is performed, so that the user can perceive abnormal state noise, and the user experience is improved.
In some disclosed embodiments, the state noise includes measurement interference noise obtained by using the plurality of pieces of measurement state information, and the preset noise condition includes: the measurement interference noise is smaller than a first noise threshold; the prompt module includes a first early warning sub-module, configured to output a first early warning message to prompt that the state measurement is interfered with when the measurement interference noise does not meet the preset noise condition. And/or, the state noise includes state transition noise obtained by using the actual running information at the current moment, and the preset noise condition includes: the state transition noise is smaller than a second noise threshold; the prompt module includes a second early warning sub-module, configured to output a second early warning message to prompt that the robot body is at risk of slipping when the state transition noise does not meet the preset noise condition.
Different from the foregoing embodiments, when the measurement interference noise does not meet the preset condition, a first early warning message is output to prompt that the state measurement is interfered with, so that the user can perceive in time when the state measurement is interfered with, improving the user experience; when the state transition noise does not meet the preset condition, a second early warning message is output to prompt that the robot body is at risk of slipping, so that the user can perceive in time when the robot body is at risk of slipping, improving the user experience.
In some disclosed embodiments, the measurement state acquisition module 31 includes a data acquisition sub-module for acquiring an image of a surrounding environment of the robot to obtain environmental image data corresponding to a current time, the measurement state acquisition module 31 includes a measurement state determination sub-module for determining measurement state information of the robot corresponding to the current time based on the environmental image data of the current time, and the measurement state information and the actual state information each include at least one of: the position of the robot, the pose of the robot, the speed of the robot.
Different from the above disclosed embodiments, environmental image data corresponding to the current moment is obtained by performing image acquisition on the surrounding environment of the robot, the measurement state information of the robot corresponding to the current moment is determined based on the environmental image data of the current moment, and the measurement state information and the actual state information are each set to include at least one of the position of the robot, the pose of the robot, and the speed of the robot, so that the measurement state information of the robot corresponding to the current moment can be obtained rapidly, which further improves the speed of determining the state of the robot.
Referring to fig. 4, fig. 4 is a schematic diagram of a frame of an embodiment of a robot 40 according to the present application. The robot 40 comprises a robot body 41 and a memory 42 and a processor 43 arranged on the robot body 41, the memory 42 and the processor 43 being coupled to each other, the processor 43 being adapted to execute program instructions stored in the memory 42 for implementing the steps of any of the above-described state determining method embodiments.
In particular, the processor 43 is configured to control itself and the memory 42 to implement the steps of any of the state determination method embodiments described above. The processor 43 may also be referred to as a CPU (Central Processing Unit). The processor 43 may be an integrated circuit chip with signal processing capabilities. The processor 43 may also be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like. In addition, the processor 43 may be implemented jointly by a plurality of integrated circuit chips.
According to the scheme, the state noise is determined according to the obtained measurement state information and/or the current actual running information, and the state noise can be measured from the external measurement angle of the robot and/or the state angle of the robot, so that the state noise is more close to the actual situation, and the accuracy of the actual state information determined later is improved.
In some disclosed embodiments, the robot 40 further includes a plurality of wheels provided on the robot body 41, a motor for driving the wheels to travel, and a steering engine for driving the wheels to steer. For example, the robot includes a first wheel set connected to the motor to serve as driving wheels, and a second wheel set connected to the steering engine to serve as steering wheels. In addition, in order to obtain the running information of the robot, the robot 40 may further include a speed measuring assembly, which may be provided on the driving wheels for obtaining the speed of the driving wheels. In a specific application scenario, the robot 40 includes 4 wheels, of which the two front wheels serve as steering wheels and the two rear wheels serve as driving wheels, and each rear wheel is provided with an encoder to obtain the speed of the corresponding rear wheel. Thus, the robot can obtain its running speed by reading the encoders, and its running angle by reading the steering engine control records. Further, the robot body 41 may be provided in different shapes according to different practical application requirements. For example, for express delivery applications, the robot body 41 may be provided with the shape of a car, a minibus, or the like; alternatively, for service guidance applications, the robot body 41 may be provided with a general humanoid shape, a cartoon animal shape, or the like. The shape may be configured according to actual application requirements, which is not exemplified here.
In some disclosed embodiments, to achieve the acquisition of the measurement state information, the robot 40 may be further provided with an image pickup device, whereby the measurement state information of the robot 40 is determined by capturing the obtained environmental image by the image pickup device.
Referring to fig. 5, fig. 5 is a schematic diagram illustrating a frame of an embodiment of a computer readable storage medium 50 according to the present application. The computer readable storage medium 50 stores program instructions 501 executable by a processor, the program instructions 501 for implementing the steps of any of the above-described state determination method embodiments.
According to the scheme, the state noise is determined according to the obtained measurement state information and/or the current actual running information, and the state noise can be measured from the external measurement angle of the robot and/or the state angle of the robot, so that the state noise is more close to the actual situation, and the accuracy of the actual state information determined later is improved.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; e.g., the division into modules or units is merely a logical functional division, and there may be other division manners in actual implementation; e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling or direct coupling or communication connection shown or discussed between the parties may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, an optical disk, or other various media capable of storing program code.

Claims (15)

1. A method for determining a state of a robot, comprising:
acquiring reference information of the robot; the reference information comprises actual running information of the robot corresponding to the current moment;
determining state noise of the robot based on the reference information;
obtaining actual state information of the robot corresponding to the current moment by using the state noise;
wherein the actual running information comprises running angle information, motor driving information and running speed information of the robot; the determining the state noise of the robot based on the reference information includes:
obtaining state transition noise of the robot by using at least one of the first state noise and the second state noise;
the first state noise is determined by using the driving angle information and the driving speed information, and the second state noise is determined by using the motor driving information and the driving speed information.
2. The method of claim 1, wherein the reference information further comprises: the robot corresponds to measurement state information of a plurality of moments, and the determining state noise of the robot based on the reference information comprises:
And determining measurement interference noise of the robot by using measurement state information corresponding to the current moment and a plurality of moments before the current moment.
3. The method of claim 2, wherein determining the measurement interference noise of the robot using measurement state information corresponding to the current time and a number of times preceding the current time comprises:
acquiring the discrete degree of the measurement state information of the current moment and a plurality of moments before the current moment;
and determining the measurement interference noise by using the discrete degree.
4. A method according to claim 3, characterized in that the degree of dispersion of the measurement state information at the current time instant and several times before it is the standard deviation of the measurement state information at the current time instant and several times before it; and/or
Said determining said measured interference noise using said degree of discretization comprises:
taking the product between the discrete degree and a preset gain parameter as the measured interference noise.
5. The method according to claim 1, characterized in that the robot comprises driving wheels for driving the robot in travel and steering wheels for changing the direction of travel of the robot, the travel speed information comprising the actual speed difference between the driving wheels of the robot, the travel angle information comprising the actual steering angle of the steering wheels of the robot; before the state transition noise of the robot is obtained by using at least one of the first state noise and the second state noise, the method further comprises:
Mapping the actual steering angle by using a first mapping relation between the speed difference and the steering angle to obtain a theoretical speed difference corresponding to the actual steering angle;
determining the first state noise using a difference between the actual speed difference and the theoretical speed difference; and/or the number of the groups of groups,
the robot comprises a driving wheel, wherein the driving wheel is used for driving the robot to run, the running speed information comprises the actual average speed of the driving wheel of the robot, and the motor driving information comprises the actual average driving signal value of the motor of the robot; before the state transition noise of the robot is obtained by using at least one of the first state noise and the second state noise, the method further comprises:
mapping the actual average driving signal value by using a second mapping relation between the average speed and the average driving signal value to obtain a theoretical average speed corresponding to the actual average driving signal value;
and determining the second state noise by utilizing the difference between the actual average speed and the theoretical average speed.
6. The method of claim 5, wherein said determining said first state noise using a difference between said actual speed difference and said theoretical speed difference comprises:
Taking the square of the difference between the actual speed difference and the theoretical speed difference as the first state noise;
said determining said second state noise using a difference between said actual average speed and said theoretical average speed comprises:
and taking the square of the difference between the actual average speed and the theoretical average speed as the second state noise.
7. The method of claim 1, wherein the reference information further comprises: the measuring state information of the robot corresponding to a plurality of moments, and the obtaining the actual state information of the robot corresponding to the current moment by using the state noise comprises the following steps:
and processing the actual state information of the robot at the previous moment and the measured state information of the current moment by using the state noise to obtain the actual state information of the robot at the current moment.
8. The method according to claim 7, wherein the processing the actual state information of the robot at the previous time and the measured state information of the current time by using the state noise to obtain the actual state information of the robot at the current time includes:
Determining a filter gain based on the state noise;
predicting the actual state information of the robot corresponding to the previous moment and the actual running information of the previous moment by using Kalman filtering of the filtering gain to obtain the predicted state information corresponding to the current moment;
and fusing the predicted state information of the current moment with the measured state information of the current moment to obtain the actual state information of the robot corresponding to the current moment.
9. The method of claim 1, wherein the reference information further comprises: measurement state information of the robot corresponding to a plurality of moments; the state noise comprises state transition noise and measurement interference noise, and the obtaining actual state information of the robot corresponding to the current time by using the state noise comprises the following steps:
processing posterior estimation covariance corresponding to the previous moment by using the state transition parameters of the robot and the state transition noise to obtain prior estimation covariance corresponding to the current moment;
processing the prior estimated covariance corresponding to the current moment by utilizing the conversion parameters between the state information and the measurement interference noise to obtain the filtering gain corresponding to the current moment;
processing, by using the state transition parameters and the input conversion parameters of the robot, the actual state information of the robot corresponding to the previous moment and the actual running information of the robot at the previous moment, respectively, to obtain the predicted state information corresponding to the current moment;
and fusing the predicted state information of the current moment with the measured state information of the current moment to obtain the actual state information of the robot corresponding to the current moment.
10. The method according to claim 9, wherein after said fusing the predicted state information of the current time with the measured state information of the current time to obtain the actual state information of the robot corresponding to the current time, the method further comprises:
updating the prior estimated covariance corresponding to the current moment by using the filtering gain and the conversion parameters to obtain the posterior estimated covariance corresponding to the current moment, and re-executing the step of processing the posterior estimated covariance corresponding to the previous moment by using the state transition parameters and the state transition noise of the robot to obtain the prior estimated covariance corresponding to the current moment, and the subsequent steps, so as to determine the actual state information corresponding to the next moment.
11. The method of claim 1, wherein after the determining the state noise of the robot based on the reference information, the method further comprises:
and if the state noise does not meet the preset noise condition, carrying out preset prompt.
12. The method of claim 1, wherein the reference information further comprises: the robot corresponding to measurement state information of a plurality of moments, the acquiring the reference information of the robot comprises:
image acquisition is carried out on the surrounding environment of the robot, so that environment image data corresponding to the current moment is obtained;
determining measurement state information of the robot corresponding to the current moment based on the environmental image data of the current moment;
the measured state information and actual state information each include at least one of: the position of the robot, the pose of the robot, the speed of the robot.
13. A state determining apparatus of a robot, comprising:
the measurement state acquisition module is used for acquiring the reference information of the robot; the reference information comprises actual running information of the robot corresponding to the current moment;
A state noise determining module, configured to determine state noise of the robot based on the reference information;
the actual state acquisition module is used for acquiring actual state information of the robot corresponding to the current moment by utilizing the state noise;
the state noise determining module comprises a state transition determining sub-module, and the actual running information comprises running angle information, motor driving information and running speed information of the robot; the state transition determining submodule is used for obtaining state transition noise of the robot by utilizing at least one of first state noise and second state noise; the first state noise is determined by using the driving angle information and the driving speed information, and the second state noise is determined by using the motor driving information and the driving speed information.
14. A robot, comprising a robot body and a memory and a processor disposed on the robot body, the processor and the memory being coupled to each other, wherein the processor is configured to execute program instructions stored in the memory to implement the state determination method of any one of claims 1 to 12.
15. A computer-readable storage medium having stored thereon program instructions which, when executed by a processor, implement the state determination method of any one of claims 1 to 12.
CN202111515984.3A 2020-08-26 2020-08-26 Method and device for determining state of robot, robot and storage medium Active CN114260890B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111515984.3A CN114260890B (en) 2020-08-26 2020-08-26 Method and device for determining state of robot, robot and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010872662.3A CN112025706B (en) 2020-08-26 2020-08-26 Method and device for determining state of robot, robot and storage medium
CN202111515984.3A CN114260890B (en) 2020-08-26 2020-08-26 Method and device for determining state of robot, robot and storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202010872662.3A Division CN112025706B (en) 2020-08-26 2020-08-26 Method and device for determining state of robot, robot and storage medium

Publications (2)

Publication Number Publication Date
CN114260890A CN114260890A (en) 2022-04-01
CN114260890B true CN114260890B (en) 2023-11-03

Family

ID=73579964

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202111515984.3A Active CN114260890B (en) 2020-08-26 2020-08-26 Method and device for determining state of robot, robot and storage medium
CN202010872662.3A Active CN112025706B (en) 2020-08-26 2020-08-26 Method and device for determining state of robot, robot and storage medium
CN202111475203.2A Active CN114131604B (en) 2020-08-26 2020-08-26 Method and device for determining state of robot, robot and storage medium

Family Applications After (2)

Application Number Title Priority Date Filing Date
CN202010872662.3A Active CN112025706B (en) 2020-08-26 2020-08-26 Method and device for determining state of robot, robot and storage medium
CN202111475203.2A Active CN114131604B (en) 2020-08-26 2020-08-26 Method and device for determining state of robot, robot and storage medium

Country Status (4)

Country Link
JP (1) JP2022550231A (en)
KR (3) KR102412066B1 (en)
CN (3) CN114260890B (en)
WO (1) WO2022041797A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114260890B (en) * 2020-08-26 2023-11-03 北京市商汤科技开发有限公司 Method and device for determining state of robot, robot and storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH03122541A (en) * 1989-10-04 1991-05-24 Nissan Motor Co Ltd Apparatus for estimating vehicle state quantities
JP2002331478A (en) * 2001-05-02 2002-11-19 Yaskawa Electric Corp Operating speed determining method for robot
CN102862666A (en) * 2011-07-08 2013-01-09 中国科学院沈阳自动化研究所 Underwater robot state and parameter joint estimation method based on adaptive unscented Kalman filtering (UKF)
CN106156790A (en) * 2016-06-08 2016-11-23 北京工业大学 Distributed collaborative algorithm and data fusion mechanism applied to sensor networks
KR20170068234A (en) * 2015-12-09 2017-06-19 세종대학교산학협력단 Bias correcting apparatus for yaw angle estimation of mobile robots and method thereof
CN106956282A (en) * 2017-05-18 2017-07-18 广州视源电子科技股份有限公司 Angular acceleration determination method, device, robot and storage medium
CN108128308A (en) * 2017-12-27 2018-06-08 长沙理工大学 Vehicle state estimation system and method for a distributed-drive electric vehicle
CN108621161A (en) * 2018-05-08 2018-10-09 中国人民解放军国防科技大学 Method for estimating body state of foot type robot based on multi-sensor information fusion
CN109813307A (en) * 2019-02-26 2019-05-28 大连海事大学 Navigation system for an unmanned boat based on information fusion and design method thereof
CN110361003A (en) * 2018-04-09 2019-10-22 中南大学 Information fusion method, device, computer equipment and computer readable storage medium
CN110422175A (en) * 2019-07-31 2019-11-08 上海智驾汽车科技有限公司 Vehicle state estimation method and device, electronic equipment, storage medium, vehicle

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101234797B1 (en) * 2006-04-04 2013-02-20 삼성전자주식회사 Robot and method for localization of the robot using calculated covariance
KR100877071B1 (en) * 2007-07-18 2009-01-07 삼성전자주식회사 Method and apparatus of pose estimation in a mobile robot based on particle filter
KR101409990B1 (en) * 2007-09-14 2014-06-23 삼성전자주식회사 Apparatus and method for calculating position of robot
KR101038581B1 (en) * 2008-10-31 2011-06-03 한국전력공사 Method, system, and operation method for providing surveillance to power plant facilities using track-type mobile robot system
KR101086364B1 (en) * 2009-03-20 2011-11-23 삼성중공업 주식회사 Robot parameter estimation method using Kalman filter
JP5803155B2 (en) * 2011-03-04 2015-11-04 セイコーエプソン株式会社 Robot position detection device and robot system
KR101390776B1 (en) * 2013-03-14 2014-04-30 인하대학교 산학협력단 Localization device, method and robot using fuzzy extended kalman filter algorithm
KR102009481B1 (en) * 2013-12-26 2019-08-09 한화디펜스 주식회사 Apparatus and method for controllling travel of vehicle
US9517561B2 (en) * 2014-08-25 2016-12-13 Google Inc. Natural pitch and roll
JP6541026B2 (en) * 2015-05-13 2019-07-10 株式会社Ihi Apparatus and method for updating state data
JP6770393B2 (en) * 2016-10-04 2020-10-14 株式会社豊田中央研究所 Tracking device and program
KR20180068102A (en) * 2016-12-13 2018-06-21 주식회사 큐엔티 Method and server for providing robot fault monitoring prognostic service
CN107644441A (en) * 2017-08-30 2018-01-30 南京大学 Discrete foothold selection method for a multi-legged robot in complex road conditions based on three-dimensional imaging
CN107748562A (en) * 2017-09-30 2018-03-02 湖南应用技术学院 Comprehensive service robot
CN109959381B (en) * 2017-12-22 2021-06-04 深圳市优必选科技有限公司 Positioning method, positioning device, robot and computer readable storage medium
CN108710295B (en) * 2018-04-20 2021-06-18 浙江工业大学 Robot following method based on progressive cubature information filtering
CN108896049A (en) * 2018-06-01 2018-11-27 重庆锐纳达自动化技术有限公司 Indoor motion positioning method for a robot
CN108645415A (en) * 2018-08-03 2018-10-12 上海海事大学 Ship trajectory prediction method
CN109443356A (en) * 2019-01-07 2019-03-08 大连海事大学 Unmanned boat position and velocity estimation structure with measurement noise and design method thereof
CN110861123A (en) * 2019-11-14 2020-03-06 华南智能机器人创新研究院 Method and device for visually monitoring and evaluating running state of robot
CN111044053B (en) * 2019-12-31 2022-04-01 三一重工股份有限公司 Navigation method and device of single-steering-wheel unmanned vehicle and single-steering-wheel unmanned vehicle
CN111136660B (en) * 2020-02-19 2021-08-03 清华大学深圳国际研究生院 Robot pose positioning method and system
CN114260890B (en) * 2020-08-26 2023-11-03 北京市商汤科技开发有限公司 Method and device for determining state of robot, robot and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Mobile robot trajectory tracking controller based on adaptive EKF; Yu Chen; Feng Xianying; Modular Machine Tool & Automatic Manufacturing Technique (03); full text *

Also Published As

Publication number Publication date
CN112025706A (en) 2020-12-04
KR20220084434A (en) 2022-06-21
WO2022041797A1 (en) 2022-03-03
KR102412066B1 (en) 2022-06-22
KR20220084435A (en) 2022-06-21
JP2022550231A (en) 2022-12-01
CN112025706B (en) 2022-01-04
CN114260890A (en) 2022-04-01
CN114131604A (en) 2022-03-04
KR20220027832A (en) 2022-03-08
CN114131604B (en) 2023-11-03

Similar Documents

Publication Publication Date Title
US11783593B2 (en) Monocular depth supervision from 3D bounding boxes
US11741720B2 (en) System and method for tracking objects using expanded bounding box factors
US11403854B2 (en) Operating assistance method, control unit, operating assistance system and working device
CN114260890B (en) Method and device for determining state of robot, robot and storage medium
CN116580373B (en) Lane line optimization method and device, electronic equipment and storage medium
EP3441941A1 (en) Camera motion estimation device, camera motion estimation method, and computer-readable medium
WO2020003764A1 (en) Image processing device, moving apparatus, method, and program
CN113325415B (en) Fusion method and system of vehicle radar data and camera data
CN115115530A (en) Image deblurring method, device, terminal equipment and medium
US11594040B2 (en) Multiple resolution deep neural networks for vehicle autonomous driving systems
CN117475397B (en) Target annotation data acquisition method, medium and device based on multi-mode sensor
CN113034595B (en) Method for visual localization and related device, apparatus, storage medium
US11682140B1 (en) Methods and apparatus for calibrating stereo cameras using a time-of-flight sensor
US11983936B2 (en) Data collection device, vehicle control device, data collection system, data collection method, and storage medium
RU2782521C1 Method and system for planning the lateral trajectory of an automated car lane change, car, and data carrier
WO2021230314A1 (en) Measurement system, vehicle, measurement device, measurement program, and measurement method
WO2021199286A1 (en) Object tracking device, object tracking method, and recording medium
JP2021190025A (en) Information processing device
CN116793345A (en) Posture estimation method and device of self-mobile equipment and readable storage medium
CN115880763A (en) Operator takeover prediction
KR20210098875A (en) Method for tracking surrounding vehicles through shape model-based LIDAR/RADAR information fusion
CN117911979A (en) Data synchronization method, device, equipment and storage medium
CN115683093A (en) Robot, robot skid processing method, device and readable storage medium
CN116051767A (en) Three-dimensional map construction method and related equipment
JP2021033758A (en) Moving state estimation system, and vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant