CN112025706B - Method and device for determining state of robot, robot and storage medium - Google Patents


Info

Publication number
CN112025706B
CN112025706B (application CN202010872662.3A)
Authority
CN
China
Prior art keywords
robot
state
noise
information
actual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010872662.3A
Other languages
Chinese (zh)
Other versions
CN112025706A (en)
Inventor
姚达琛
何悦
李�诚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd
Priority to CN202010872662.3A (CN112025706B)
Priority to CN202111515984.3A (CN114260890B)
Priority to CN202111475203.2A (CN114131604B)
Publication of CN112025706A
Priority to KR1020217039198A (KR102412066B1)
Priority to KR1020227019722A (KR20220084434A)
Priority to PCT/CN2021/088224 (WO2022041797A1)
Priority to JP2021566210A (JP2022550231A)
Priority to KR1020227019723A (KR20220084435A)
Application granted
Publication of CN112025706B
Legal status: Active
Anticipated expiration

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J5/007 Manipulators mounted on wheels or on carriages mounted on wheels
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/1628 Programme controls characterised by the control loop
    • B25J9/1653 Programme controls characterised by the control loop parameters identification, estimation, stiffness, accuracy, error analysis
    • B25J9/1664 Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J9/1674 Programme controls characterised by safety, monitoring, diagnostic
    • B25J9/1692 Calibration of manipulator
    • B25J13/088 Controls for manipulators by means of sensing devices, e.g. viewing or touching devices, with position, velocity or acceleration sensors
    • B25J19/04 Viewing devices

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

The application discloses a method and a device for determining a state of a robot, a robot, and a storage medium. The state determination method for the robot includes the following steps: acquiring reference information of the robot, where the reference information includes at least one of: measurement state information of the robot at a plurality of moments, and actual travel information of the robot at the current moment; determining state noise of the robot based on the reference information; and obtaining actual state information of the robot at the current moment by using the state noise. This scheme can improve the accuracy of robot state determination.

Description

Method and device for determining state of robot, robot and storage medium
Technical Field
The present disclosure relates to the field of robot technologies, and in particular, to a method and an apparatus for determining a state of a robot, a robot, and a storage medium.
Background
With the development of electronic and computer technology, robots are being applied to express delivery, service guidance, hotel meal delivery and the like, and are gradually receiving wide attention.
However, while travelling a robot is inevitably subject to interference, for example the white noise that is ubiquitous in free space, or even deliberate interference signals. Such interference affects the robot's normal travel; in severe cases the robot may even run away or skid. In view of this, how to improve the accuracy of determining the robot state is an urgent problem to be solved.
Disclosure of Invention
The application provides a robot state determination method and device, a robot and a storage medium.
A first aspect of the present application provides a state determination method for a robot, including the following steps: acquiring reference information of the robot, where the reference information includes at least one of: measurement state information of the robot at a plurality of moments, and actual travel information of the robot at the current moment; determining state noise of the robot based on the reference information; and obtaining actual state information of the robot at the current moment by using the state noise.
Therefore, reference information of the robot is acquired, the reference information including at least one of measurement state information of the robot at a plurality of moments and actual travel information of the robot at the current moment; state noise of the robot is determined based on the reference information; and the state noise is then used to obtain the actual state information of the robot at the current moment. As a result, the state can be determined without simulating a large number of particles, which improves the speed of state determination. In addition, because the state noise is determined from the measured state information at a plurality of moments and/or the actual travel information at the current moment, the noise can be gauged from the perspective of external measurement of the robot and/or of the robot's own state, so that the state noise is closer to the actual situation, which improves the accuracy of the subsequently determined actual state information.
Wherein determining the state noise of the robot based on the reference information comprises: determining the measurement interference noise of the robot by using the measurement state information corresponding to the current moment and a plurality of moments before the current moment; and/or determining the state transition noise of the robot by using the actual running information at the current moment.
Therefore, the measuring interference noise of the robot is determined by using the measuring state information corresponding to the current moment and a plurality of moments before the current moment, so that the noise of the robot can be determined from an external measuring angle, and the external interference of the robot in the driving process can be measured; the state transition noise of the robot is determined by using the actual driving information at the current moment, so that the noise of the robot can be determined from the perspective of the self state of the robot, and the internal interference of the robot in the driving process can be measured.
Wherein the step of determining the measurement interference noise of the robot by using the measurement state information corresponding to the current moment and a plurality of moments before the current moment includes the following steps: obtaining the degree of dispersion of the measurement state information at the current moment and the plurality of moments before it; and determining the measurement interference noise by using the degree of dispersion.
Therefore, by obtaining the degree of dispersion of the measurement state information at the current moment and a plurality of moments before it, and determining the measurement interference noise from that degree of dispersion, the external interference suffered by the robot while travelling can be measured accurately.
The degree of dispersion of the measurement state information at the current moment and a plurality of moments before it is the standard deviation of that measurement state information; and/or determining the measurement interference noise by using the degree of dispersion includes: taking the product of the degree of dispersion and a preset gain parameter as the measurement interference noise.
Therefore, defining the degree of dispersion as the standard deviation of the measurement state information at the current moment and the plurality of moments before it reduces the complexity and amount of computation needed to determine it, which helps to improve the speed of state determination; and taking the product of the degree of dispersion and a preset gain parameter as the measurement interference noise improves the accuracy of the measurement interference noise, and thus the accuracy of state determination.
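As an illustration of the two preceding paragraphs, the following sketch computes the measurement interference noise as the standard deviation of a sliding window of recent measurement state vectors multiplied by a preset gain parameter. The window contents, the gain value, and the function name are illustrative assumptions, not values fixed by this disclosure:

```python
import numpy as np

def measurement_interference_noise(window, gain=1.0):
    """Measurement interference noise estimated from the degree of dispersion
    (here, the standard deviation) of the measurement state information at the
    current moment and a plurality of moments before it, scaled by a preset gain."""
    window = np.asarray(window, dtype=float)   # shape (n_moments, state_dim)
    dispersion = window.std(axis=0)            # per-dimension standard deviation
    return gain * dispersion                   # product with the preset gain parameter

# Example window: (position, speed) measurements over five moments.
samples = [[0.0, 1.0], [0.1, 1.1], [0.05, 0.9], [0.2, 1.0], [0.15, 1.05]]
noise = measurement_interference_noise(samples, gain=2.0)
```

A larger window smooths the estimate at the cost of reacting more slowly to a sudden burst of interference.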
The actual running information comprises running angle information, motor driving information and running speed information of the robot; determining the state transition noise of the robot using the actual travel information at the current time includes: obtaining state transition noise of the robot by using at least one of the first state noise and the second state noise; wherein the first state noise is determined using the travel angle information and the travel speed information, and the second state noise is determined using the motor drive information and the travel speed information.
Therefore, the actual travel information is set to include the travel angle information, the motor drive information, and the travel speed information of the robot, so that the state transition noise of the robot is obtained using at least one of the first state noise determined using the travel angle information and the travel speed information and the second state noise determined using the motor drive information and the travel speed information, which can be advantageous for improving the accuracy of the state transition noise.
The robot comprises driving wheels and steering wheels, wherein the driving wheels are used for driving the robot to run, the steering wheels are used for changing the running direction of the robot, the running speed information comprises the actual speed difference between the driving wheels of the robot, and the running angle information comprises the actual steering angle of the steering wheels of the robot; before obtaining the state transition noise of the robot by using at least one of the first state noise and the second state noise, the method further includes: mapping the actual steering angle by using a first mapping relation between the speed difference and the steering angle to obtain a theoretical speed difference corresponding to the actual steering angle; determining a first state noise using a difference between the actual speed difference and the theoretical speed difference; and/or the robot comprises a driving wheel, the driving wheel is used for driving the robot to run, the running speed information comprises the actual average speed of the driving wheel of the robot, and the motor driving information comprises the actual average driving signal value of the motor of the robot; before obtaining the state transition noise of the robot by using at least one of the first state noise and the second state noise, the method further includes: mapping the actual average driving signal value by using a second mapping relation between the average speed and the average driving signal value to obtain a theoretical average speed corresponding to the actual average driving signal value; the second state noise is determined using the difference between the actual average velocity and the theoretical average velocity.
Therefore, the robot comprises a driving wheel and a steering wheel, the driving wheel is used for driving the robot to run, the steering wheel is used for changing the running direction of the robot, the running speed information is set to comprise an actual speed difference between the driving wheels of the robot, the running angle information is set to comprise an actual steering angle of the steering wheel of the robot, so that the actual steering angle is mapped by using a first mapping relation between the speed difference and the steering angle, a theoretical speed difference corresponding to the actual steering angle is obtained, and a first state noise is determined by using the difference between the actual speed difference and the theoretical speed difference, so that the first state noise of the robot can be determined from the angle of the steering wheel of the robot; the robot includes a driving wheel for driving the robot to travel, and sets travel speed information to include an actual average speed of the driving wheel of the robot, and sets motor driving information to include an actual average driving signal value of a motor of the robot, so that the actual average driving signal value is mapped using a second mapping relationship between the average speed and the average driving signal value, a theoretical average speed corresponding to the actual average driving signal value is obtained, and a second state noise is determined using a difference between the actual average speed and the theoretical average speed, so that the second state noise of the robot can be determined from the perspective of the driving wheel of the robot.
Wherein determining the first state noise using the difference between the actual speed difference and the theoretical speed difference comprises: taking the square of the difference between the actual speed difference and the theoretical speed difference as the first state noise; determining a second state noise using a difference between the actual average velocity and the theoretical average velocity, comprising: the square of the difference between the actual average velocity and the theoretical average velocity is taken as the second-state noise.
Therefore, the square of the difference between the actual speed difference and the theoretical speed difference is taken as the first state noise, and the square of the difference between the actual average speed and the theoretical average speed is taken as the second state noise, so that the complexity and the calculation amount of the calculation of the first state noise and the second state noise can be reduced, and the speed of state determination can be favorably improved.
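The mapping-and-squared-difference procedure described in the preceding paragraphs can be sketched as follows. The linear calibration functions standing in for the first and second mapping relations are purely hypothetical (in practice they are robot-specific), and summing the two terms into a single state transition noise is only one possible combination:

```python
def first_state_noise(actual_speed_diff, actual_steering_angle, angle_to_speed_diff):
    """Square of the gap between the actual speed difference of the driving wheels
    and the theoretical difference implied by the steering wheel's actual angle."""
    theoretical_diff = angle_to_speed_diff(actual_steering_angle)
    return (actual_speed_diff - theoretical_diff) ** 2

def second_state_noise(actual_avg_speed, actual_avg_drive_signal, drive_to_speed):
    """Square of the gap between the actual average driving-wheel speed and the
    theoretical speed implied by the motor's actual average drive signal."""
    theoretical_speed = drive_to_speed(actual_avg_drive_signal)
    return (actual_avg_speed - theoretical_speed) ** 2

# Hypothetical linear calibrations (robot-specific in practice).
angle_to_speed_diff = lambda angle: 0.5 * angle   # first mapping: angle -> speed difference
drive_to_speed = lambda signal: 0.01 * signal     # second mapping: drive signal -> speed

q1 = first_state_noise(0.30, 0.5, angle_to_speed_diff)
q2 = second_state_noise(1.1, 100.0, drive_to_speed)
state_transition_noise = q1 + q2
```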
Wherein, the obtaining of the actual state information of the robot corresponding to the current moment by using the state noise comprises: and processing the actual state information of the robot at the previous moment and the measured state information of the robot at the current moment by using the state noise to obtain the actual state information of the robot at the current moment.
Therefore, by processing the measurement state information corresponding to the current time and the actual state information corresponding to the previous time by using the state noise, the robot can balance the measurement state information at the current time and the actual state information at the previous time, so that the determined actual state information is corrected relative to the measurement state information, and the accuracy of determining the state of the robot can be improved.
The method for processing the actual state information of the robot at the previous moment and the measurement state information of the robot at the current moment by using the state noise comprises the following steps: and determining a filter gain based on the state noise, predicting the actual state information of the robot corresponding to the previous moment and the actual driving information of the robot corresponding to the previous moment by using Kalman filtering of the filter gain to obtain predicted state information corresponding to the current moment, and fusing the predicted state information of the current moment and the measured state information of the current moment to obtain the actual state information of the robot corresponding to the current moment.
Therefore, the filter gain is determined based on the state noise, the actual state information of the robot corresponding to the previous time and the actual driving information of the robot corresponding to the previous time are predicted by using the Kalman filtering of the filter gain to obtain the predicted state information corresponding to the current time, and the predicted state information of the current time and the measured state information of the current time are fused, so that the robustness of an external signal can be enhanced, and the actual state information corresponding to the current time can be accurately determined.
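A minimal sketch of the Kalman step described above, in which the state transition noise Q and the measurement interference noise R are supplied per time step so that the filter gain adapts to the currently estimated noise. The linear motion and observation models and all numeric values are illustrative assumptions:

```python
import numpy as np

def kalman_step(x_prev, P_prev, u, z, F, B, H, Q, R):
    """Predict from the previous actual state and travel (control) input, then
    fuse the prediction with the current measurement state information using a
    gain determined by the supplied state noise."""
    x_pred = F @ x_prev + B @ u                  # predicted state information
    P_pred = F @ P_prev @ F.T + Q                # state transition noise enters here
    S = H @ P_pred @ H.T + R                     # measurement interference noise enters here
    K = P_pred @ H.T @ np.linalg.inv(S)          # filter gain determined by the noise
    x_new = x_pred + K @ (z - H @ x_pred)        # fuse prediction and measurement
    P_new = (np.eye(len(x_prev)) - K @ H) @ P_pred
    return x_new, P_new

# Example: 1-D position/velocity state with time step dt = 1.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
B = np.array([[0.0], [dt]])             # control input: commanded acceleration
H = np.array([[1.0, 0.0]])              # only position is measured
Q = 0.01 * np.eye(2)                    # state transition noise, estimated online
R = np.array([[0.25]])                  # measurement interference noise, estimated online
x1, P1 = kalman_step(np.array([0.0, 1.0]), np.eye(2),
                     np.array([0.0]), np.array([1.2]), F, B, H, Q, R)
```

When the estimated measurement interference noise R grows, the gain K shrinks and the filter leans on the prediction; when Q grows, it leans on the measurement instead.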
Wherein after determining the state noise of the robot based on the reference information, the method further comprises: and if the state noise does not meet the preset noise condition, performing preset prompt.
Therefore, when the state noise does not meet the preset noise condition, the preset prompt is carried out, so that the user can sense the abnormal state noise, and the user experience is improved.
The state noise includes: measurement interference noise obtained by using the measurement state information at the current moment and a plurality of moments before the current moment, and the preset noise condition includes: the measurement interference noise is less than a first noise threshold; if the state noise does not meet the preset noise condition, performing the preset prompt includes: if the measurement interference noise does not meet the preset noise condition, outputting a first early-warning message to prompt that the state measurement is being interfered with. And/or, the state noise includes: state transition noise obtained by using the actual travel information at the current moment, and the preset noise condition includes: the state transition noise is less than a second noise threshold; if the state noise does not meet the preset noise condition, performing the preset prompt includes: if the state transition noise does not meet the preset noise condition, outputting a second early-warning message to prompt that the robot is at risk of vehicle body slip.
Therefore, when the measurement interference noise does not meet the preset noise condition, a first early-warning message is output to prompt that the state measurement is being interfered with, so that the user can perceive the interference in time, which improves the user experience; when the state transition noise does not meet the preset noise condition, a second early-warning message is output to prompt that the robot is at risk of vehicle body slip, so that the user can perceive that risk in time, which likewise improves the user experience.
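The early-warning logic of the preceding paragraphs can be sketched as a simple threshold check; the threshold values and message wording are illustrative assumptions:

```python
def check_state_noise(measurement_noise, transition_noise,
                      first_threshold, second_threshold):
    """Return the preset prompts triggered when a noise fails its condition
    (each condition being that the noise is less than its threshold)."""
    prompts = []
    if not measurement_noise < first_threshold:
        prompts.append("First warning: state measurement is being interfered with")
    if not transition_noise < second_threshold:
        prompts.append("Second warning: robot is at risk of body slip")
    return prompts

# Example: measurement interference noise exceeds its threshold.
prompts = check_state_noise(0.8, 0.02, first_threshold=0.5, second_threshold=0.1)
```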
Obtaining reference information of the robot includes: capturing images of the surrounding environment of the robot to obtain environment image data corresponding to the current moment; and determining the measurement state information of the robot at the current moment based on the environment image data of the current moment. The measurement state information and the actual state information each include at least one of: the position of the robot, the attitude of the robot, and the speed of the robot.
Therefore, images of the environment around the robot are captured to obtain the environment image data corresponding to the current moment, the measurement state information of the robot at the current moment is determined based on that environment image data, and the measurement state information and the actual state information are each set to include at least one of the position of the robot, the attitude of the robot, and the speed of the robot; the measurement state information at the current moment can thus be obtained quickly, which helps to improve the speed of determining the state of the robot.
A second aspect of the present application provides a state determination device for a robot, including: a measurement state acquisition module, a state noise determination module, and an actual state acquisition module. The measurement state acquisition module is configured to acquire reference information of the robot, where the reference information includes at least one of: measurement state information of the robot at a plurality of moments, and actual travel information of the robot at the current moment. The state noise determination module is configured to determine the state noise of the robot based on the reference information. The actual state acquisition module is configured to obtain actual state information of the robot at the current moment by using the state noise.
A third aspect of the present application provides a robot, comprising a robot body, and a memory and a processor arranged on the robot body, wherein the memory and the processor are coupled to each other, and the processor is configured to execute program instructions stored in the memory, so as to implement the state determination method in the first aspect.
A fourth aspect of the present application provides a computer-readable storage medium having stored thereon program instructions that, when executed by a processor, implement the state determination method of the first aspect described above.
According to the above scheme, reference information of the robot is acquired, the reference information including at least one of: measurement state information of the robot at a plurality of moments, and actual travel information of the robot at the current moment; state noise of the robot is determined based on the reference information; and the state noise is used to obtain the actual state information of the robot at the current moment. The state can therefore be determined without simulating a large number of particles, which improves the speed of state determination. In addition, because the state noise is determined from the plurality of acquired measurement state information items and/or the current actual travel information, the noise can be gauged from the perspective of external measurement of the robot and/or of the robot's own state, so the state noise is closer to the actual situation, improving the accuracy of the subsequently determined actual state information.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of a method for determining a state of a robot according to the present application;
FIG. 2 is a schematic flow chart of determining actual state information of a robot by using Kalman filtering;
FIG. 3 is a block diagram of an embodiment of a robot condition determining apparatus according to the present application;
FIG. 4 is a schematic diagram of a frame of an embodiment of the robot of the present application;
FIG. 5 is a block diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The following describes in detail the embodiments of the present application with reference to the drawings attached hereto.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, interfaces, techniques, etc. in order to provide a thorough understanding of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship. Further, the term "plurality" herein means two or more than two.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating a method for determining a state of a robot according to an embodiment of the present disclosure. Specifically, the method may include the steps of:
step S11: and acquiring reference information of the robot.
In the embodiment of the present disclosure, the reference information of the robot may include at least one of: the robot corresponds to the measured state information of a plurality of moments and the actual running information of the robot corresponding to the current moment.
It should be noted that the state of the robot may change at different times, for example, the robot moves at the current time relative to the previous time, and of course, in other application scenarios, the state of the robot may not change, and may be specifically determined according to the actual operation condition of the robot. In this regard, the robot needs to determine its actual state at different times in order to perform subsequent operations.
In the embodiment of the present disclosure, in order to determine the actual state information of the robot corresponding to the current time, the measurement state information corresponding to the current time may be obtained first, so that based on the steps in the embodiment of the present disclosure, the actual state information corresponding to the current time is obtained by using the measurement state information corresponding to the current time. It is to be understood that the information described herein for a time is not necessarily obtained at that time, and may be obtained around that time. For example, the measurement state information corresponding to the current time may be acquired at the current time, and in consideration of the communication delay, the measurement state information corresponding to the current time may be acquired at a time slightly before the current time (for example, 0.5 seconds before, 1 second before, etc.), which is not limited herein.
In a disclosed implementation scenario, the measurement state information is obtained by performing state measurement on the robot. In another disclosed implementation scenario, images of the surrounding environment of the robot can be captured to obtain environment image data corresponding to the current moment, and the measurement state information of the robot at the current moment is determined based on the environment image data at the current moment. For example, the images of the environment around the robot can be captured by a camera device installed in the robot's driving environment; alternatively, the surrounding environment may be captured by an imaging device mounted on the robot, which is not limited herein. Specifically, in order to describe the state of the robot accurately, the measurement state information and the actual state information of the robot may each include at least one of: the position of the robot, the attitude of the robot, and the speed of the robot. For example, the position of the robot may include the position coordinates (e.g., latitude and longitude) at which the robot is located, and the attitude of the robot may include the robot's travelling state (e.g., acceleration). Taking the case in which the measurement state information includes the position of the robot and the speed of the robot as an example, for convenience of description, the measurement state information corresponding to the current time may be represented as:
z_k = (p, v)
In the above formula, z_k denotes the measurement state information of the robot corresponding to the current time k, p denotes the position of the robot in the measurement state information, and v denotes the speed of the robot in the measurement state information.
Similarly, taking the example that the actual state information includes the position of the robot and the velocity of the robot, for convenience of description, the actual state information corresponding to the current time may be expressed as:
x̂_k = [p', v']^T

in the above formula, x̂_k denotes the actual state information of the robot corresponding to the current time k, p' denotes the position of the robot in the actual state information, and v' denotes the speed of the robot in the actual state information.
In addition, the measurement state information of the robot corresponding to several times may specifically include measurement state information of the robot corresponding to the current time and several times before the current time. Taking the current time as the time k as an example, a plurality of times before the current time can be specifically expressed as n times before the time k, and the value of n can be set according to the actual application needs. For example, n may be 5, 10, 15, etc., and is not limited herein.
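As an illustrative sketch only (not part of the disclosure), the measurement state information z_{k-n:k} for the current time and the n times before it can be held in a fixed-length sliding window; the variable names and values below are assumptions:

```python
from collections import deque

import numpy as np

n = 5  # number of times before the current time (e.g. 5, 10, 15)
window = deque(maxlen=n + 1)  # holds z_{k-n:k}: current time plus n earlier times

# Each measurement state vector z_k stacks position p and speed v (made-up values).
for k in range(10):
    z_k = np.array([0.1 * k, 0.5])  # [p, v]
    window.append(z_k)

# The window never grows beyond n + 1 entries and always ends with the
# measurement state information for the current time k.
```

With `maxlen` set, the oldest measurement is discarded automatically as each new one arrives, which matches the "current time and n times before it" bookkeeping described above.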
In another disclosed implementation scenario, the actual travel information may include travel angle information, motor driving information, and travel speed information of the robot. Specifically, the travel angle information can be acquired from the steering engine control record of the robot: the robot may include a steering wheel, and the steering engine of the robot is used to drive the steering wheel to turn by a certain angle. The motor driving information can be acquired from the motor control record of the robot: the robot may further include driving wheels, and the motor of the robot is used to drive the driving wheels to move at a certain speed. The travel speed information can be acquired from the encoder of the robot.
Step S12: based on the reference information, a state noise of the robot is determined.
The state noise of the robot represents noise that affects the state of the robot during travel, for example, state transition noise generated when the robot transitions from one state to another; alternatively, measurement interference noise generated while the state information of the robot is being measured, which is not limited herein.
In one disclosed implementation scenario, the measurement interference noise of the robot may be determined using the measurement state information corresponding to the current time and several times before the current time. The several times before the current time may refer to the foregoing description, and are not described herein again. In this way, the noise of the robot can be determined from an external measurement angle, so that the external interference to the robot during travel can be measured.
In a specific implementation scenario disclosed herein, the dispersion degree of the measurement state information at the current time and several times before the current time can be obtained, and the measurement interference noise is determined using the dispersion degree. Specifically, the dispersion degree may be the standard deviation of the measurement state information at the current time and several times before the current time, or it may be the variance of that measurement state information, which is not limited herein. In this way, the complexity and the calculation amount for determining the dispersion degree can be reduced, and the speed of state determination can be improved.
In another specific disclosed implementation scenario, the product of the degree of dispersion and a preset gain parameter may also be used as the measured interference noise. The preset gain parameter may be set in an actual situation, and is not limited herein. Specifically, the measurement interference noise may be expressed as:
R = K_R σ(z_{k-n:k})

in the above formula, R denotes the measurement interference noise, z_{k-n:k} denotes the measurement state information corresponding to the current time k and the n times before it, σ(z_{k-n:k}) denotes the standard deviation of that measurement state information, and K_R denotes the preset gain parameter. Specifically, the preset gain parameter may be a value greater than 0, such as 0.5, 1, 1.5, etc., which is not limited herein.
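A minimal sketch of this formula, assuming a window of hypothetical measurement state vectors [p, v] and an assumed gain K_R = 0.5:

```python
import numpy as np

K_R = 0.5  # preset gain parameter, a value greater than 0 (assumption)

# Hypothetical measurement state vectors [p, v] for the current time k and
# the n = 4 times before it.
z_window = np.array([
    [0.0, 0.50],
    [0.1, 0.52],
    [0.2, 0.49],
    [0.3, 0.51],
    [0.4, 0.50],
])

# R = K_R * sigma(z_{k-n:k}): per-component standard deviation of the
# window, scaled by the preset gain parameter.
R = K_R * np.std(z_window, axis=0)
```

Here R comes out as one value per state component; replacing `np.std` with `np.var` would give the variance-based dispersion degree mentioned above.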
In another disclosed implementation scenario, the actual travel information at the current time may be utilized to determine the state transition noise of the robot. Taking the current time as time k as an example, the state transition noise of the robot can be determined using the actual driving information at time k, so that the noise of the robot can be determined from the perspective of the robot's own state, and the internal interference to the robot during travel can be measured. Specifically, the state transition noise of the robot may be obtained as at least one of a first state noise obtained using the travel angle information and the travel speed information, and a second state noise determined using the motor drive information and the travel speed information.
In a specific implementation scenario, the first state noise of the robot may be determined by considering from a steering engine perspective and using the driving angle information and the driving speed information, so that the state transition noise of the robot is determined by using the first state noise. For example, it is possible to determine the first state noise of the robot using only the travel angle information and the travel speed information, and to take the first state noise as the state transition noise of the robot.
In another specific implementation scenario, from the perspective of the motor, the second state noise of the robot may be determined using the motor driving information and the traveling speed information, so that the state transition noise of the robot is determined using the second state noise. For example, the second state noise of the robot may be determined using only the motor drive information and the travel speed information, and the second state noise may be used as the state transition noise of the robot.
In a further specific implementation scenario, the first state noise of the robot can be determined by using the driving angle information and the driving speed information, and the second state noise of the robot can be determined by using the motor driving information and the driving speed information, so that the state transition noise of the robot can be obtained by using the first state noise and the second state noise, and the steering engine angle and the motor angle can be considered at the same time, which is beneficial to improving the accuracy of the state transition noise.
Specifically, when the state transition noise is obtained by using the first state noise and the second state noise, the state transition noise may be obtained by performing weighting processing on the first state noise and the second state noise. In addition, the weights corresponding to the first state noise and the second state noise may be set according to actual conditions, for example, when the first state noise is important relative to the second state noise, the weight corresponding to the first state noise may be set to be greater than the weight of the second state noise; alternatively, when the second state noise is important relative to the first state noise, the weight corresponding to the second state noise may be set to be greater than the weight corresponding to the first state noise, and in addition, the weight corresponding to the first state noise may also be set to be equal to the weight corresponding to the second state noise, for example, the weight corresponding to the first state noise is set to be 0.5, and the weight corresponding to the second state noise is also set to be 0.5.
Specifically, the robot may include driving wheels and a steering wheel: the driving wheels are used to drive the robot to travel, and the steering wheel is used to change the traveling direction of the robot. The travel speed information may then include the actual speed difference between the driving wheels of the robot; for example, if the robot includes two driving wheels, the speed difference between the two driving wheels is the actual speed difference, which for convenience of description may be denoted e_w. The travel angle information may include the actual steering angle of the steering wheel of the robot, which for convenience of description may be denoted α. A first mapping relationship between the speed difference and the steering angle (which for convenience of description may be denoted f_1) may be used to map the actual steering angle α to obtain the theoretical speed difference corresponding to the actual steering angle α (which for convenience of description may be denoted f_1(α)), so that the difference between the actual speed difference e_w and the theoretical speed difference f_1(α) can be used to determine the first state noise; for example, the square of the difference between the actual speed difference e_w and the theoretical speed difference f_1(α) may be taken as the first state noise. The first mapping relationship may be obtained by performing statistical analysis on a plurality of pairs of speed differences and steering angles acquired in advance; for example, during normal running of the robot, M pairs of speed differences and steering angles are acquired, and the acquired M pairs are fitted to obtain the first mapping relationship between the speed difference and the steering angle. The specific value of M may be set according to the actual situation, and is not limited herein.
Specifically, the travel speed information may also include the actual average speed of the driving wheels, i.e., the average of the speeds of the driving wheels of the robot. For example, if the robot includes two driving wheels, the average of the speeds of the two driving wheels is the actual average speed, which for convenience of description may be denoted v_w. The motor driving information may include the actual average driving signal value of the robot motors, i.e., the average signal value of the motors corresponding to the driving wheels of the robot; for example, if the robot includes two driving wheels and the driving signal is a pulse width modulation signal, the actual average driving signal value may be the average of the pulse width modulation signals of the motors corresponding to the two driving wheels, which for convenience of description may be denoted p_w. A second mapping relationship between the average speed and the average driving signal value (which for convenience of description may be denoted f_2) may be used to map the actual average driving signal value to obtain the theoretical average speed corresponding to the actual average driving signal value (which for convenience of description may be denoted f_2(p_w)), so that the difference between the actual average speed and the theoretical average speed can be used to determine the second state noise; for example, the square of the difference between the actual average speed v_w and the theoretical average speed f_2(p_w) may be taken as the second state noise. The second mapping relationship may be obtained by performing statistical analysis on a plurality of pairs of average speeds and average driving signal values acquired in advance.
For example, in the normal running process of the robot, N pairs of average speed and average driving signal value are collected, and the N pairs of average speed and average driving signal value are fitted to obtain a second mapping relationship between the average speed and the average driving signal value, and a specific value of N may be set according to an actual situation, which is not limited herein.
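One plausible way to obtain such mapping relationships is a least-squares fit over the pre-collected pairs. The linear form of f_1 and f_2, the synthetic sample data, and the use of `np.polyfit` below are assumptions for illustration, not the disclosure's prescribed fitting method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic collected pairs: M = 40 steering angles (rad) and the speed
# differences (m/s) observed at those angles during normal running.
alpha_samples = np.linspace(-0.5, 0.5, 40)                   # steering angles
e_w_samples = 2.0 * alpha_samples + rng.normal(0, 0.01, 40)  # speed differences

# Fit the first mapping f_1: steering angle -> theoretical speed difference.
coeffs1 = np.polyfit(alpha_samples, e_w_samples, deg=1)
f1 = np.poly1d(coeffs1)

# Likewise the second mapping f_2: average drive signal value -> average speed.
p_w_samples = np.linspace(0.0, 1.0, 40)                      # PWM duty values
v_w_samples = 3.0 * p_w_samples + rng.normal(0, 0.01, 40)    # average speeds
coeffs2 = np.polyfit(p_w_samples, v_w_samples, deg=1)
f2 = np.poly1d(coeffs2)
```

A higher polynomial degree (or any other regression) could equally serve as the "statistical analysis" the text mentions; the fit degree here is simply a convenient assumption.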
Through the above steps, the state transition noise of the robot can be obtained, specifically, it can be expressed as:
Q = k_1 (f_1(α) - e_w)^2 + k_2 (f_2(p_w) - v_w)^2

in the above formula, Q denotes the state transition noise of the robot, k_1 denotes the weight corresponding to the first state noise, k_2 denotes the weight corresponding to the second state noise, (f_1(α) - e_w)^2 denotes the first state noise, (f_2(p_w) - v_w)^2 denotes the second state noise, e_w denotes the actual speed difference, f_1 denotes the first mapping relationship, α denotes the actual steering angle, v_w denotes the actual average speed, f_2 denotes the second mapping relationship, and p_w denotes the average driving signal value.
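The formula above can be sketched as follows; the mapping functions f_1 and f_2, the weights, and all numeric values are illustrative assumptions:

```python
# Assumed linear mappings: f1 maps steering angle -> theoretical speed
# difference; f2 maps average drive signal value -> theoretical average speed.
def f1(alpha):
    return 2.0 * alpha

def f2(p_w):
    return 3.0 * p_w

k1, k2 = 0.5, 0.5      # weights for the first and second state noise
alpha = 0.1            # actual steering angle of the steering wheel
e_w = 0.25             # actual speed difference between the driving wheels
p_w = 0.4              # actual average drive signal value (e.g. PWM duty)
v_w = 1.1              # actual average speed of the driving wheels

first_state_noise = (f1(alpha) - e_w) ** 2    # (f1(alpha) - e_w)^2
second_state_noise = (f2(p_w) - v_w) ** 2     # (f2(p_w) - v_w)^2

# Q = k1 * first state noise + k2 * second state noise
Q = k1 * first_state_noise + k2 * second_state_noise
```

Equal weights are used here; as the text notes, k_1 and k_2 can be shifted toward whichever state noise is more important.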
In one disclosed implementation scenario, both the state transition noise and the measurement interference noise may be obtained through the above steps. Alternatively, in actual application, the state transition noise may be obtained through the above steps while the measurement interference noise is set to a fixed value: for example, in an ideal situation the measurement interference noise may be set to 0, i.e., the state transition noise is directly used as the state noise of the robot; the measurement interference noise may also be set to a non-zero value such as 1, 2, or 3, or set to white noise, which is not limited herein. Alternatively, the measurement interference noise may be obtained through the above steps while the state transition noise is set to a fixed value: in an ideal situation the state transition noise may be set to 0, i.e., the measurement interference noise is directly used as the state noise of the robot; the state transition noise may also be set to a non-zero value such as 1, 2, or 3, or set to white noise, which is not limited herein.
Step S13: and obtaining the actual state information of the robot corresponding to the current moment by using the state noise.
Specifically, the actual state information of the robot at the previous time and the measured state information of the robot at the current time may be processed by using the state noise, so as to obtain the actual state information of the robot at the current time. For example, the actual state information of the robot corresponding to the current time may be obtained by processing the actual state information of the robot corresponding to the previous time and the measured state information of the robot corresponding to the current time by using kalman filtering in combination with state noise.
In one disclosed implementation scenario, a filter gain may be determined based on the state noise, and Kalman filtering with this filter gain may be used to process the actual state information of the robot corresponding to the previous time and the actual driving information of the robot corresponding to the previous time to obtain the predicted state information corresponding to the current time; the predicted state information of the current time and the measured state information of the current time are then fused to obtain the actual state information of the robot corresponding to the current time.
In a specific implementation scenario of the disclosure, please refer to fig. 2 in combination, where fig. 2 is a schematic flow chart of determining actual state information of a robot by using kalman filtering, specifically, the actual state information of the robot can be determined by using kalman filtering through the following steps:
step S21: and processing the posterior estimation covariance corresponding to the previous moment by using the state transition parameters and the state transition noise of the robot to obtain the prior estimation covariance corresponding to the current moment.
Specifically, taking the current time as the time k and the time k-1 before the current time as an example, it can be expressed as:
P_k^- = A P_{k-1} A^T + Q

in the above formula, P_k^- denotes the prior estimated covariance corresponding to the current time, and P_{k-1} denotes the posterior estimated covariance corresponding to the previous time, i.e., the covariance of the actual state information x̂_{k-1} of the previous time, which describes the uncertainty of x̂_{k-1}. It should be noted that the implementation scenario of the present disclosure describes obtaining the actual state information x̂_k corresponding to the current time, so the actual state information x̂_{k-1} of the previous time can be obtained by referring to the steps disclosed in this implementation scenario, and is not described in detail herein. The specific manner of obtaining the posterior estimated covariance may refer to the following description of the embodiments of the present disclosure, and is not described again here. Further, A denotes the state transition parameter of the robot in matrix form; the state transition parameter A is used to represent the motion model of the robot. For example, the state transition parameter A may indicate that the robot moves with a certain acceleration, or moves at a certain constant speed, and may be set by the user. A^T denotes the transpose of the state transition parameter, and Q denotes the state transition noise, whose specific calculation may refer to the related description above and is not repeated here.
Step S22: and processing the prior estimation covariance corresponding to the current moment by using the transformation parameters from the state information to the measurement information and the measurement interference noise to obtain the filtering gain corresponding to the current moment.
Specifically, taking the current time as the time k and the previous time of the current time as the time k-1 as an example, it can be expressed as:
K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}

in the above formula, K_k denotes the filter gain corresponding to the current time, and H denotes the transformation parameter in matrix form; the transformation parameter H is used to describe the transformation relationship between the actual state information and the measurement state information, for example, that the two are in a linear relationship. Specifically, the transformation parameter H may be set by the user, for example as an identity matrix, which is not limited herein. H^T denotes the transpose of the transformation parameter, and R denotes the measurement interference noise, whose specific calculation may refer to the related description above. P_k^- denotes the prior estimated covariance corresponding to the current time, i.e., the covariance of the predicted state information x̂_k^- corresponding to the current time, which describes the uncertainty of x̂_k^-; its specific calculation may refer to the related description above and is not repeated here.
Thus, the filter gain corresponding to the current time can be determined from the measurement interference noise and the state transition noise. Specifically, at least one of the measurement interference noise and the state transition noise is calculated through the foregoing steps: for example, only the measurement interference noise is calculated through the foregoing steps, or only the state transition noise is calculated through the foregoing steps, or both are calculated through the foregoing steps, which is not limited herein.
Step S23: and respectively processing the actual state information of the robot corresponding to the previous moment and the actual driving information of the robot corresponding to the previous moment by using the state transition parameters and the input state transition parameters of the robot to obtain the predicted state information corresponding to the current moment.
Specifically, taking the current time as the time k and the previous time of the current time as the time k-1 as an example, it can be expressed as:
x̂_k^- = A x̂_{k-1} + B u_{k-1}

in the above formula, x̂_k^- denotes the predicted state information corresponding to the current time, and x̂_{k-1} denotes the actual state information corresponding to the previous time. As described above, the implementation scenario of the present disclosure describes obtaining the actual state information x̂_k corresponding to the current time, so the actual state information x̂_{k-1} of the previous time can be obtained by referring to the steps disclosed in this implementation scenario, and is not described herein again. In particular, when k is equal to 0, the actual state information x̂_0 may be initialized to 0. u_{k-1} denotes the actual driving information corresponding to the previous time, which may include the travel angle information, the motor driving information, and the travel speed information of the robot; its specific obtaining manner may refer to the related description in the foregoing disclosed embodiment. A denotes the state transition parameter of the robot, which may refer to the foregoing description. B denotes the input state transition parameter, which is used to describe the conversion relationship between the input actual driving information and the state information; the input actual driving information can thus be converted into state information through the input state transition parameter B and then combined with the actual state information of the robot corresponding to the previous time to obtain the predicted state information of the robot corresponding to the current time, i.e., the state information the robot should theoretically have at the current time.
Step S24: and fusing the predicted state information of the current moment and the measured state information of the current moment to obtain the actual state information of the robot corresponding to the current moment.
Specifically, taking the current time as the time k and the previous time of the current time as the time k-1 as an example, it can be expressed as:
x̂_k = x̂_k^- + K_k (z_k - H x̂_k^-)

in the above formula, x̂_k denotes the actual state information corresponding to the current time, x̂_k^- denotes the predicted state information corresponding to the current time, K_k denotes the filter gain corresponding to the current time, z_k denotes the measurement state information corresponding to the current time, and H denotes the transformation parameter from the state information to the measurement information, which may refer to the related description above. That is, (z_k - H x̂_k^-) represents the residual between the measured state information and the predicted state information, and the filter gain K_k is used to correct the predicted state information with this residual, thereby obtaining the actual state information x̂_k of the robot corresponding to the current time.
Step S25: and updating the prior estimation covariance corresponding to the current moment by using the filter gain and the transformation parameter to obtain the posterior estimation covariance corresponding to the current moment.
Specifically, taking the current time as the time k and the previous time of the current time as the time k-1 as an example, it can be expressed as:
P_k = (I - K_k H) P_k^-

in the above formula, P_k denotes the posterior estimated covariance corresponding to the current time, I denotes the identity matrix, K_k denotes the filter gain in matrix form, H denotes the transformation parameter in matrix form, and P_k^- denotes the prior estimated covariance corresponding to the current time in matrix form. In particular, when k is equal to 0, the posterior estimated covariance may be initialized to an all-zero matrix.
In this way, the prior estimated covariance corresponding to the current time is updated to obtain the posterior estimated covariance corresponding to the current time, so that the actual state information corresponding to the next time (i.e., time k + 1) can be determined by repeating the steps in the embodiment of the present disclosure; cycling in this manner, the actual state information of the robot corresponding to each time during travel can be determined, which is not described in detail herein.
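The five steps S21-S25 above can be sketched end to end with NumPy. The constant-velocity motion model A, the input parameter B, the identity transformation parameter H, and all numeric values are illustrative assumptions rather than the disclosure's concrete configuration (and the scalar noises Q and R are lifted to diagonal matrices here):

```python
import numpy as np

def kalman_step(x_prev, P_prev, z_k, u_prev, A, B, H, Q, R):
    """One pass of steps S21-S25 for a state vector [p, v]."""
    # S21: prior estimated covariance  P_k^- = A P_{k-1} A^T + Q
    P_prior = A @ P_prev @ A.T + Q
    # S22: filter gain  K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}
    K = P_prior @ H.T @ np.linalg.inv(H @ P_prior @ H.T + R)
    # S23: predicted state  x̂_k^- = A x̂_{k-1} + B u_{k-1}
    x_pred = A @ x_prev + B @ u_prev
    # S24: fuse prediction and measurement  x̂_k = x̂_k^- + K_k (z_k - H x̂_k^-)
    x_post = x_pred + K @ (z_k - H @ x_pred)
    # S25: posterior estimated covariance  P_k = (I - K_k H) P_k^-
    P_post = (np.eye(len(x_prev)) - K @ H) @ P_prior
    return x_post, P_post

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity motion model (assumption)
B = np.array([[0.0], [dt]])            # input maps commanded acceleration to speed
H = np.eye(2)                          # identity transformation parameter
Q = 0.01 * np.eye(2)                   # state transition noise as a diagonal matrix
R = 0.04 * np.eye(2)                   # measurement interference noise, likewise

# k = 0 initialization: actual state and posterior covariance all zeros.
x, P = np.zeros(2), np.zeros((2, 2))
for k in range(1, 50):
    z_k = np.array([0.1 * k, 1.0])     # made-up measurements: p grows, v = 1
    u = np.array([0.0])                # no commanded acceleration
    x, P = kalman_step(x, P, z_k, u, A, B, H, Q, R)
```

In the scheme of the disclosure, Q and R would not be fixed constants as here but recomputed each step from the actual driving information and the measurement window, which is what makes the filter adaptive.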
According to the scheme, the reference information of the robot is acquired, and the reference information comprises at least one of the following information: the robot corresponds to the measured state information of a plurality of moments and the actual driving information of the robot corresponding to the current moment, and the state noise of the robot is determined based on the reference information, so that the actual state information of the robot corresponding to the current moment is obtained by using the state noise, and further, in the process of determining the state, simulation can be performed without using a large number of particles, and the speed of state determination is improved. In addition, the state noise is determined according to the measured state information at a plurality of moments and/or the actual driving information at the current moment, and the noise can be measured from the external measuring angle of the robot and/or the self state angle of the robot, so that the state noise is closer to the actual condition, and the accuracy of the subsequently determined actual state information is improved.
In some disclosed embodiments, in order to enable a user to timely perceive abnormal state noise and improve user experience, a preset prompt may be performed when the state noise does not satisfy a preset noise condition. The preset prompt may be implemented in at least one of sound, light and text, for example, playing a prompt voice, or lighting a prompt lamp, or outputting a prompt text, and the like, which is not limited herein.
In a specific implementation scenario of the disclosure, the state noise may include measurement interference noise obtained by using a plurality of pieces of measurement state information, and the specific obtaining manner may refer to relevant steps in the foregoing embodiment of the disclosure, which is not described herein again. In addition, the preset noise condition may include that the measured interference noise is smaller than a first noise threshold, and a specific value of the first noise threshold may be set according to an actual situation. If the measured interference noise does not meet the preset noise condition, a first early warning message can be output to prompt that the state measurement is interfered, so that a user can sense the interference in time when the state measurement is interfered, and the user experience is improved.
In another specific implementation scenario of the disclosure, the state noise may include state transition noise obtained by using actual driving information at the current time, and the specific obtaining manner may refer to relevant steps in the foregoing embodiment of the disclosure, which is not described herein again. In addition, the preset noise condition may include that the state transition noise is smaller than a second noise threshold, and a specific value of the second noise threshold may be set according to an actual situation, which is not limited herein. If the state transition noise does not meet the preset noise condition, a second early warning message is output to prompt that the robot has the risk of vehicle body slip, so that a user can sense the risk of vehicle body slip in time when the robot has the risk of vehicle body slip, and user experience is improved.
The first warning message and the second warning message may be implemented in at least one of sound, light and text, for example, playing a prompt voice, or lighting a prompt lamp, or outputting a prompt text, and the like, which is not limited herein.
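A minimal sketch of the preset-prompt logic described above; the threshold values and message strings are assumptions:

```python
def noise_warnings(measurement_noise, transition_noise,
                   first_threshold=1.0, second_threshold=1.0):
    """Return early-warning messages for noises violating the preset condition.

    The preset noise condition is that each noise stays BELOW its threshold;
    a message is emitted when the condition is not satisfied.
    """
    warnings = []
    if measurement_noise >= first_threshold:
        # first early warning: the state measurement is interfered
        warnings.append("state measurement is interfered")
    if transition_noise >= second_threshold:
        # second early warning: risk of vehicle body slip
        warnings.append("risk of vehicle body slip")
    return warnings
```

The returned messages could equally be routed to a sound, light, or text prompt, as the text describes.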
Referring to fig. 3, fig. 3 is a schematic diagram of a state determining apparatus 30 of a robot according to an embodiment of the present disclosure. The state determining apparatus 30 of the robot specifically includes a measurement state acquiring module 31, a state noise determining module 32, and an actual state acquiring module 33. The measurement state acquiring module 31 is configured to acquire reference information of the robot, where the reference information includes at least one of: measurement state information of the robot corresponding to several times, and actual travel information of the robot corresponding to the current time; the state noise determining module 32 is configured to determine the state noise of the robot based on the reference information; and the actual state acquiring module 33 is configured to obtain the actual state information of the robot corresponding to the current time by using the state noise.
According to the scheme, the reference information of the robot is acquired, and the reference information comprises at least one of the following information: the robot corresponds to the measured state information of a plurality of moments and the actual driving information of the robot corresponding to the current moment, and the state noise of the robot is determined based on the reference information, so that the actual state information of the robot corresponding to the current moment is obtained by using the state noise, and further, in the process of determining the state, simulation can be performed without using a large number of particles, and the speed of state determination is improved. In addition, the state noise is determined according to the measured state information at a plurality of moments and/or the actual driving information at the current moment, and the noise can be measured from the external measuring angle of the robot and/or the self state angle of the robot, so that the state noise is closer to the actual condition, and the accuracy of the subsequently determined actual state information is improved.
In some disclosed embodiments, the state noise determination module 32 includes a measurement interference determination submodule for determining a measurement interference noise of the robot using the measurement state information corresponding to the current time and several times before the current time, and the state noise determination module 32 includes a state transition determination submodule for determining a state transition noise of the robot using the actual travel information at the current time.
Different from the embodiment disclosed in the foregoing, the measurement interference noise of the robot is determined by using the measurement state information corresponding to the current time and a plurality of times before the current time, so that the noise of the robot can be determined from an external measurement angle, and the external interference of the robot in the driving process can be measured; the state transition noise of the robot is determined by using the actual driving information at the current moment, so that the noise of the robot can be determined from the perspective of the self state of the robot, and the internal interference of the robot in the driving process can be measured.
In some disclosed embodiments, the measurement interference determination submodule includes a dispersion obtaining unit configured to obtain a dispersion degree of the measurement state information at a current time and a plurality of times before the current time, and the measurement interference determination submodule includes a noise determination unit configured to determine the measurement interference noise using the dispersion degree.
Different from the embodiment disclosed in the foregoing, the discrete degree of the measurement state information at the current time and a plurality of times before the current time is utilized, and the measurement interference noise is determined by utilizing the discrete degree, so that the external interference of the robot in the driving process can be accurately measured.
In some disclosed embodiments, the degree of dispersion of the measurement state information at the current time and several times before the current time is the standard deviation of the measurement state information at the current time and several times before the current time.
Different from the embodiment disclosed in the foregoing, the discrete degree of the measurement state information at the current time and a plurality of times before the current time is set as the standard deviation of the measurement state information at the current time and a plurality of times before the current time, so that the complexity and the calculation amount for determining the discrete degree can be reduced, and the speed for determining the state can be increased.
In some disclosed embodiments, the noise determination unit is specifically configured to use a product between the degree of dispersion and a preset gain parameter as the measured interference noise.
Different from the embodiment disclosed in the foregoing, the product between the degree of dispersion and the preset gain parameter is used as the measurement interference noise, which can be beneficial to improving the accuracy of the measurement interference noise and improving the accuracy of the state determination.
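Purely as an illustrative sketch (not the patented implementation), the measurement interference noise described above can be computed as the standard deviation of a sliding window of measured states scaled by a gain parameter; the function name, the window values, and the gain are assumptions for illustration:

```python
import numpy as np

def measurement_interference_noise(measurements, gain=1.0):
    """Measurement interference noise as (degree of dispersion) x (gain),
    where the dispersion is the standard deviation of the measured state
    at the current moment and several preceding moments."""
    window = np.asarray(measurements, dtype=float)
    return gain * window.std(axis=0)

# five recent x-position measurements of the robot (hypothetical values)
noise = measurement_interference_noise([1.0, 1.1, 0.9, 1.05, 0.95], gain=2.0)
```

A larger spread of recent measurements thus directly raises the estimated interference noise, which later shifts the filter toward trusting the prediction over the measurement.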
In some disclosed embodiments, the actual travel information includes travel angle information, motor drive information, and travel speed information of the robot, and the state transition determination submodule is specifically configured to obtain a state transition noise of the robot using at least one of the first state noise and the second state noise; wherein the first state noise is determined using the travel angle information and the travel speed information, and the second state noise is determined using the motor drive information and the travel speed information.
Different from the embodiment disclosed in the foregoing, the actual driving information is set to include the driving angle information, the motor driving information, and the driving speed information of the robot, so that the state transition noise of the robot is obtained by using at least one of the first state noise and the second state noise, where the first state noise is determined by using the driving angle information and the driving speed information, and the second state noise is determined by using the motor driving information and the driving speed information, which can be beneficial to improving the accuracy of the state transition noise.
In some disclosed embodiments, the robot includes driving wheels for driving the robot to travel and steering wheels for changing a travel direction of the robot, the travel speed information includes an actual speed difference between the driving wheels of the robot, the travel angle information includes an actual steering angle of the steering wheels of the robot, the first state noise determining unit includes a first mapping subunit for mapping the actual steering angle using a first mapping relationship between the speed difference and the steering angle to obtain a theoretical speed difference corresponding to the actual steering angle, and the first state noise determining unit includes a first state noise determining subunit for determining the first state noise using a difference between the actual speed difference and the theoretical speed difference.
Different from the embodiment disclosed in the foregoing, the travel speed information is set to include an actual speed difference between the robot driving wheels, and the travel angle information is set to include an actual steering angle of the robot steering wheel, so that the actual steering angle is mapped by using a first mapping relationship between the speed difference and the steering angle, a theoretical speed difference corresponding to the actual steering angle is obtained, and the first state noise is determined by using a difference between the actual speed difference and the theoretical speed difference, so that the first state noise of the robot can be determined from the angle of the robot steering wheel.
In some disclosed embodiments, the robot includes driving wheels for driving the robot to travel, the travel speed information includes an actual average speed of the driving wheels of the robot, the motor driving information includes an actual average driving signal value of a motor of the robot, the second state noise determining unit includes a second mapping subunit for performing a mapping process on the actual average driving signal value using a second mapping relationship between the average speed and the average driving signal value to obtain a theoretical average speed corresponding to the actual average driving signal value, and the second state noise determining unit includes a second state noise determining subunit for determining the second state noise using a difference between the actual average speed and the theoretical average speed.
Unlike the previously disclosed embodiment, the travel speed information is set to include the actual average speed of the robot driving wheels, and the motor driving information is set to include the actual average driving signal value of the robot motor, so that the actual average driving signal value is mapped using the second mapping relationship between the average speed and the average driving signal value to obtain the theoretical average speed corresponding to the actual average driving signal value, and the second state noise is determined using the difference between the actual average speed and the theoretical average speed, so that the second state noise of the robot can be determined from the perspective of the robot driving wheels.
In some disclosed embodiments, the first state noise determination subunit is specifically configured to take a square of a difference between the actual speed difference and the theoretical speed difference as the first state noise, and the second state noise determination subunit is specifically configured to take a square of a difference between the actual average speed and the theoretical average speed as the second state noise.
Different from the foregoing disclosed embodiment, taking the square of the difference between the actual speed difference and the theoretical speed difference as the first state noise and the square of the difference between the actual average speed and the theoretical average speed as the second state noise can reduce the complexity and the calculation amount of the calculation of the first state noise and the second state noise, which is beneficial to improving the speed of state determination.
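As a sketch under assumed mapping relationships (the patent does not specify the concrete form of the first and second mappings; the linear lambdas below are hypothetical), the first and second state noises reduce to squared residuals between actual and theoretical quantities:

```python
def first_state_noise(actual_speed_diff, actual_steering_angle, angle_to_speed_diff):
    """First state noise: squared difference between the actual driving-wheel
    speed difference and the theoretical one mapped from the steering angle."""
    theoretical_diff = angle_to_speed_diff(actual_steering_angle)
    return (actual_speed_diff - theoretical_diff) ** 2

def second_state_noise(actual_avg_speed, actual_avg_signal, signal_to_speed):
    """Second state noise: squared difference between the actual average
    driving-wheel speed and the theoretical one mapped from the motor signal."""
    theoretical_speed = signal_to_speed(actual_avg_signal)
    return (actual_avg_speed - theoretical_speed) ** 2

# hypothetical linear mapping relationships, for illustration only
q1 = first_state_noise(0.30, 10.0, lambda angle: 0.025 * angle)    # theoretical diff 0.25
q2 = second_state_noise(1.9, 200.0, lambda signal: 0.01 * signal)  # theoretical speed 2.0
```

A nonzero residual indicates the wheels are not moving as the steering or motor commands predict, which is exactly the internal (slip-like) interference the state transition noise is meant to capture.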
In some disclosed embodiments, the actual state obtaining module 33 is specifically configured to process the actual state information of the robot corresponding to the previous moment and the measured state information of the robot corresponding to the current moment by using the state noise, to obtain the actual state information of the robot corresponding to the current moment.
Different from the embodiment disclosed in the foregoing, the measured state information corresponding to the current moment and the actual state information corresponding to the previous moment of the robot are processed by using the state noise, so that the robot can balance the measured state information at the current moment against the actual state information at the previous moment; the determined actual state information is thus corrected relative to the measured state information, which can improve the accuracy of determining the state of the robot.
In some disclosed embodiments, the actual state obtaining module 33 is specifically configured to determine a filter gain based on the state noise, predict, by using kalman filtering of the filter gain, actual state information of the robot corresponding to a previous time and actual driving information of the robot corresponding to the previous time to obtain predicted state information corresponding to a current time, and fuse the predicted state information of the current time and measured state information of the current time to obtain actual state information of the robot corresponding to the current time.
Different from the embodiment disclosed in the foregoing, the filtering gain is determined based on the state noise, and the kalman filtering of the filtering gain is used to predict the actual state information of the robot corresponding to the previous time and the actual driving information of the previous time to obtain the predicted state information corresponding to the current time, and the predicted state information of the current time is fused with the measured state information of the current time, so that the robustness to the external signal can be enhanced, and the actual state information corresponding to the current time can be accurately determined.
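As a minimal one-dimensional sketch of this predict-and-fuse step (the actual filter is multidimensional; the scalar model and variable names are assumptions), a Kalman update whose filter gain is derived from the state transition noise q and the measurement interference noise r could look like:

```python
def kalman_step(x_prev, p_prev, velocity, dt, z_meas, q_state, r_meas):
    """One adaptive Kalman step for a 1-D robot position.

    q_state: state transition noise (process noise)
    r_meas:  measurement interference noise (measurement noise)
    """
    # predict from the previous actual state and the actual driving information
    x_pred = x_prev + velocity * dt
    p_pred = p_prev + q_state
    # filter gain determined by the state noise
    k = p_pred / (p_pred + r_meas)
    # fuse the predicted state with the measured state at the current moment
    x_cur = x_pred + k * (z_meas - x_pred)
    p_cur = (1.0 - k) * p_pred
    return x_cur, p_cur

x, p = kalman_step(x_prev=0.0, p_prev=1.0, velocity=1.0, dt=1.0,
                   z_meas=1.2, q_state=1.0, r_meas=1.0)
```

When r_meas grows (noisy external measurement), k shrinks and the fused state leans on the prediction; when q_state grows (wheel slip), k rises and the measurement dominates, which is the robustness the paragraph above describes.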
In some disclosed embodiments, the state determining apparatus 30 of the robot further includes a prompting module for performing a preset prompt when the state noise does not satisfy the preset noise condition.
Different from the embodiment disclosed in the foregoing, when the state noise does not satisfy the preset noise condition, the preset prompt is performed, so that the user can perceive the abnormal state noise, and the user experience is improved.
In some disclosed embodiments, the state noise comprises: measurement interference noise obtained by utilizing the plurality of pieces of measurement state information, and the preset noise condition includes: the measurement interference noise is smaller than a first noise threshold; the prompting module comprises a first early warning sub-module, which is used for outputting a first early warning message when the measurement interference noise does not meet the preset noise condition, so as to prompt that the state measurement is interfered; and/or, the state noise comprises: state transition noise obtained by using the actual driving information at the current moment, and the preset noise condition includes: the state transition noise is smaller than a second noise threshold; the prompting module comprises a second early warning sub-module, which is used for outputting a second early warning message when the state transition noise does not meet the preset noise condition, so as to prompt that the robot has a risk of vehicle body slip.
Different from the embodiment disclosed in the foregoing, when the measurement interference noise does not satisfy the preset condition, the first warning message is output to prompt that the state measurement is interfered, so that the user can sense the state measurement in time when the state measurement is interfered, and the user experience is improved; when the state transition noise does not meet the preset condition, a second early warning message is output to prompt that the robot has the risk of vehicle body slip, so that a user can timely sense when the robot has the risk of vehicle body slip, and the user experience is improved.
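The two preset noise conditions amount to simple threshold comparisons; a sketch is given below, where the threshold values and the message strings are illustrative assumptions, not specifics from the patent:

```python
def check_state_noise(r_meas, q_state, first_threshold, second_threshold):
    """Return early warning messages for state noise that violates the
    preset noise conditions (each noise must stay below its threshold)."""
    warnings = []
    if not r_meas < first_threshold:       # first preset noise condition violated
        warnings.append("state measurement is interfered")
    if not q_state < second_threshold:     # second preset noise condition violated
        warnings.append("risk of vehicle body slip")
    return warnings

msgs = check_state_noise(r_meas=0.8, q_state=2.5,
                         first_threshold=1.0, second_threshold=2.0)
```

Here only the state transition noise exceeds its threshold, so only the slip warning would be raised.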
In some disclosed embodiments, the measurement state obtaining module 31 includes a data collecting sub-module, configured to collect an image of a surrounding environment of the robot to obtain environment image data corresponding to a current time, and the measurement state obtaining module 31 includes a measurement state determining sub-module, configured to determine measurement state information corresponding to the current time based on the environment image data at the current time, where the measurement state information and the actual state information both include at least one of: the position of the robot, the attitude of the robot, the speed of the robot.
Different from the embodiment disclosed above, environment image data corresponding to the current moment is obtained by collecting images of the environment around the robot, the measurement state information corresponding to the current moment of the robot is determined based on the environment image data at the current moment, and the measurement state information and the actual state information are each set to include at least one of the position of the robot, the posture of the robot, and the speed of the robot, so that the measurement state information corresponding to the current moment of the robot can be quickly obtained, which can improve the speed of determining the state of the robot.
Referring to fig. 4, fig. 4 is a schematic diagram of a framework of an embodiment of a robot 40 according to the present application. The robot 40 comprises a robot body 41, and a memory 42 and a processor 43 arranged on the robot body 41, the memory 42 and the processor 43 being coupled to each other, the processor 43 being configured to execute program instructions stored in the memory 42 to implement the steps of any of the above-described state determination method embodiments.
In particular, the processor 43 is configured to control itself and the memory 42 to implement the steps of any of the above-described embodiments of the state determination method. The processor 43 may also be referred to as a CPU (Central Processing Unit). The processor 43 may be an integrated circuit chip having signal processing capabilities. The processor 43 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 43 may be jointly implemented by a plurality of integrated circuit chips.
According to the scheme, the state noise is determined according to the obtained plurality of pieces of measured state information and/or the current actual driving information, and the noise can be measured from the external measuring angle of the robot and/or the self state angle of the robot, so that the state noise is closer to the actual condition, and the accuracy of the subsequently determined actual state information is improved.
In some disclosed embodiments, the robot 40 further includes a plurality of wheels disposed on the robot body 41, a motor for driving the wheels to rotate, and a steering engine for driving the wheels to steer. For example, the robot comprises a first wheel set and a second wheel set, where the first wheel set is connected with the motor to serve as driving wheels, and the second wheel set is connected with the steering engine to serve as steering wheels. In addition, in order to obtain the travel information of the robot, the robot 40 may further include a speed measuring component, which may be provided on the driving wheels to obtain the speed of the driving wheels. In a specific application scenario, the robot 40 includes four wheels, where the two front wheels serve as steering wheels and the two rear wheels serve as driving wheels, and each rear wheel is provided with an encoder to obtain the speed of the corresponding rear wheel. Thus, the robot can obtain the travel speed by reading the encoders and obtain the travel angle by reading the control record of the steering engine. In addition, the robot body 41 can be configured to have different shapes according to different practical application requirements. For example, for express delivery applications, the robot body 41 may be given the appearance of a vehicle such as a car or a minibus; alternatively, for service guiding applications, the robot body 41 may be given a generally human shape, a cartoon animal shape, and the like, which may be specifically configured according to the actual application requirements and are not exhaustively illustrated herein.
In some disclosed embodiments, to achieve obtaining the measurement state information, the robot 40 may further be provided with an image pickup device, whereby the measurement state information of the robot 40 is determined by an environmental image taken by the image pickup device.
Referring to fig. 5, fig. 5 is a block diagram illustrating an embodiment of a computer-readable storage medium 50 according to the present application. The computer readable storage medium 50 stores program instructions 501 capable of being executed by a processor, the program instructions 501 being for implementing the steps of any of the above-described state determination method embodiments.
According to the scheme, the state noise is determined according to the obtained plurality of pieces of measured state information and/or the current actual driving information, and the noise can be measured from the external measuring angle of the robot and/or the self state angle of the robot, so that the state noise is closer to the actual condition, and the accuracy of the subsequently determined actual state information is improved.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is merely one type of logical division, and an actual implementation may have another division, for example, a unit or a component may be combined or integrated with another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some interfaces, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on network elements. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.

Claims (13)

1. A method for determining a state of a robot, comprising:
acquiring reference information of the robot; wherein the reference information comprises at least one of: the robot corresponds to the measurement state information at a plurality of moments and corresponds to the actual running information at the current moment;
determining a state noise of the robot based on the reference information;
obtaining actual state information of the robot corresponding to the current moment by using the state noise;
if the state noise does not meet the preset noise condition, performing preset prompt;
wherein the state noise comprises: measuring interference noise obtained by using the measurement state information of the current moment and a plurality of moments before the current moment, wherein the preset noise condition comprises: the measurement interference noise is smaller than a first noise threshold, and if the state noise does not satisfy a preset noise condition, the performing of the preset prompt includes:
if the measured interference noise does not meet the preset noise condition, outputting a first early warning message to prompt that state measurement is interfered;
and/or, the state noise comprises: and state transition noise obtained by using the actual driving information at the current moment, wherein the preset noise condition comprises: the state transition noise is smaller than a second noise threshold, and if the state noise does not satisfy a preset noise condition, the performing of the preset prompt includes:
and if the state transition noise does not meet the preset noise condition, outputting a second early warning message to prompt that the robot has the risk of vehicle body slip.
2. The method of claim 1, wherein the determining the state noise of the robot based on the reference information comprises:
determining the measurement interference noise of the robot by using the measurement state information corresponding to the current moment and a plurality of moments before the current moment;
and/or determining the state transition noise of the robot by using the actual running information at the current moment.
3. The method of claim 2, wherein determining the measured interference noise of the robot using the measured state information corresponding to the current time and a number of times prior to the current time comprises:
obtaining the discrete degree of the measurement state information at the current moment and a plurality of moments before the current moment;
determining the measurement interference noise using the degree of dispersion.
4. The method according to claim 3, wherein the dispersion degree of the measurement status information at the current time and several moments before the current time is the standard deviation of the measurement status information at the current time and several moments before the current time; and/or
Said determining said measured interference noise using said degree of dispersion comprises:
and taking the product of the discrete degree and a preset gain parameter as the measurement interference noise.
5. The method according to any one of claims 2 to 4, wherein the actual travel information includes travel angle information, motor drive information, and travel speed information of the robot;
the determining the state transition noise of the robot by using the actual travel information at the current time includes:
obtaining the state transition noise of the robot by using at least one of the first state noise and the second state noise;
wherein the first state noise is determined using the travel angle information and the travel speed information, and the second state noise is determined using the motor drive information and the travel speed information.
6. The method of claim 5, wherein the robot includes driving wheels for driving the robot to travel and steering wheels for changing a travel direction of the robot, the travel speed information includes an actual speed difference between the robot driving wheels, and the travel angle information includes an actual steering angle of the robot steering wheels; before the deriving the state transition noise of the robot using at least one of the first state noise, the second state noise, the method further comprises:
mapping the actual steering angle by using a first mapping relation between the speed difference and the steering angle to obtain a theoretical speed difference corresponding to the actual steering angle;
determining the first state noise using a difference between the actual speed difference and the theoretical speed difference; and/or,
the robot comprises driving wheels, the driving wheels are used for driving the robot to run, the running speed information comprises the actual average speed of the driving wheels of the robot, and the motor driving information comprises the actual average driving signal value of the motor of the robot; before the deriving the state transition noise of the robot using at least one of the first state noise, the second state noise, the method further comprises:
mapping the actual average driving signal value by using a second mapping relation between the average speed and the average driving signal value to obtain a theoretical average speed corresponding to the actual average driving signal value;
determining the second state noise using a difference between the actual average velocity and the theoretical average velocity.
7. The method of claim 6, wherein said determining the first state noise using the difference between the actual speed difference and the theoretical speed difference comprises:
taking a square of a difference between the actual speed difference and the theoretical speed difference as the first state noise;
the determining the second state noise using the difference between the actual average velocity and the theoretical average velocity includes:
taking the square of the difference between the actual average velocity and the theoretical average velocity as the second state noise.
8. The method of claim 1, wherein the obtaining, by using the state noise, actual state information of the robot corresponding to the current time comprises:
and processing the actual state information of the robot at the previous moment and the measured state information of the robot at the current moment by using the state noise to obtain the actual state information of the robot at the current moment.
9. The method according to claim 8, wherein the processing the actual state information of the robot corresponding to the previous time and the measured state information of the robot corresponding to the current time by using the state noise to obtain the actual state information of the robot corresponding to the current time comprises:
determining a filter gain based on the state noise;
predicting actual state information of the robot corresponding to the previous moment and actual running information of the robot corresponding to the previous moment by using Kalman filtering of the filtering gain to obtain predicted state information corresponding to the current moment;
and fusing the predicted state information of the current moment with the measured state information of the current moment to obtain the actual state information of the robot corresponding to the current moment.
10. The method of claim 1, wherein the obtaining reference information for the robot comprises:
acquiring images of the surrounding environment of the robot to obtain environment image data corresponding to the current moment;
determining the measurement state information of the robot corresponding to the current moment based on the environmental image data of the current moment;
the measurement status information and the actual status information each include at least one of: a position of the robot, a pose of the robot, a velocity of the robot.
11. A state determining apparatus of a robot, characterized by comprising:
the measurement state acquisition module is used for acquiring reference information of the robot; wherein the reference information comprises at least one of: the robot corresponds to the measurement state information at a plurality of moments and corresponds to the actual running information at the current moment;
a state noise determination module for determining state noise of the robot based on the reference information;
the actual state acquisition module is used for acquiring actual state information of the robot corresponding to the current moment by using the state noise;
the prompting module is used for carrying out preset prompting when the state noise does not meet the preset noise condition;
wherein the state noise comprises: measuring interference noise obtained by using the measurement state information of the current moment and a plurality of moments before the current moment, wherein the preset noise condition comprises: the measurement interference noise is smaller than a first noise threshold, and the prompting module comprises a first early warning sub-module and is used for outputting a first early warning message to prompt that the state measurement is interfered when the measurement interference noise does not meet the preset noise condition; and/or, the state noise comprises: and state transition noise obtained by using the actual driving information at the current moment, wherein the preset noise condition comprises: the state transition noise is smaller than a second noise threshold, and the prompting module comprises a second early warning sub-module and is used for outputting a second early warning message when the state transition noise does not meet the preset noise condition so as to prompt that the robot has the risk of vehicle body slippage.
12. A robot comprising a robot body, and a memory and a processor disposed on the robot body, the processor and the memory being coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the state determination method of any one of claims 1 to 10.
13. A computer readable storage medium having stored thereon program instructions which, when executed by a processor, implement the state determination method of any one of claims 1 to 10.
CN202010872662.3A 2020-08-26 2020-08-26 Method and device for determining state of robot, robot and storage medium Active CN112025706B (en)

Priority Applications (8)

Application Number Priority Date Filing Date Title
CN202010872662.3A CN112025706B (en) 2020-08-26 2020-08-26 Method and device for determining state of robot, robot and storage medium
CN202111515984.3A CN114260890B (en) 2020-08-26 2020-08-26 Method and device for determining state of robot, robot and storage medium
CN202111475203.2A CN114131604B (en) 2020-08-26 2020-08-26 Method and device for determining state of robot, robot and storage medium
KR1020227019722A KR20220084434A (en) 2020-08-26 2021-04-19 State determining method and apparatus, robot, storage medium, and computer program
KR1020217039198A KR102412066B1 (en) 2020-08-26 2021-04-19 State determination method and device, robot, storage medium and computer program
PCT/CN2021/088224 WO2022041797A1 (en) 2020-08-26 2021-04-19 State determining method and apparatus, robot, storage medium, and computer program
JP2021566210A JP2022550231A (en) 2020-08-26 2021-04-19 State determination method and device, robot, storage medium and computer program
KR1020227019723A KR20220084435A (en) 2020-08-26 2021-04-19 State determining method and apparatus, robot, storage medium, and computer program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010872662.3A CN112025706B (en) 2020-08-26 2020-08-26 Method and device for determining state of robot, robot and storage medium

Related Child Applications (2)

Application Number Title Priority Date Filing Date
CN202111475203.2A Division CN114131604B (en) 2020-08-26 2020-08-26 Method and device for determining state of robot, robot and storage medium
CN202111515984.3A Division CN114260890B (en) 2020-08-26 2020-08-26 Method and device for determining state of robot, robot and storage medium

Publications (2)

Publication Number Publication Date
CN112025706A CN112025706A (en) 2020-12-04
CN112025706B true CN112025706B (en) 2022-01-04

Family

ID=73579964

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202010872662.3A Active CN112025706B (en) 2020-08-26 2020-08-26 Method and device for determining state of robot, robot and storage medium
CN202111515984.3A Active CN114260890B (en) 2020-08-26 2020-08-26 Method and device for determining state of robot, robot and storage medium
CN202111475203.2A Active CN114131604B (en) 2020-08-26 2020-08-26 Method and device for determining state of robot, robot and storage medium

Family Applications After (2)

Application Number Title Priority Date Filing Date
CN202111515984.3A Active CN114260890B (en) 2020-08-26 2020-08-26 Method and device for determining state of robot, robot and storage medium
CN202111475203.2A Active CN114131604B (en) 2020-08-26 2020-08-26 Method and device for determining state of robot, robot and storage medium

Country Status (4)

Country Link
JP (1) JP2022550231A (en)
KR (3) KR20220084434A (en)
CN (3) CN112025706B (en)
WO (1) WO2022041797A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112025706B (en) * 2020-08-26 2022-01-04 北京市商汤科技开发有限公司 Method and device for determining state of robot, robot and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090028274A (en) * 2007-09-14 2009-03-18 삼성전자주식회사 Apparatus and method for calculating position of robot
CN108128308A (en) * 2017-12-27 2018-06-08 长沙理工大学 A kind of vehicle state estimation system and method for distributed-driving electric automobile
CN108896049A (en) * 2018-06-01 2018-11-27 重庆锐纳达自动化技术有限公司 A kind of motion positions method in robot chamber
CN109443356A (en) * 2019-01-07 2019-03-08 大连海事大学 A kind of the unmanned boat Position And Velocity estimation structure and design method of the noise containing measurement
CN110422175A (en) * 2019-07-31 2019-11-08 上海智驾汽车科技有限公司 Vehicle state estimation method and device, electronic equipment, storage medium, vehicle
CN110861123A (en) * 2019-11-14 2020-03-06 华南智能机器人创新研究院 Method and device for visually monitoring and evaluating running state of robot
CN111044053A (en) * 2019-12-31 2020-04-21 三一重工股份有限公司 Navigation method and device of single-steering-wheel unmanned vehicle and single-steering-wheel unmanned vehicle
CN111136660A (en) * 2020-02-19 2020-05-12 清华大学深圳国际研究生院 Robot pose positioning method and system

Family Cites Families (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0663930B2 (en) * 1989-10-04 1994-08-22 日産自動車株式会社 Vehicle state quantity estimation device
JP2002331478A (en) * 2001-05-02 2002-11-19 Yaskawa Electric Corp Operating speed determining method for robot
KR101234797B1 (en) * 2006-04-04 2013-02-20 삼성전자주식회사 Robot and method for localization of the robot using calculated covariance
KR100877071B1 (en) * 2007-07-18 2009-01-07 삼성전자주식회사 Method and apparatus of pose estimation in a mobile robot based on particle filter
KR101038581B1 (en) * 2008-10-31 2011-06-03 한국전력공사 Method, system, and operation method for providing surveillance to power plant facilities using track-type mobile robot system
KR101086364B1 (en) * 2009-03-20 2011-11-23 삼성중공업 주식회사 Robot parameter estimation method using Kalman filter
JP5803155B2 (en) * 2011-03-04 2015-11-04 セイコーエプソン株式会社 Robot position detection device and robot system
CN102862666B (en) * 2011-07-08 2014-12-10 中国科学院沈阳自动化研究所 Underwater robot state and parameter joint estimation method based on self-adaption unscented Kalman filtering (UKF)
KR101390776B1 (en) * 2013-03-14 2014-04-30 인하대학교 산학협력단 Localization device, method and robot using fuzzy extended kalman filter algorithm
KR102009481B1 (en) * 2013-12-26 2019-08-09 한화디펜스 주식회사 Apparatus and method for controllling travel of vehicle
US9517561B2 (en) * 2014-08-25 2016-12-13 Google Inc. Natural pitch and roll
JP6541026B2 (en) * 2015-05-13 2019-07-10 株式会社Ihi Apparatus and method for updating state data
KR101789776B1 (en) * 2015-12-09 2017-10-25 세종대학교산학협력단 Bias correcting apparatus for yaw angle estimation of mobile robots and method thereof
CN106156790B (en) * 2016-06-08 2020-04-14 北京工业大学 Distributed cooperation algorithm and data fusion mechanism applied to sensor network
JP6770393B2 (en) * 2016-10-04 2020-10-14 株式会社豊田中央研究所 Tracking device and program
KR20180068102A (en) * 2016-12-13 2018-06-21 주식회사 큐엔티 Method and server for providing robot fault monitoring prognostic service
CN106956282B (en) * 2017-05-18 2019-09-13 广州视源电子科技股份有限公司 Angular acceleration determination method, angular acceleration determination device, robot and storage medium
CN107644441A (en) * 2017-08-30 2018-01-30 南京大学 Multi-foot robot complex road condition based on three-dimensional imaging is separated into point methods of stopping over
CN107748562A (en) * 2017-09-30 2018-03-02 湖南应用技术学院 A kind of comprehensive service robot
CN109959381B (en) * 2017-12-22 2021-06-04 深圳市优必选科技有限公司 Positioning method, positioning device, robot and computer readable storage medium
CN110361003B (en) * 2018-04-09 2023-06-30 中南大学 Information fusion method, apparatus, computer device and computer readable storage medium
CN108710295B (en) * 2018-04-20 2021-06-18 浙江工业大学 Robot following method based on progressive volume information filtering
CN108621161B (en) * 2018-05-08 2021-03-02 中国人民解放军国防科技大学 Method for estimating body state of foot type robot based on multi-sensor information fusion
CN108645415A (en) * 2018-08-03 2018-10-12 上海海事大学 A kind of ship track prediction technique
CN109813307A (en) * 2019-02-26 2019-05-28 大连海事大学 A kind of navigation system and its design method of unmanned boat Fusion
CN112025706B (en) * 2020-08-26 2022-01-04 北京市商汤科技开发有限公司 Method and device for determining state of robot, robot and storage medium


Also Published As

Publication number Publication date
KR20220027832A (en) 2022-03-08
WO2022041797A1 (en) 2022-03-03
KR102412066B1 (en) 2022-06-22
KR20220084435A (en) 2022-06-21
JP2022550231A (en) 2022-12-01
CN114260890A (en) 2022-04-01
CN114131604A (en) 2022-03-04
CN114131604B (en) 2023-11-03
CN112025706A (en) 2020-12-04
CN114260890B (en) 2023-11-03
KR20220084434A (en) 2022-06-21

Similar Documents

Publication Publication Date Title
JP7211307B2 (en) Distance estimation using machine learning
US11479243B2 (en) Uncertainty prediction based deep learning
CN111332309B (en) Driver monitoring system and method of operating the same
CN113672845A (en) Vehicle track prediction method, device, equipment and storage medium
JP6544594B2 (en) INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, PROGRAM, AND VEHICLE
EP3107069A1 (en) Object detection apparatus, object detection method, and mobile robot
US20220292837A1 (en) Monocular depth supervision from 3d bounding boxes
US11403854B2 (en) Operating assistance method, control unit, operating assistance system and working device
CN112025706B (en) Method and device for determining state of robot, robot and storage medium
US10916018B2 (en) Camera motion estimation device, camera motion estimation method, and computer program product
CN113469042A (en) Truth value data determination, neural network training and driving control method and device
WO2020003764A1 (en) Image processing device, moving apparatus, method, and program
CN115115530B (en) Image deblurring method, device, terminal equipment and medium
CN116580376A (en) Learning method, learning device, moving body control method, and storage medium
US11983936B2 (en) Data collection device, vehicle control device, data collection system, data collection method, and storage medium
Wang et al. Time-to-contact control: improving safety and reliability of autonomous vehicles
JP7359084B2 (en) Emotion estimation device, emotion estimation method and program
WO2021230314A1 (en) Measurement system, vehicle, measurement device, measurement program, and measurement method
CN113034595B (en) Method for visual localization and related device, apparatus, storage medium
CN115880763A (en) Operator takeover prediction
JP2021190025A (en) Information processing device
CN117911979A (en) Data synchronization method, device, equipment and storage medium
JP2021047479A (en) Estimation device, estimation method and program
CN116466685A (en) Evaluation method, device, equipment and medium for automatic driving perception algorithm
JP2018131172A (en) Monitoring device, monitoring method, and computer program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40040554

Country of ref document: HK

GR01 Patent grant