JP2020173744A - Image processing method using machine learning and electronic control device using it - Google Patents


Info

Publication number
JP2020173744A
Authority
JP
Japan
Prior art keywords
sensor
data
output
control system
control device
Prior art date
Legal status
Pending
Application number
JP2019076767A
Other languages
Japanese (ja)
Inventor
Yuki Tanaka (田中 勇気)
Tatsuya Horiguchi (堀口 辰也)
Current Assignee
Hitachi Ltd
Original Assignee
Hitachi Ltd
Priority date
Filing date
Publication date
Application filed by Hitachi Ltd filed Critical Hitachi Ltd
Priority to JP2019076767A
Publication of JP2020173744A

Landscapes

  • Control Of Driving Devices And Active Controlling Of Vehicle (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

To provide a method that prevents false detection and false recognition of the surrounding environment, thereby maintaining the accuracy of an autonomous mobile body's peripheral recognition even when sensors become unusable under conditions where the surrounding environment cannot be modeled, and improving the operational stability and reliability of the autonomous mobile body.
SOLUTION: The control system comprises a plurality of monitoring sensors 1 to n and a control device including a data processing unit that learns from and processes the information input from the plurality of monitoring sensors. In addition to its normal training, the data processing unit is trained using data representing sensor failure, and means are provided to output a fixed value in place of a monitoring sensor's output when that sensor fails.
SELECTED DRAWING: Figure 2

Description

The present invention relates to a machine learning training method, an image processing method, a sensor information integration method, and an electronic control device using them.

In recent years, the application of machine learning in the field of image processing has advanced significantly, extending beyond object-type classification to free-space detection and the prediction of moving objects' directions and speed changes. These recognition technologies are highly anticipated as a means of recognizing complex surrounding environments, which is a prerequisite for movement control in, for example, robots and automobiles that move autonomously (hereinafter, autonomous mobile bodies).

On the other hand, such autonomous mobile bodies, especially those performing fully autonomous control in open environments without an operator present, are required to operate with high reliability and stability.

For example, in peripheral object recognition for autonomous vehicles, there is concern about situations of insufficient recognition performance that could create a collision risk during autonomous movement (hereinafter, sensor-unavailability situations), such as degraded recognition capability caused by sensor characteristics or environmental factors, or failures occurring in the sensors themselves.

Even under such circumstances, the autonomous mobile body is required to transition to a safe state or to continue control while maintaining a certain level of reliability. It is therefore essential to consider providing fault tolerance to the peripheral recognition that is a precondition for that control.

In general, an autonomous mobile body that requires a certain level of reliability handles sensor-unavailability situations by combining multiple sensors of the same type, or sensors of multiple types, and integrating their recognition information, thereby tolerating the failure or degraded recognition capability of some of the sensors.

Patent Document 1, cited below, discloses an example in which multiple types of sensors are used so that, when a specific sensor fails, its observed value is estimated from information acquired by the other sensors; by covering the sensor failure in this way, operation of the controlled engine can continue.

[Patent Document 1] Japanese Unexamined Patent Application Publication No. 2004-340151

On the other hand, the method of Patent Document 1 estimates and substitutes the observations of a failed sensor from the observations of the non-failed sensors on top of a physical phenomenon (engine operation) that can be modeled mathematically. Applying this idea to an autonomous mobile body is generally expected to be difficult: unlike cases that can be modeled to some extent from physical laws, modeling the surrounding environment of an autonomous mobile body involves many uncertain factors. Such difficult cases include objects that are hard to observe directly, for example objects hidden behind obstacles, and humans, whose behavior is complex. Consequently, the accuracy of observation with multiple sensors, and of the integrated computation over that observation information, becomes critical.

Accordingly, an object of the present invention is to provide a method for preventing false detection and false recognition of the surrounding environment even in sensor-unavailability situations under conditions where the surrounding environment is difficult to model, as with an autonomous mobile body, thereby improving the accuracy of the autonomous mobile body's peripheral recognition and, in turn, its operational stability and reliability.

To achieve this object, the control system of the present invention comprises a plurality of monitoring sensors 1 to n and a control device including a data processing unit that learns from and processes the input information from the plurality of monitoring sensors. In addition to its normal training, the data processing unit is trained using data corresponding to sensor failure, and means are provided to set a monitoring sensor's output to a fixed value when that sensor fails.

According to the present invention, the operational reliability of an autonomous mobile body can be improved by maintaining the accuracy of its recognition of the surrounding environment even in sensor-unavailability situations.

  • FIG. 1: Schematic configuration of the autonomous mobile body in the first embodiment of the present invention.
  • FIG. 2: Overall system configuration in the first embodiment.
  • FIG. 3: Flowchart of the computation performed by the controller in the first embodiment.
  • FIG. 4: Example of object detection in the first embodiment.
  • FIG. 5: Training scheme of the image recognition process 33 in the first embodiment.
  • FIG. 6: Processing example of the image recognition process 33 when the sensors are normal, in the first embodiment.
  • FIG. 7: Training scheme including sensor-A-unavailable situations, in the first embodiment.
  • FIG. 8: Training scheme including sensor-B-unavailable situations, in the first embodiment.
  • FIG. 9: Processing configuration of the image recognition process 33 covering sensor normal and abnormal cases, in the first embodiment.
  • FIG. 10: Example processing result when the fixed data is not used, in the first embodiment.
  • FIG. 11: Modification of the overall system configuration in the first embodiment.

Hereinafter, a first embodiment of the present invention is described with reference to the drawings.
FIG. 1 is a top view outlining the configuration of the autonomous mobile body 1 targeted by this embodiment. The autonomous mobile body 1 moves autonomously while judging its surroundings with its own sensor group 2. In this drawing, only the two camera systems of sensor group 2 that monitor the area ahead of the autonomous mobile body 1 (camera system A21 and camera system B22) are shown; the additional camera systems monitoring the left and right sides and the rear are omitted.

FIG. 2 shows the system configuration of the autonomous mobile body 1. The autonomous mobile body 1 comprises the sensor group 2, which observes the surrounding environment; the controller 3, which recognizes the surrounding environment from the sensor observations input from sensor group 2 and decides the actions of the autonomous mobile body; and the actuator group 4, comprising one or more actuators driven according to control command values given by the controller 3.

The sensor group 2 comprises two cameras, camera system A21 and camera system B22, each provided with an abnormality detection function that detects abnormalities or failures occurring in that sensor and notifies the controller. The abnormality detection function can detect, for example, physical sensor failures (damage, disconnection, etc.) as well as failure conditions specific to the sensor's characteristics (for a camera system, recognition failure due to backlighting, etc.), and outputs a sensor abnormality signal.

For simplicity, this embodiment shows the sensor group 2 composed of two sensors, camera system A21 and camera system B22, but the requirements of the present invention are not limited to this; any number of sensors of any kind may be used.

As control computations related to the movement of the autonomous mobile body 1, the controller 3 executes: sensor A preprocessing 31, which preprocesses the sensor observations input from camera system A21; sensor B preprocessing 32, which preprocesses the observations input from camera system B22; the image recognition process 33, which performs peripheral recognition from the sensor input data; and the movement control process 34, which decides the movement destination of the autonomous mobile body from the image recognition result. In addition to these movement-related computations, the controller 3 provides, as functions and data used when a sensor abnormality signal is output from camera system A21 or camera system B22: fixed data A35 and fixed data B36; selector A37, which switches the input to the image recognition process 33 between sensor A preprocessing 31 and fixed data A35; and selector B38, which switches the input to the image recognition process 33 between sensor B preprocessing 32 and fixed data B36.

Fixed data A35 and fixed data B36 are used as substitutes for the corresponding preprocessing results when camera system A21 or camera system B22 reports an abnormality. In this embodiment they have the same format as the outputs of sensor A preprocessing 31 and sensor B preprocessing 32, but take a value that the preprocessing can never output on this system (an invalid value). That is, when each preprocessing result is output within a fixed range (integer values 0 to 254), fixed data A35 and fixed data B36 are data in which every value is filled with 255. Note that the values of fixed data A35 and fixed data B36 need not be invalid values; they may also take fixed or variable values defined by the specification.
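As a rough sketch of this arrangement in Python (the flattened-grid layout, grid size, and all names are illustrative assumptions, not taken from the patent):

```python
VALID_MAX = 255      # 0-254 are valid preprocessing outputs; 255 is the invalid value
GRID_CELLS = 32 * 32  # hypothetical flattened occupancy-grid size

# Substitute data: every cell holds the invalid value 255, a value the
# preprocessing stage can never emit, so the downstream network can learn
# to recognize "this sensor is unavailable".
FIXED_DATA_A = [VALID_MAX] * GRID_CELLS
FIXED_DATA_B = [VALID_MAX] * GRID_CELLS

def select_input(preprocessed, sensor_ok, fixed_data):
    """Selector role (A37/B38): pass the preprocessing result through when
    the sensor is healthy, otherwise substitute the fixed invalid-value data."""
    return preprocessed if sensor_ok else fixed_data
```

The key design point is that the substitute value lies outside the valid output range, so no real observation can be confused with the failure marker.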

FIG. 3 shows the control flow realized by the above configuration in the controller 3 of the autonomous mobile body 1. Sensor data is read from camera system A21 and camera system B22 (S101). Next, the sensor abnormality signals from the abnormality detection function 23 of camera system A21 and the abnormality detection function 24 of camera system B22 are read (S102). If a sensor is normal, its sensor data is preprocessed (sensor A preprocessing 31 or sensor B preprocessing 32) and the result is sent to the image recognition process 33 (S1031). If a sensor is abnormal, the selector on that sensor's side (selector A37 or selector B38, as applicable) is used to select the corresponding fixed data, which is sent to the image recognition process 33 (S1032). This is repeated for all sensors connected to the autonomous mobile body 1 (S104); after confirming that all sensor data has been input, the image recognition process 33 is executed (S105). Finally, based on the recognition result of the surrounding environment, the movement control process 34 of the autonomous mobile body 1 is performed (S106), actuator command values are sent to the actuator group 4 (S107), and the controller's processing for the cycle ends.
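One control cycle of steps S101 to S107 can be sketched as follows (the sensor interface, dictionary layout, and callback names are assumptions made for illustration; they are not defined by the patent):

```python
def controller_step(sensors, preprocess, fixed_data, recognize, plan):
    """One control cycle: read each sensor, substitute fixed data on
    abnormality, run recognition, then produce a movement command."""
    inputs = []
    for name, sensor in sensors.items():           # S101: read sensor data
        data = sensor.read()
        if sensor.abnormal():                      # S102: abnormality signal set?
            inputs.append(fixed_data[name])        # S1032: use fixed substitute
        else:
            inputs.append(preprocess[name](data))  # S1031: normal preprocessing
    detection = recognize(inputs)                  # S105: image recognition
    return plan(detection)                         # S106/S107: actuator command
```

Each sensor is handled independently, so one failed camera only replaces its own channel while the healthy channel still feeds real observations into recognition.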

Through the above procedure the autonomous mobile body 1 is controlled: the actuator group 4 is driven while the sensor group 2 and the controller 3 recognize the surroundings. In this embodiment, the training method is illustrated with the case shown in FIG. 4, in which the autonomous mobile body 1 moves in a situation where a single object (person A5) is present nearby.

Hereinafter, FIGS. 5 to 10 illustrate an example training scheme for the image recognition process 33, which includes machine learning. In this embodiment, the image recognition process 33 includes a CNN (Convolutional Neural Network) trained in advance on a computer, and outputs object positions based on the inputs from camera system A21 and camera system B22 mounted on the autonomous mobile body 1.

In supervised learning, generally, the training dataset 6, consisting of the input/output pairs described above, is given as teacher data, and the parameters are tuned so that this dataset can be reproduced.

In the case shown in FIG. 4, person A is located diagonally ahead and to the right of the autonomous mobile body 1. As shown in FIG. 5, training is performed with sensor A preprocessing result 63 and sensor B preprocessing result 64 (the respective preprocessed images acquired by camera system A21 and camera system B22) as the normal-time input data 61, and the object detection position as the output data 62.

As a result, when camera system A21 and camera system B22 are operating normally, as in FIG. 6, the image recognition process 33 reflecting the training results can detect the object position correctly.

In this method, the image recognition process 33 is furthermore trained with the additional datasets shown in FIGS. 7 and 8. That is, in addition to the set of normal-time input data 61 and output data 62 described above, training uses a set pairing sensor-A-abnormal input data 65 (assuming an abnormality in camera system A21, with fixed data A35 substituted for sensor A preprocessing result 63) with output data 62, and a set pairing sensor-B-abnormal input data 66 (assuming an abnormality in camera system B22, with fixed data B36 substituted for sensor B preprocessing result 64) with output data 62.
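The construction of this augmented training set can be sketched as follows (the tuple layout and names are assumptions; the three emitted cases correspond to the normal, sensor-A-unavailable, and sensor-B-unavailable situations described above):

```python
INVALID = 255  # invalid value outside the 0-254 preprocessing output range

def build_training_set(normal_pairs, grid_cells=1024):
    """For each normal (input_A, input_B, target) sample, also emit variants
    in which one sensor channel is replaced by invalid-value fixed data while
    keeping the same target, so sensor-failure cases are learned alongside
    normal operation."""
    fixed = [INVALID] * grid_cells
    augmented = []
    for a, b, target in normal_pairs:
        augmented.append((a, b, target))      # normal case
        augmented.append((fixed, b, target))  # sensor A unavailable
        augmented.append((a, fixed, target))  # sensor B unavailable
    return augmented
```

Because the target output is unchanged across the three variants, the network is pushed to produce (approximately) the same detection even when one channel carries only the failure marker.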

As a result, as shown in FIG. 9, even when backlighting occurs on camera system B22 and the camera's recognition performance degrades, the process can produce the output learned in advance for sensor-B-abnormal input data 66 (or an output close to it), and the accuracy of the object detection position can be maintained to some extent. In this embodiment, the loss of camera system B22 enlarges the object detection position by two cells.

On the other hand, if this method is not used, then under the same backlighting the image recognition process 33 operates on the input of camera system B22 while its recognition is abnormal, as shown in FIG. 10. An image recognition process 33 trained only on the normal-time dataset (the correspondence between normal-time input data 61 and output data 62) then produces erroneous object positions, that is, false detections and missed detections, which risks increasing the likelihood that the autonomous mobile body 1 stops moving or collides.

According to this embodiment, by using the sensor abnormality detection functions 23 and 24, the predefined and pre-learned fixed data A35 and fixed data B36, and the selectors A37 and B38, the reliability of the peripheral recognition operation can be kept at a certain level even in sensor-unavailability situations caused by backlighting, failures, and the like, and the operational reliability of the autonomous mobile body 1 can thereby be improved.

The present invention is not limited to the embodiments described above and includes various modifications. For example, the functional configurations of the autonomous mobile body 1 and the controller 3 are not limited to the examples shown here. As shown in FIG. 11, the abnormality detection functions for sensor A and sensor B can also be implemented inside the controller 3. Sensor A abnormality detection process 391 and sensor B abnormality detection process 392 can, based on the characteristics of each sensor, adequately detect sensor abnormalities and sensor-unavailability situations from breaks in continuity with the immediately preceding time, for example a large illuminance change in a camera system or a jump in position coordinates in radar or lidar processing. Detection is also possible from the deviation relative to the result of the previous run of the image recognition process 33; in that case, processing can be performed assuming each of the sensor-A-abnormal and sensor-B-abnormal states, using fixed data A35 and fixed data B36 respectively, and the result with the smaller deviation adopted as the output of the image recognition process 33. Alternatively, sensor fixed value A and sensor fixed value B may be stored in sensor A and sensor B themselves, with each sensor outputting its fixed value when it judges itself to be abnormal, or a sensor may stop its output when it judges itself to be abnormal. Although a camera was used as the example of a peripheral recognition sensor observing the surroundings of the autonomous mobile body 1, lidar, radar, and the like are also possible examples.
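The continuity-based check mentioned above (a sudden illuminance change between consecutive camera frames, as under abrupt backlighting) could be sketched as follows; the threshold is an illustrative tuning parameter, not a value from the patent:

```python
def illuminance_jump(prev_frame, frame, threshold=60.0):
    """Flag a camera abnormality when the mean illuminance changes abruptly
    between two consecutive frames. Frames are flat sequences of pixel
    intensities; the threshold would be tuned per sensor in practice."""
    mean = lambda pixels: sum(pixels) / len(pixels)
    return abs(mean(frame) - mean(prev_frame)) > threshold
```

An analogous check for radar or lidar would compare consecutive position estimates and flag an implausibly large jump instead of an illuminance change.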

1: autonomous mobile body, 2: sensor group, 21: camera system A, 22: camera system B,
23: abnormality detection function of camera system A, 24: abnormality detection function of camera system B,
3: controller, 31: sensor A preprocessing, 32: sensor B preprocessing,
33: image recognition process, 34: movement control process, 35: fixed data A,
36: fixed data B, 37: selector A, 38: selector B,
391: sensor A abnormality detection process, 392: sensor B abnormality detection process,
4: actuator group,
5: person A, 6: training data, 61: normal-time input data, 62: output data,
63: sensor A preprocessing result, 64: sensor B preprocessing result,
65: sensor-A-abnormal input data, 66: sensor-B-abnormal input data

Claims (16)

1. A control system comprising: a plurality of peripheral recognition sensors; and a control device having a data processing unit that learns from and processes input information from the plurality of peripheral recognition sensors, wherein the data processing unit performs learning using sensor-abnormality data, and the control system comprises means for outputting, to the data processing unit, the sensor-abnormality data used in the learning when a peripheral recognition sensor is abnormal.
2. The control system according to claim 1, wherein the sensor-abnormality data is an invalid value.
3. The control system according to claim 1 or 2, wherein the control device comprises: a preprocessing unit that preprocesses the input from the sensor; a storage unit that stores the sensor-abnormality data as a fixed value; and a selector that switches the output to the data processing unit to the fixed value when an abnormality occurs in the sensor.
4. The control system according to claim 3, wherein the plurality of peripheral recognition sensors comprise means for fixing a sensor's output at the time of an abnormality.
5. The control system according to claim 4, wherein the sensor's output is fixed to 0.
6. The control system according to claim 4, wherein the sensor's output is fixed to 1.
7. The control system according to claim 1, comprising means for fixing the sensor's output to an invalid value.
8. The control system according to claim 1, comprising means for stopping the sensor's output.
9. The control system according to any one of claims 1 to 8, wherein the sensor comprises means for detecting a failure from its own sensing data.
10. The control system according to any one of claims 1 to 8, comprising means for acquiring the sensor's output and detecting a sensor failure.
11. The control system according to any one of claims 1 to 8, comprising means for inspecting the input sensor value within the control device and detecting a sensor failure.
12. A control device to which signals are input from a plurality of peripheral recognition sensors, comprising: a data processing unit that learns from and processes the input information from the plurality of peripheral recognition sensors, wherein the data processing unit performs learning using sensor-abnormality data; and an output unit that outputs, to the data processing unit, the sensor-abnormality data used in the learning when a peripheral recognition sensor is abnormal.
13. The control device according to claim 12, wherein the output unit comprises a selector and a storage unit that stores the data.
14. The control device according to claim 12 or 13, wherein the data is an invalid value.
15. A peripheral recognition sensor that outputs data to a control device, wherein, at the time of an abnormality, the sensor outputs, as a fixed value, the abnormality-time data that the control device used in learning.
16. The peripheral recognition sensor according to claim 15, wherein the fixed value is an invalid value.
JP2019076767A 2019-04-15 2019-04-15 Image processing method using machine learning and electronic control device using it Pending JP2020173744A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2019076767A JP2020173744A (en) 2019-04-15 2019-04-15 Image processing method using machine learning and electronic control device using it


Publications (1)

Publication Number Publication Date
JP2020173744A 2020-10-22

Family

ID=72831665



Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021123234A (en) * 2020-02-05 2021-08-30 Mazda Motor Corp Vehicle control system
US11403069B2 (en) 2017-07-24 2022-08-02 Tesla, Inc. Accelerated mathematical engine
US11409692B2 (en) 2017-07-24 2022-08-09 Tesla, Inc. Vector computational unit
US11487288B2 (en) 2017-03-23 2022-11-01 Tesla, Inc. Data synthesis for autonomous control systems
US11537811B2 (en) 2018-12-04 2022-12-27 Tesla, Inc. Enhanced object detection for autonomous vehicles based on field view
US11562231B2 (en) 2018-09-03 2023-01-24 Tesla, Inc. Neural networks for embedded devices
US11561791B2 (en) 2018-02-01 2023-01-24 Tesla, Inc. Vector computational unit receiving data elements in parallel from a last row of a computational array
US11567514B2 (en) 2019-02-11 2023-01-31 Tesla, Inc. Autonomous and user controlled vehicle summon to a target
US11610117B2 (en) 2018-12-27 2023-03-21 Tesla, Inc. System and method for adapting a neural network model on a hardware platform
US11636333B2 (en) 2018-07-26 2023-04-25 Tesla, Inc. Optimizing neural network structures for embedded systems
US11665108B2 (en) 2018-10-25 2023-05-30 Tesla, Inc. QoS manager for system on a chip communications
US11681649B2 (en) 2017-07-24 2023-06-20 Tesla, Inc. Computational array microprocessor system using non-consecutive data formatting
US11734562B2 (en) 2018-06-20 2023-08-22 Tesla, Inc. Data pipeline and deep learning system for autonomous driving
US11748620B2 (en) 2019-02-01 2023-09-05 Tesla, Inc. Generating ground truth for machine learning from time series elements
US11790664B2 (en) 2019-02-19 2023-10-17 Tesla, Inc. Estimating object properties using visual image data
US11816585B2 (en) 2018-12-03 2023-11-14 Tesla, Inc. Machine learning models operating at different frequencies for autonomous vehicles
US11841434B2 (en) 2018-07-20 2023-12-12 Tesla, Inc. Annotation cross-labeling for autonomous control systems
US11893774B2 (en) 2018-10-11 2024-02-06 Tesla, Inc. Systems and methods for training machine models with augmented data
US11893393B2 (en) 2017-07-24 2024-02-06 Tesla, Inc. Computational array microprocessor system with hardware arbiter managing memory requests
JP7446921B2 (en) 2020-05-29 2024-03-11 Toshiba Corp Moving object, distance measurement method, and distance measurement program
US12014553B2 (en) 2019-02-01 2024-06-18 Tesla, Inc. Predicting three-dimensional features for autonomous driving

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11487288B2 (en) 2017-03-23 2022-11-01 Tesla, Inc. Data synthesis for autonomous control systems
US12020476B2 (en) 2017-03-23 2024-06-25 Tesla, Inc. Data synthesis for autonomous control systems
US11681649B2 (en) 2017-07-24 2023-06-20 Tesla, Inc. Computational array microprocessor system using non-consecutive data formatting
US11403069B2 (en) 2017-07-24 2022-08-02 Tesla, Inc. Accelerated mathematical engine
US11409692B2 (en) 2017-07-24 2022-08-09 Tesla, Inc. Vector computational unit
US11893393B2 (en) 2017-07-24 2024-02-06 Tesla, Inc. Computational array microprocessor system with hardware arbiter managing memory requests
US11561791B2 (en) 2018-02-01 2023-01-24 Tesla, Inc. Vector computational unit receiving data elements in parallel from a last row of a computational array
US11797304B2 (en) 2018-02-01 2023-10-24 Tesla, Inc. Instruction set architecture for a vector computational unit
US11734562B2 (en) 2018-06-20 2023-08-22 Tesla, Inc. Data pipeline and deep learning system for autonomous driving
US11841434B2 (en) 2018-07-20 2023-12-12 Tesla, Inc. Annotation cross-labeling for autonomous control systems
US11636333B2 (en) 2018-07-26 2023-04-25 Tesla, Inc. Optimizing neural network structures for embedded systems
US11562231B2 (en) 2018-09-03 2023-01-24 Tesla, Inc. Neural networks for embedded devices
US11983630B2 (en) 2018-09-03 2024-05-14 Tesla, Inc. Neural networks for embedded devices
US11893774B2 (en) 2018-10-11 2024-02-06 Tesla, Inc. Systems and methods for training machine models with augmented data
US11665108B2 (en) 2018-10-25 2023-05-30 Tesla, Inc. QoS manager for system on a chip communications
US11816585B2 (en) 2018-12-03 2023-11-14 Tesla, Inc. Machine learning models operating at different frequencies for autonomous vehicles
US11537811B2 (en) 2018-12-04 2022-12-27 Tesla, Inc. Enhanced object detection for autonomous vehicles based on field view
US11908171B2 (en) 2018-12-04 2024-02-20 Tesla, Inc. Enhanced object detection for autonomous vehicles based on field view
US11610117B2 (en) 2018-12-27 2023-03-21 Tesla, Inc. System and method for adapting a neural network model on a hardware platform
US11748620B2 (en) 2019-02-01 2023-09-05 Tesla, Inc. Generating ground truth for machine learning from time series elements
US12014553B2 (en) 2019-02-01 2024-06-18 Tesla, Inc. Predicting three-dimensional features for autonomous driving
US11567514B2 (en) 2019-02-11 2023-01-31 Tesla, Inc. Autonomous and user controlled vehicle summon to a target
US11790664B2 (en) 2019-02-19 2023-10-17 Tesla, Inc. Estimating object properties using visual image data
JP2021123234A (en) * 2020-02-05 2021-08-30 Mazda Motor Corp Vehicle control system
JP7446921B2 (en) 2020-05-29 2024-03-11 Toshiba Corp Moving object, distance measurement method, and distance measurement program

Similar Documents

Publication Publication Date Title
JP2020173744A (en) Image processing method using machine learning and electronic control device using it
JP6820981B2 (en) Autonomous driving system, vehicle control method and equipment
JP2020524353A (en) Device and method for controlling a vehicle module in response to a status signal
JP7089026B2 (en) Devices and methods for controlling vehicle modules
Tarapore et al. Fault detection in a swarm of physical robots based on behavioral outlier detection
WO2006121483A2 (en) Generic software fault mitigation
Yang et al. Fault-tolerant system design of an autonomous underwater vehicle ODIN: An experimental study
US20230367296A1 (en) Industrial system, abnormality detection system, and abnormality detection method
CN111279358A (en) Method and system for operating a vehicle
KR20210061875A (en) Method for detecting defects in the 3d lidar sensor using point cloud data
JP2024096877A Safety system and method for use in robotic operation
KR20220068799A (en) System for detecting error of automation equipment and method thereof
Ji et al. Supervisory fault adaptive control of a mobile robot and its application in sensor-fault accommodation
CN115114015A (en) Distributed system and diagnostic method
JPWO2019131003A1 (en) Vehicle control device and electronic control system
JP7078174B2 (en) Robot controls, methods, and programs
Buchholz et al. Towards adaptive worker assistance in monitoring tasks
Soika Grid based fault detection and calibration of sensors on mobile robots
JP2007233573A (en) Electronic controller
US20220105633A1 (en) Integrity and safety checking for robots
EP4064055A1 (en) Functional safety with root of safety and chain of safety
JP4328969B2 (en) Diagnosis method of control device
Christensen et al. Exogenous fault detection in a collective robotic task
JP4577607B2 (en) Robot control device and robot system
CN116749196B (en) Multi-axis mechanical arm collision detection system and method and mechanical arm