TWI770420B - Vehicle accident identification method and device, electronic equipment


Info

Publication number
TWI770420B
Authority
TW
Taiwan
Prior art keywords
shooting
image data
vehicle accident
accident
orientation
Prior art date
Application number
TW108133384A
Other languages
Chinese (zh)
Other versions
TW202034270A (en)
Inventor
周凡
Original Assignee
開曼群島商創新先進技術有限公司
Priority date
Filing date
Publication date
Application filed by 開曼群島商創新先進技術有限公司
Publication of TW202034270A
Application granted
Publication of TWI770420B


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 40/00 - Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/08 - Insurance
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00 - Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N 1/00127 - Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture
    • H04N 1/00204 - Connection or combination of a still picture apparatus with a digital computer or a digital computer system, e.g. an internet server
    • H04N 1/00244 - Connection or combination of a still picture apparatus with a server, e.g. an internet server
    • H04N 1/00249 - Connection or combination of a still picture apparatus with a photographic apparatus, e.g. a photographic printer or a projector
    • H04N 1/00251 - Connection or combination of a still picture apparatus with an apparatus for taking photographic images, e.g. a camera
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 - Control of cameras or camera modules
    • H04N 23/63 - Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/633 - Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H04N 23/695 - Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H04N 5/00 - Details of television systems
    • H04N 5/76 - Television signal recording

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Development Economics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Technology Law (AREA)
  • General Business, Economics & Management (AREA)
  • Traffic Control Systems (AREA)
  • Burglar Alarm Systems (AREA)
  • Time Recorders, Drive Recorders, Access Control (AREA)

Abstract

One or more embodiments of this specification provide a vehicle accident identification method and device, and an electronic device. The method may include: acquiring image data of a vehicle accident scene; and determining an identification result, the identification result being obtained from the output produced by inputting the image data into an accident identification model, where the accident identification model is trained on image data of historical vehicle accident scenes and accident identification information of those historical scenes.

Description

Vehicle accident identification method and device, and electronic equipment

One or more embodiments of this specification relate to the field of communication technologies, and in particular to a vehicle accident identification method and device, and an electronic device.

After a vehicle accident, an insurance company's loss assessor and the traffic police usually need to inspect the scene in person and verify the account of the accident given by the parties involved in order to assess the accident. In the related art, vehicle accident identification mainly relies on instrument measurement, video replay, and manual judgment.

In view of this, one or more embodiments of this specification provide a vehicle accident identification method and device, and an electronic device. To achieve the above purpose, one or more embodiments of this specification provide the following technical solutions.

According to a first aspect of one or more embodiments of this specification, a vehicle accident identification method is proposed, including: acquiring image data of a vehicle accident scene; and determining an identification result, the identification result being obtained from the output produced by inputting the image data into an accident identification model, where the accident identification model is trained on image data of historical vehicle accident scenes and accident identification information of those historical scenes.

According to a second aspect of one or more embodiments of this specification, a vehicle accident identification device is proposed, including: an image acquisition unit, which acquires image data of a vehicle accident scene; and a result determination unit, which determines an identification result, the identification result being obtained from the output produced by inputting the image data into an accident identification model, where the accident identification model is trained on image data of historical vehicle accident scenes and accident identification information of those historical scenes.

According to a third aspect of one or more embodiments of this specification, an electronic device is proposed, including: a processor; and a memory for storing processor-executable instructions; where the processor implements the vehicle accident identification method of any of the above embodiments by running the executable instructions.

Exemplary embodiments are described in detail here, with examples illustrated in the drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with one or more embodiments of this specification; rather, they are merely examples of devices and methods consistent with some aspects of this specification as detailed in the appended claims.

It should be noted that, in other embodiments, the steps of the corresponding method are not necessarily performed in the order shown and described in this specification. In some other embodiments, a method may include more or fewer steps than described here. In addition, a single step described in this specification may be decomposed into multiple steps in other embodiments, and multiple steps described here may be combined into a single step in other embodiments.

FIG. 1 is a schematic architecture diagram of a vehicle accident identification system provided by an exemplary embodiment. As shown in FIG. 1, the system may include a server 11, a network 12, and several image acquisition devices, such as mobile phones 13 and 14 and driving recorders 15 and 16.

The server 11 may be a physical server containing an independent host, or a virtual server hosted by a cluster of hosts. During operation, the server 11 may run the server-side program of an application to implement the application's business functions. In the technical solutions of one or more embodiments of this specification, the server 11 acts as the server side and cooperates with the clients running on the mobile phones 13-14 and driving recorders 15-16 to implement the vehicle accident identification solution.

The mobile phones 13-14 and driving recorders 15-16 are only some of the image acquisition devices a user may use. In practice, the user may also use image acquisition devices such as tablets, notebook computers, personal digital assistants (PDAs), and wearable devices (such as smart glasses and smart watches), which one or more embodiments of this specification do not limit. During operation, an image acquisition device may run the client-side program of an application to implement the application's business functions; for example, the image acquisition device may interact with the server 11 as a client to implement the vehicle accident identification solution of this specification.

The network 12 through which the mobile phones 13-14 and driving recorders 15-16 interact with the server 11 may include multiple types of wired or wireless networks. In an embodiment, the network 12 may include the Public Switched Telephone Network (PSTN) and the Internet.

The vehicle accident identification solution of this specification is described below for the client side and the server side respectively.

Please refer to FIG. 2, a flowchart of a vehicle accident identification method provided by an exemplary embodiment. As shown in FIG. 2, the method is applied to a client and may include the following steps.

Step 202: acquire image data of a vehicle accident scene.

Step 204: determine an identification result, the identification result being obtained from the output produced by inputting the image data into an accident identification model; the accident identification model is trained on image data of historical vehicle accident scenes and accident identification information of those historical scenes.

In an embodiment, after a vehicle accident occurs, a user (for example, the driver involved, a traffic police officer, or an insurance company's loss assessor) may use a client (an image acquisition device equipped with a camera module that can communicate with the server, such as a mobile phone or driving recorder) to capture image data of the accident scene (such as photos or videos). The captured image data can then be used as input to the accident identification model, which outputs the identification result. Identifying vehicle accidents with a machine learning model in this way lets the user perform end-to-end accident identification directly from photos and videos of the scene, which effectively improves identification efficiency and shortens the identification cycle. At the same time, the solution supports remote and automatic identification, greatly reducing identification cost. For example, after an accident, the driver only needs to collect image data of the scene through the client to obtain an identification result, without a loss assessor having to travel to the scene, and the driver and traffic police can settle the accident as quickly as possible.

In an embodiment, the accident identification model may be deployed on the client side, in which case the client directly inputs the image data into the accident identification model and takes the model's output as the identification result.

In an embodiment, the accident identification model may be deployed on the server side, in which case the client sends the image data to the server so that the server inputs the image data into the accident identification model, and the client takes the output returned by the server as the identification result.

In an embodiment, the image data of the accident scene is the basis for identifying the accident (that is, the input to the accident identification model), and this image data must be captured by the user with the client. The user therefore needs to be guided to capture image data that accurately reflects the accident scene. To this end, guidance information can be displayed in the shooting interface of the image acquisition device (the client) to guide the user to capture correct image data.

In one case, a standard relative positional relationship between the accident scene and the image acquisition device may be defined in advance; in other words, when the image acquisition device maintains the standard relative positional relationship with the accident scene, it can capture image data that correctly reflects the scene (understood as covering all details of the scene). The user can therefore be guided to move the image acquisition device according to the relative positional relationship between the scene and the device. As an exemplary embodiment, the initial relative positional relationship between the accident scene and the image acquisition device may first be determined from image data (image data of the scene acquired by the device; for example, the first photo the user takes of the scene). The movement state of the device is then determined, and, based on the movement state and the initial relative positional relationship, the real-time relative positional relationship between the moved device and the accident scene is determined. First guidance information can then be displayed in the device's shooting interface according to the real-time relative positional relationship, to guide the user to move the device to a position matching the standard relative positional relationship. Note that once the initial relative positional relationship has been determined, the user no longer needs to be guided from images captured by the device; during movement, the guidance can be driven entirely by the device's movement state, without relying on image data captured while the device is moving.
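As a rough illustration of how the movement state alone can drive the guidance, the sketch below dead-reckons the device's planar offset from IMU samples and compares it with the standard position. Everything here is an assumption for illustration (the class name, the simple double integration without gravity compensation, the tolerance); the specification does not prescribe a particular tracking algorithm.

```python
import numpy as np

class RelativePositionTracker:
    """Dead-reckons the device's position relative to a static accident scene.

    Assumes the scene is stationary, that `initial_offset` (metres, scene
    frame) was estimated from the first photo, and that IMU samples arrive
    at a fixed rate. Illustrative planar integration only.
    """

    def __init__(self, initial_offset, dt=0.01):
        self.offset = np.asarray(initial_offset, dtype=float)  # [x, y] metres
        self.velocity = np.zeros(2)
        self.heading = 0.0  # radians, relative to the initial shooting direction
        self.dt = dt

    def update(self, accel_xy, gyro_z):
        """Integrate one IMU sample (device-frame planar accel, yaw rate)."""
        self.heading += gyro_z * self.dt
        # Rotate the device-frame acceleration into the scene frame.
        c, s = np.cos(self.heading), np.sin(self.heading)
        accel_scene = np.array([c * accel_xy[0] - s * accel_xy[1],
                                s * accel_xy[0] + c * accel_xy[1]])
        self.velocity += accel_scene * self.dt
        self.offset += self.velocity * self.dt
        return self.offset

    def guidance(self, standard_offset, tolerance=0.2):
        """Return a guidance hint comparing the live offset to the standard one."""
        delta = np.asarray(standard_offset) - self.offset
        distance = np.linalg.norm(delta)
        if distance <= tolerance:
            return "Position OK, you can shoot now"
        return f"Move {distance:.1f} m towards the standard shooting position"
```

Note that no image data enters `update` or `guidance`: once the first photo has fixed the initial offset, the interface can be refreshed from sensor samples alone, which is exactly the property the paragraph above relies on.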
In another case, a standard shooting orientation of the image acquisition device relative to the accident scene may be defined in advance; in other words, when the device is held at the standard shooting orientation, it can capture image data that correctly reflects the scene. The user can therefore be guided to move the device according to the standard shooting orientation. As an exemplary embodiment, the device's current shooting orientation relative to the accident scene may first be obtained (for example, the orientation when the user first photographs the scene), and it is then determined whether this orientation matches the standard shooting orientation. When it does not, second guidance information is displayed in the device's shooting interface to guide the user to move the device to the standard shooting orientation.

In an embodiment, obtaining the device's shooting orientation relative to the accident scene (including, for example, parameters such as the distance and angle between the device and the scene) can be done with a machine learning model. For example, real-time image data captured by the device at the scene may be obtained and input into a shooting orientation determination model (trained on the correspondence between image data of sample accident vehicles photographed at preset shooting orientations and those preset orientations), and the model's output is taken as the device's shooting orientation relative to the accident scene. Similarly, the determination of the initial relative positional relationship described above can also be performed by a machine learning model.

In an embodiment, when displaying the second guidance information, the shooting interface may display, in sequence according to a predefined shooting flow, second guidance information that guides the user to move the device to each standard shooting orientation; a sketch of such a flow follows below. The shooting flow includes the standard shooting orientation for each subject at the accident scene, as well as the order in which the subjects are photographed.

In an embodiment, the parameters of the identification result may include at least one of: collision angle, driving speed before the collision, damage location, and damage degree.
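The sketch below shows how such a predefined shooting flow might drive the second guidance information, assuming a trained orientation-determination model is available as a callable. All names are hypothetical; the specification fixes only that the flow lists standard orientations and an order.

```python
# Hypothetical flow; the specification gives these distances only as examples.
SHOOTING_FLOW = [
    "3 m in front of the vehicle",
    "4 m from the left side",
    "4 m from the right side",
    "3 m behind the vehicle",
    "50 cm from the damaged part",
]

def guide_through_flow(capture_frame, predict_orientation, show_guidance):
    """Walk the user through each standard shooting orientation in order.

    `predict_orientation(frame)` stands in for the trained shooting
    orientation determination model; `show_guidance(text)` renders the
    second guidance information in the shooting interface. Both are
    assumed callables supplied by the surrounding app.
    """
    for target in SHOOTING_FLOW:
        # Keep prompting until the live frame is classified as the target
        # orientation (a real client would also allow the user to skip).
        while True:
            frame = capture_frame()
            if predict_orientation(frame) == target:
                show_guidance(f"Orientation OK, shoot now: {target}")
                break
            show_guidance(f"Please move the device to: {target}")
```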
To facilitate understanding, the vehicle accident identification solution of this specification is described in detail below, taking interaction between a mobile phone and a server as an example and with reference to the drawings.

Please refer to FIG. 3, an interaction diagram of a vehicle accident identification method provided by an exemplary embodiment. As shown in FIG. 3, the interaction may include the following steps.

Step 302: the mobile phone captures image data of the vehicle accident scene.

In an embodiment, after a vehicle accident occurs, the user (for example, the driver involved, a traffic police officer, or an insurance company's loss assessor) may use a mobile phone to capture image data of the scene: for example, photographing the vehicles that collided, the specific damaged parts, the license plates, and so on.

Step 304: the mobile phone displays guidance information in the shooting interface.

Step 306: the user moves the mobile phone to the standard position and captures image data.

In an embodiment, the image data the phone captures at the scene will serve as the basis for identifying the accident (that is, as the input to the accident identification model), so the user needs to be guided to capture images that accurately reflect the scene, in order to improve identification accuracy. To this end, guidance information (the first guidance information or the second guidance information) can be displayed in the phone's shooting interface.

In an embodiment, the standard relative positional relationship between the accident scene and the image acquisition device (a mobile phone in this embodiment) may be defined in advance; when the phone maintains this standard relative positional relationship with the scene, it can capture image data that correctly reflects the scene (understood as covering all details of the scene). For example, the following standard relative positional relationships may be defined: 3 meters directly in front of the vehicle, 4 meters from the vehicle's left side, 4 meters from its right side, 3 meters behind it, 50 centimeters from the damaged part, and so on.

Based on this definition, the first guidance information can be displayed in the shooting interface to guide the user to move the phone so that the relative positional relationship between the phone and the accident vehicle matches the standard one (that is, to move the phone to the standard position). As an exemplary embodiment, the phone may determine the initial relative positional relationship between the phone and the scene from the image data captured in step 302 (for example, the first photo of the scene). This initial relationship may be determined by a relative-position determination model, trained on sample images together with the distance and angle to the subject when each sample was taken (the relative position being described by distance and angle). Alternatively, the subject in the image can be recognized, its feature points extracted, and the distance and angle between the phone and the subject obtained by geometric computation. After the initial relative positional relationship is determined, the phone's movement state is tracked to determine the real-time relative positional relationship between the moved phone and the scene. The movement state can be computed from data collected by the phone's sensors, such as the gyroscope and accelerometer; since the accident scene is generally stationary, once the phone's movement is known, the relative positional relationship after movement (the real-time relative positional relationship) can be derived from the initial relationship and the movement. Based on the difference between the real-time relative positional relationship and the standard one, the first guidance information is displayed in the shooting interface to guide the user to move the phone to a matching position. As noted above, once the initial relationship is determined, the guidance no longer depends on images captured while the phone is moving.

For example, as shown in FIG. 4A, when the user photographs the left side of the accident vehicle 41 (the vehicle that collided at the scene), suppose the distance between the phone and the vehicle 41 is 5 meters, while the standard relative positional relationship for that shooting direction (the angle between the phone and the vehicle 41) specifies 4 meters. The phone can then display guidance information 42 in the shooting interface 4, "Please move 1 meter closer to shoot", to guide the user to move 1 meter closer to the accident vehicle 41 along that shooting direction. One way to bootstrap the initial distance estimate mentioned above is sketched below.
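The geometric alternative just mentioned (recognizing the subject and computing distance from its feature points) can be illustrated with the pinhole-camera relation, under the assumption that the subject's physical width is known; the function name and the numbers below are illustrative only:

```python
def estimate_distance(real_width_m, pixel_width, focal_length_px):
    """Pinhole-camera distance estimate: distance = f * W / w.

    real_width_m:    assumed physical width of the detected subject
                     (e.g. roughly 1.8 m for a car body).
    pixel_width:     width of the subject's bounding box in the image, pixels.
    focal_length_px: camera focal length expressed in pixels.
    """
    return focal_length_px * real_width_m / pixel_width

# e.g. a 1.8 m wide car spanning 1080 px with a 3000 px focal length:
# estimate_distance(1.8, 1080, 3000) == 5.0  (metres)
```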
In an embodiment, the standard shooting orientations of the phone relative to the accident scene may be defined in advance; when the phone is held at a standard shooting orientation, it can capture image data that correctly reflects the scene. For example, the following standard shooting orientations may be defined (again in terms of distance and angle): shooting from 3 meters directly in front of the vehicle, from 4 meters to its left, from 4 meters to its right, from 3 meters behind it, from 50 centimeters from the damaged part, and so on.

Based on this definition, the second guidance information can be displayed in the shooting interface to guide the user to move the phone so that its shooting orientation toward the accident vehicle (or the damaged part) matches the standard one. As an exemplary embodiment, the phone's current shooting orientation toward the scene is obtained first (for example, the orientation when the user first photographs the scene) and checked against the standard shooting orientation. When it does not match, the second guidance information is displayed to guide the user to move the phone to the standard shooting orientation (that is, to the standard position).

In an embodiment, the phone may input the image data captured in step 302 (for example, the first photo of the scene) into the shooting orientation determination model and take the model's output as the phone's current shooting orientation toward the scene. The model can be trained on the correspondence between image data of sample accident vehicles photographed at preset shooting orientations (which may include multiple different orientations) and those preset orientations. When displaying the second guidance information, the interface may display, in sequence according to the predefined shooting flow, guidance that leads the user to move the phone to each standard shooting orientation; the shooting flow includes the standard orientation for each subject at the scene and the order in which the subjects are photographed.

For example, as shown in FIG. 4B, suppose the shooting flow photographs the accident vehicle first from 4 meters to its left and then from 4 meters to its right. After the user has photographed the accident vehicle 41 from 4 meters to its left, the shooting interface can display guidance information 43, "Please shoot the right side of the accident vehicle from 4 meters", together with an arrow pointing to the right side of the vehicle 41, to guide the user to shoot from 4 meters to the right of the accident vehicle 41.
Step 308: the mobile phone sends the image data captured at the standard positions to the server.

Step 310: the server inputs the received image data into the accident identification model.

In an embodiment, image data of historical vehicle accident scenes may be collected in advance and annotated with accident identification information obtained through reliable channels (for example, identification information produced by a loss assessor manually analyzing the images). The annotated images are then used as sample data to train a machine learning model, yielding the accident identification model. The parameters of the accident identification information may include collision angle, driving speed before the collision, damage location, damage degree, and so on; algorithms such as logistic regression, decision trees, neural networks, and support vector machines may be used to train on the sample data. Of course, one or more embodiments of this specification limit neither the parameters of the accident identification information nor the algorithm used to train the model. As noted above, identifying accidents with a machine learning model enables end-to-end identification directly from photos and videos of the scene, improving efficiency, shortening the identification cycle, and supporting remote and automatic identification at much lower cost.

For example, a batch of historical accident cases can be collected, together with the vehicle parts involved in each collision, the vehicle's speed relative to the collision object at the moment of collision (hereafter the collision speed), photos of the collision area, and so on. From these data, a group of samples can be constructed for each collision part, with the photos as input and the collision speed as the label, the collision speed being rounded to an integer. Optionally, the range of collision speeds can be divided at a chosen precision. For example, with a range of 10 km/h to 200 km/h and a precision of 1 km/h, the collision speed can be divided into 191 speed segments from 10 km/h to 200 km/h. With this division, predicting the collision speed becomes a classification problem: given a group of photos of a vehicle accident, the accident identification model predicts the speed segment to which the accident's collision speed belongs.
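A minimal sketch of this discretization, mapping rounded collision speeds in the 10-200 km/h range onto 191 class labels (the function names are illustrative):

```python
def speed_to_bin(speed_kmh, lo=10, hi=200):
    """Map a collision speed to one of 191 class labels (10..200 km/h)."""
    s = round(speed_kmh)
    if not lo <= s <= hi:
        raise ValueError(f"speed {s} km/h outside the modelled range {lo}-{hi}")
    return s - lo  # bin 0 -> 10 km/h, bin 190 -> 200 km/h

def bin_to_speed(bin_idx, lo=10):
    """Inverse mapping from class label back to km/h."""
    return lo + bin_idx
```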
For the training process, a CNN (Convolutional Neural Network) can be used to train on the sample data and obtain the accident identification model. As shown in FIG. 4C, a CNN may include convolutional layers, pooling layers, and fully connected layers. The convolutional layers compute over the input photo to extract feature vectors; a pooling layer usually follows a convolutional layer, reducing the dimensionality of the feature vectors to simplify the network's computation and condensing the convolutional output to help avoid overfitting; the fully connected layer maps the features the network has learned into the sample label space, for example converting the two-dimensional features output by the pooling layer into a one-dimensional vector. Since the number of photos of an accident varies, and the visual features of the photos are correlated along the temporal dimension, the sample data above (a group of photos of the same accident, labeled with the collision speed) can be fed as a sequence to train the network. For example, a CNN extracts the visual feature vector of each photo, and the vectors are then input into an LSTM (Long Short-Term Memory network), which processes the feature vectors of all the photos (FIG. 4C shows 4 photos each passed through the CNN) and produces a final classification vector representing the predicted probability of each possible collision speed.
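A minimal PyTorch sketch of the CNN-plus-LSTM structure of FIG. 4C, under assumed layer sizes; the patent fixes only the overall structure (per-photo CNN features fed to an LSTM, producing a classification vector over the 191 speed segments), so the particular small CNN below is an illustration, not the disclosed network.

```python
import torch
import torch.nn as nn

class CollisionSpeedClassifier(nn.Module):
    """Per-photo CNN features -> LSTM over the photo sequence -> 191 speed bins."""

    def __init__(self, num_bins=191, feat_dim=256, hidden_dim=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # pooling reduces dimensionality
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, feat_dim),      # fully connected: features -> label space
        )
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_bins)

    def forward(self, photos):
        # photos: (batch, num_photos, 3, H, W); num_photos may vary per accident.
        b, n, c, h, w = photos.shape
        feats = self.cnn(photos.reshape(b * n, c, h, w)).reshape(b, n, -1)
        _, (h_n, _) = self.lstm(feats)            # last hidden state summarises the sequence
        return self.head(h_n[-1])                 # logits over the 191 speed bins

# Usage: four photos of one accident, as in FIG. 4C.
# logits = CollisionSpeedClassifier()(torch.randn(1, 4, 3, 224, 224))  # (1, 191)
```

Training such a model with cross-entropy over the 191 bins matches the classification framing described above; the variable photo count is handled naturally by the LSTM.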
Step 312: the server returns the output of the accident identification model to the mobile phone.

In an embodiment, the accident identification model may instead be deployed on the phone side; in other words, after the phone captures image data at the standard positions, it inputs the captured images directly into the accident identification model to obtain the identification result (the accident identification information output by the model), without sending the images to the server. Further, the server may periodically update the sample data and retrain the accident identification model to improve accuracy; when the model is deployed on the phone side, the server may periodically push the updated model to the phone. A sketch of this variant follows.
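A sketch of this client-side deployment, assuming the server pushes a retrained model file and the phone runs inference locally. The function names, the local path, and the use of TorchScript are all assumptions for illustration.

```python
import torch

MODEL_PATH = "accident_model.pt"  # hypothetical local path for the pushed model

def refresh_model_from_server(fetch_latest):
    """`fetch_latest()` stands in for an assumed client API returning the
    newest model bytes pushed by the server after periodic retraining."""
    with open(MODEL_PATH, "wb") as f:
        f.write(fetch_latest())

def identify_locally(photos):
    """Run the accident identification model on-device, without contacting
    the server. photos: tensor of shape (1, num_photos, 3, H, W)."""
    model = torch.jit.load(MODEL_PATH)  # assumes the pushed model is TorchScript
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(photos), dim=-1)
    return probs  # probability per speed bin, displayed as the result
```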
Step 314: the mobile phone displays the received output as the identification result for the current vehicle accident scene.
In an embodiment, continuing the example above, the output of the accident identification model is a probability for each possible collision speed of the current accident. For example, the collision speed with the highest probability in the output may be taken as the identification result, or the collision speed with the highest probability may be taken as the result only if that probability also exceeds a preset probability threshold.

For example, assume the output is as shown in Table 1:

[Table 1 appears in the original as an image (Figure 02_image001); from the surrounding text, it lists candidate collision speeds with their predicted probabilities, the highest being 110 km/h with a probability above 75%.]
In one case, the collision speed with the highest probability in the output, 110 km/h, is taken as the identification result. In another case, assuming a preset probability threshold of 75%, since the probability of the most probable collision speed, 110 km/h, exceeds the 75% threshold, 110 km/h can be taken as the identification result.
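A small sketch of the two selection rules just illustrated (plain arg-max, or arg-max gated by a probability threshold such as 75%); the function name and the NumPy dependency are assumptions:

```python
import numpy as np

def select_identification_result(probs, lo=10, threshold=None):
    """Pick the collision speed from the model's probability vector.

    probs: array of 191 probabilities, one per speed bin (10-200 km/h).
    With threshold=None, simply take the most probable speed; otherwise
    return it only if its probability exceeds the threshold (e.g. 0.75).
    """
    best = int(np.argmax(probs))
    if threshold is not None and probs[best] <= threshold:
        return None  # no sufficiently confident identification
    return lo + best  # km/h

# e.g. with a vector peaking at the 110 km/h bin with probability 0.8:
# select_identification_result(p, threshold=0.75) -> 110
```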
FIG. 5 is a schematic structural diagram of a device provided by an exemplary embodiment. Referring to FIG. 5, at the hardware level the device includes a processor 502, an internal bus 504, a network interface 506, an internal memory 508, and a non-volatile storage 510, and may of course also include hardware required by other services. The processor 502 reads the corresponding computer program from the non-volatile storage 510 into the internal memory 508 and runs it, forming the vehicle accident identification device at the logical level. Of course, besides a software implementation, one or more embodiments of this specification do not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution subject of the following processing flows is not limited to logical units and may also be hardware or logic devices.

Referring to FIG. 6, in a software implementation the vehicle accident identification device may include: an image acquisition unit 61, which acquires image data of a vehicle accident scene; and a result determination unit 62, which determines an identification result, the identification result being obtained from the output produced by inputting the image data into the accident identification model, the accident identification model being trained on image data of historical vehicle accident scenes and the accident identification information of those historical scenes.

Optionally, the result determination unit 62 is specifically configured to: input the image data into the accident identification model and take the model's output as the identification result; or send the image data to the server so that the server inputs the image data into the accident identification model, and take the output returned by the server as the identification result.

Optionally, the device further includes: an initial position determination unit 63, which determines the initial relative positional relationship between the accident scene and the image acquisition device from the image data; a movement state determination unit 64, which determines the movement state of the image acquisition device; a real-time position determination unit 65, which determines, based on the movement state and the initial relative positional relationship, the real-time relative positional relationship between the moved device and the accident scene; and a first display unit 66, which displays, according to the real-time relative positional relationship, first guidance information in the device's shooting interface to guide the user to move the device to a position matching the standard relative positional relationship.

Optionally, the device further includes: an orientation acquisition unit 67, which acquires the device's shooting orientation toward the accident scene; an orientation determination unit 68, which determines whether the shooting orientation matches the standard shooting orientation; and a second display unit 69, which, when the shooting orientation does not match the standard shooting orientation, displays second guidance information in the device's shooting interface to guide the user to move the device to the standard shooting orientation.

Optionally, the orientation acquisition unit 67 is specifically configured to: acquire real-time image data captured by the device at the accident scene; input the real-time image data into the shooting orientation determination model, the model being trained on the correspondence between image data of sample accident vehicles photographed at preset shooting orientations and those preset orientations; and take the model's output as the device's shooting orientation toward the accident scene.

Optionally, the second display unit 69 is specifically configured to display, in sequence according to a predefined shooting flow, second guidance information in the shooting interface that guides the user to move the device to each standard shooting orientation; the shooting flow includes the standard shooting orientation for each subject at the accident scene and the order in which the subjects are photographed.

Optionally, the parameters of the identification result include at least one of: collision angle, driving speed before the collision, damage location, and damage degree.
Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM) , Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), Flash Memory or other internal storage technology, Compact Disc Read Only (CD-ROM), Digital Versatile Disc ( DVD) or other optical storage, magnetic cassette tape, magnetic disk storage, quantum storage, graphene-based storage media or other magnetic storage devices or any other non-transmission media that can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media does not include transitory computer-readable media, such as modulated data signals and carrier waves. It should also be noted that the terms "comprising", "comprising" or any other variation thereof are intended to encompass a non-exclusive inclusion such that a process, method, article or device comprising a series of elements includes not only those elements, but also Other elements not expressly listed, or which are inherent to such a process, method, article of manufacture, or apparatus are also included. Without further limitation, an element qualified by the phrase "comprising a..." does not preclude the presence of additional identical elements in the process, method, article of manufacture, or device that includes the element. The foregoing describes specific embodiments of the present specification. Other embodiments are within the scope of the appended claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. Additionally, the processes depicted in the figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous. The terminology used in one or more embodiments of this specification is for the purpose of describing a particular embodiment only and is not intended to limit the one or more embodiments of this specification. As used in this specification and in one or more embodiments and the appended claims, the singular forms "a,""the," and "the" are intended to include the plural forms as well, unless the context clearly dictates otherwise. It will also be understood that the term "and/or" as used herein refers to and includes any and all possible combinations of one or more of the associated listed items. It should be understood that although the terms first, second, third, etc. may be used in this specification to describe various information, such information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, the first information may also be referred to as the second information, and similarly, the second information may also be referred to as the first information without departing from the scope of one or more embodiments of the present specification. Depending on the context, the word "if" as used herein can be interpreted as "at the time of" or "when" or "in response to determining." The above descriptions are only preferred embodiments of one or more embodiments of this specification, and are not intended to limit one or more embodiments of this specification. 
Any modifications, equivalent replacements, improvements, and the like made within the spirit and principles of one or more embodiments of this specification should be included within the protection scope of one or more embodiments of this specification.
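The guided shooting flow described in the embodiments above (standard orientations per photographed object, visited in a fixed order) can be pictured as a simple driver loop over an ordered list. The sketch below is a hypothetical illustration under assumed names and an assumed camera interface; it is not the disclosed implementation:

```python
# Illustrative sketch of a predefined shooting process: an ordered list of
# (subject, standard orientation) pairs walked in sequence. All names and
# the camera callbacks are assumptions for demonstration purposes.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ShootingStep:
    subject: str       # e.g. "accident vehicle"
    orientation: str   # one of the preset shooting orientations

SHOOTING_PROCESS = [
    ShootingStep("accident vehicle", "front-left"),
    ShootingStep("accident vehicle", "rear-right"),
    ShootingStep("damaged area", "front"),
    ShootingStep("overall scene", "rear"),
]

def run_guided_capture(read_frame: Callable[[], object],
                       classify: Callable[[object], str],
                       capture: Callable[[str], None],
                       show_guidance: Callable[[str], None]) -> None:
    """Walk the predefined process, showing second guidance information
    until the live orientation matches each step's standard orientation."""
    for step in SHOOTING_PROCESS:
        while True:
            current = classify(read_frame())
            if current == step.orientation:
                capture(step.subject)   # photograph this subject
                break
            show_guidance(f"Move to the {step.orientation} of "
                          f"{step.subject} (currently at {current}).")
```

Here `classify` would be backed by the shooting orientation determination model, and `show_guidance` by the second display unit rendering guidance in the shooting interface.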

11: Server
12: Network
13: Mobile phone
14: Mobile phone
15: Driving recorder
16: Driving recorder
202~204: Steps
302~314: Steps
4: Shooting interface
41: Accident vehicle
42: Guidance information
43: Guidance information
502: Processor
504: Internal bus
506: Network interface
508: Internal memory
510: Non-volatile memory
61: Image acquisition unit
62: Result determination unit
63: Initial position determination unit
64: Movement state determination unit
65: Real-time position determination unit
66: First display unit
67: Orientation acquisition unit
68: Orientation determination unit
69: Second display unit

[Fig. 1] is a schematic architecture diagram of a vehicle accident identification system provided by an exemplary embodiment.
[Fig. 2] is a flowchart of a vehicle accident identification method provided by an exemplary embodiment.
[Fig. 3] is an interaction diagram of a vehicle accident identification method provided by an exemplary embodiment.
[Fig. 4A] is a schematic diagram of displaying guidance information provided by an exemplary embodiment.
[Fig. 4B] is another schematic diagram of displaying guidance information provided by an exemplary embodiment.
[Fig. 4C] is a schematic diagram of training an accident identification model provided by an exemplary embodiment.
[Fig. 5] is a schematic structural diagram of a device provided by an exemplary embodiment.
[Fig. 6] is a block diagram of a vehicle accident identification device provided by an exemplary embodiment.

Claims (13)

1. A vehicle accident identification method, comprising: acquiring image data of a vehicle accident scene, and determining an initial relative positional relationship between the vehicle accident scene and an image capture device according to the image data; determining a movement state of the image capture device from data collected by a gyroscope and an accelerometer of the image capture device, and determining, based on the movement state and the initial relative positional relationship, a real-time relative positional relationship between the image capture device and the vehicle accident scene after the image capture device moves; displaying, according to the real-time relative positional relationship and not according to image data captured by the image capture device while moving, first guidance information in a shooting interface of the image capture device to guide a user to move the image capture device to a position matching a standard relative positional relationship; and determining an identification result, the identification result being the output obtained by inputting the image data into an accident identification model, wherein the accident identification model is trained on image data of historical vehicle accident scenes and accident identification information of the historical vehicle accident scenes.

2. The method according to claim 1, wherein determining the identification result comprises: inputting the image data into the accident identification model and taking the output of the accident identification model as the identification result; or sending the image data to a server so that the server inputs the image data into the accident identification model, and taking the output returned by the server as the identification result.

3. The method according to claim 1, further comprising: acquiring a shooting orientation of the image capture device relative to the vehicle accident scene; determining whether the shooting orientation conforms to a standard shooting orientation; and when the shooting orientation does not conform to the standard shooting orientation, displaying second guidance information in the shooting interface of the image capture device to guide the user to move the image capture device to the standard shooting orientation.

4. The method according to claim 3, wherein acquiring the shooting orientation comprises: acquiring real-time image data obtained by shooting the vehicle accident scene with the image capture device; inputting the real-time image data into a shooting orientation determination model, the shooting orientation determination model being trained on correspondences between image data of sample accident vehicles captured at preset shooting orientations and the preset shooting orientations; and taking the output of the shooting orientation determination model as the shooting orientation of the image capture device relative to the vehicle accident scene.

5. The method according to claim 3, wherein displaying the second guidance information in the shooting interface of the image capture device comprises: sequentially displaying, in the shooting interface and according to a predefined shooting process, second guidance information that guides the user to move the image capture device to each standard shooting orientation, the shooting process comprising standard shooting orientations for each photographed object in the vehicle accident scene and an order in which the photographed objects are to be photographed.

6. The method according to claim 1, wherein parameters of the identification result comprise at least one of the following: a collision angle, a driving speed before the collision, a damage location, and a damage degree.

7. A vehicle accident identification device, comprising: an image acquisition unit that acquires image data of a vehicle accident scene; an initial position determination unit that determines an initial relative positional relationship between the vehicle accident scene and an image capture device according to the image data; a movement state determination unit that determines a movement state of the image capture device from data collected by a gyroscope and an accelerometer of the image capture device; a real-time position determination unit that determines, based on the movement state and the initial relative positional relationship, a real-time relative positional relationship between the image capture device and the vehicle accident scene after the image capture device moves; a first display unit that displays, according to the real-time relative positional relationship and not according to image data captured by the image capture device while moving, first guidance information in a shooting interface of the image capture device to guide a user to move the image capture device to a position matching a standard relative positional relationship; and a result determination unit that determines an identification result, the identification result being the output obtained by inputting the image data into an accident identification model, wherein the accident identification model is trained on image data of historical vehicle accident scenes and accident identification information of the historical vehicle accident scenes.

8. The device according to claim 7, wherein the result determination unit is specifically configured to: input the image data into the accident identification model and take the output of the accident identification model as the identification result; or send the image data to a server so that the server inputs the image data into the accident identification model, and take the output returned by the server as the identification result.

9. The device according to claim 7, further comprising: an orientation acquisition unit that acquires a shooting orientation of the image capture device relative to the vehicle accident scene; an orientation determination unit that determines whether the shooting orientation conforms to a standard shooting orientation; and a second display unit that, when the shooting orientation does not conform to the standard shooting orientation, displays second guidance information in the shooting interface of the image capture device to guide the user to move the image capture device to the standard shooting orientation.

10. The device according to claim 9, wherein the orientation acquisition unit is specifically configured to: acquire real-time image data obtained by shooting the vehicle accident scene with the image capture device; input the real-time image data into a shooting orientation determination model, the shooting orientation determination model being trained on correspondences between image data of sample accident vehicles captured at preset shooting orientations and the preset shooting orientations; and take the output of the shooting orientation determination model as the shooting orientation of the image capture device relative to the vehicle accident scene.

11. The device according to claim 9, wherein the second display unit is specifically configured to: sequentially display, in the shooting interface and according to a predefined shooting process, second guidance information that guides the user to move the image capture device to each standard shooting orientation, the shooting process comprising standard shooting orientations for each photographed object in the vehicle accident scene and an order in which the photographed objects are to be photographed.

12. The device according to claim 7, wherein parameters of the identification result comprise at least one of the following: a collision angle, a driving speed before the collision, a damage location, and a damage degree.

13. An electronic device, comprising: a processor; and a memory for storing instructions executable by the processor; wherein the processor implements the method according to any one of claims 1 to 6 by executing the executable instructions.
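Purely as an illustration of the sensor-based tracking recited in claim 1, and not the patented implementation, the relative position of the device can be propagated from gyroscope and accelerometer samples by dead reckoning. The integration scheme and variable names below are assumptions; a real system would also handle sensor bias and drift:

```python
# Minimal dead-reckoning sketch: propagate the device's pose from gyroscope
# and accelerometer samples so the relative position to the scene can be
# updated without relying on images taken while moving. Illustrative only.
import numpy as np

def update_pose(position, velocity, rotation, gyro, accel, dt):
    """One IMU integration step.

    position, velocity: 3-vectors in the scene frame.
    rotation: 3x3 matrix mapping the device frame to the scene frame.
    gyro: angular rate (rad/s) and accel: acceleration (m/s^2), both in
    the device frame; dt: sample interval in seconds.
    """
    # Integrate angular rate into an incremental rotation (small-angle
    # approximation via the skew-symmetric matrix of gyro * dt).
    wx, wy, wz = gyro * dt
    omega = np.array([[0.0, -wz,  wy],
                      [wz,  0.0, -wx],
                      [-wy,  wx, 0.0]])
    rotation = rotation @ (np.eye(3) + omega)

    # Rotate acceleration into the scene frame and remove gravity.
    a_scene = rotation @ accel - np.array([0.0, 0.0, 9.81])

    # Integrate acceleration twice to update velocity and position.
    velocity = velocity + a_scene * dt
    position = position + velocity * dt
    return position, velocity, rotation
```

In this picture, the initial relative positional relationship obtained from the first image anchors `position` and `rotation`, and each subsequent IMU sample refines the real-time relationship used to render the first guidance information.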
TW108133384A 2019-03-07 2019-09-17 Vehicle accident identification method and device, electronic equipment TWI770420B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910171587.5A CN110033386B (en) 2019-03-07 2019-03-07 Vehicle accident identification method and device and electronic equipment
CN201910171587.5 2019-03-07

Publications (2)

Publication Number Publication Date
TW202034270A TW202034270A (en) 2020-09-16
TWI770420B true TWI770420B (en) 2022-07-11

Family

ID=67235093

Family Applications (1)

Application Number Title Priority Date Filing Date
TW108133384A TWI770420B (en) 2019-03-07 2019-09-17 Vehicle accident identification method and device, electronic equipment

Country Status (3)

Country Link
CN (1) CN110033386B (en)
TW (1) TWI770420B (en)
WO (1) WO2020177480A1 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110033386B (en) * 2019-03-07 2020-10-02 阿里巴巴集团控股有限公司 Vehicle accident identification method and device and electronic equipment
CN111079506A (en) * 2019-10-11 2020-04-28 深圳壹账通智能科技有限公司 Augmented reality-based information acquisition method and device and computer equipment
CN110809088A (en) * 2019-10-25 2020-02-18 广东以诺通讯有限公司 Traffic accident photographing method and system based on mobile phone app
CN110650292B (en) * 2019-10-30 2021-03-02 支付宝(杭州)信息技术有限公司 Method and device for assisting user in shooting vehicle video
CN112434368A (en) * 2020-10-20 2021-03-02 联保(北京)科技有限公司 Image acquisition method, device and storage medium
CN112465018B (en) * 2020-11-26 2024-02-02 深源恒际科技有限公司 Intelligent screenshot method and system of vehicle video damage assessment system based on deep learning
CN112492105B (en) * 2020-11-26 2022-04-15 深源恒际科技有限公司 Video-based vehicle appearance part self-service damage assessment acquisition method and system
CN114764979A (en) * 2021-01-14 2022-07-19 大陆泰密克汽车***(上海)有限公司 Accident information warning system and method, electronic device and storage medium
CN113255842B (en) * 2021-07-05 2021-11-02 平安科技(深圳)有限公司 Vehicle replacement prediction method, device, equipment and storage medium
CN114637438B (en) * 2022-03-23 2024-05-07 支付宝(杭州)信息技术有限公司 AR-based vehicle accident handling method and device
CN114724373B (en) * 2022-04-15 2023-06-27 地平线征程(杭州)人工智能科技有限公司 Traffic field information acquisition method and device, electronic equipment and storage medium
CN114715146A (en) * 2022-05-09 2022-07-08 吉林大学 Method for predicting severity of potential collision accident

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8510196B1 (en) * 2012-08-16 2013-08-13 Allstate Insurance Company Feedback loop in mobile damage assessment and claims processing
CN103702029A (en) * 2013-12-20 2014-04-02 百度在线网络技术(北京)有限公司 Method and device for prompting focusing during shooting
CN105719188A (en) * 2016-01-22 2016-06-29 平安科技(深圳)有限公司 Method and server for achieving insurance claim anti-fraud based on consistency of multiple pictures
US10089396B2 (en) * 2014-07-30 2018-10-02 NthGen Software Inc. System and method of a dynamic interface for capturing vehicle data
TW201839704A (en) * 2017-04-11 2018-11-01 香港商阿里巴巴集團服務有限公司 Image-based vehicle damage determining method, apparatus, and electronic device
TW201839666A (en) * 2017-04-28 2018-11-01 香港商阿里巴巴集團服務有限公司 Vehicle loss assessment image obtaining method and apparatus, server and terminal device

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8311283B2 (en) * 2008-07-06 2012-11-13 Automotive Research&Testing Center Method for detecting lane departure and apparatus thereof
CN103646534B (en) * 2013-11-22 2015-12-02 江苏大学 A kind of road real-time traffic accident risk control method
CN106373395A (en) * 2016-09-20 2017-02-01 三星电子(中国)研发中心 Driving accident monitoring method and apparatus
CN108629963A (en) * 2017-03-24 2018-10-09 纵目科技(上海)股份有限公司 Traffic accident report method based on convolutional neural networks and system, car-mounted terminal
CN107368776B (en) * 2017-04-28 2020-07-03 阿里巴巴集团控股有限公司 Vehicle loss assessment image acquisition method and device, server and terminal equipment
CN109325488A (en) * 2018-08-31 2019-02-12 阿里巴巴集团控股有限公司 For assisting the method, device and equipment of car damage identification image taking
CN109359542A (en) * 2018-09-18 2019-02-19 平安科技(深圳)有限公司 The determination method and terminal device of vehicle damage rank neural network based
CN109344819A (en) * 2018-12-13 2019-02-15 深源恒际科技有限公司 Vehicle damage recognition methods based on deep learning
CN110033386B (en) * 2019-03-07 2020-10-02 阿里巴巴集团控股有限公司 Vehicle accident identification method and device and electronic equipment

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8510196B1 (en) * 2012-08-16 2013-08-13 Allstate Insurance Company Feedback loop in mobile damage assessment and claims processing
CN103702029A (en) * 2013-12-20 2014-04-02 百度在线网络技术(北京)有限公司 Method and device for prompting focusing during shooting
US10089396B2 (en) * 2014-07-30 2018-10-02 NthGen Software Inc. System and method of a dynamic interface for capturing vehicle data
CN105719188A (en) * 2016-01-22 2016-06-29 平安科技(深圳)有限公司 Method and server for achieving insurance claim anti-fraud based on consistency of multiple pictures
TW201839704A (en) * 2017-04-11 2018-11-01 香港商阿里巴巴集團服務有限公司 Image-based vehicle damage determining method, apparatus, and electronic device
TW201839666A (en) * 2017-04-28 2018-11-01 香港商阿里巴巴集團服務有限公司 Vehicle loss assessment image obtaining method and apparatus, server and terminal device

Also Published As

Publication number Publication date
TW202034270A (en) 2020-09-16
WO2020177480A1 (en) 2020-09-10
CN110033386B (en) 2020-10-02
CN110033386A (en) 2019-07-19

Similar Documents

Publication Publication Date Title
TWI770420B (en) Vehicle accident identification method and device, electronic equipment
US10817956B2 (en) Image-based vehicle damage determining method and apparatus, and electronic device
EP3520045B1 (en) Image-based vehicle loss assessment method, apparatus, and system, and electronic device
WO2019223382A1 (en) Method for estimating monocular depth, apparatus and device therefor, and storage medium
US11328401B2 (en) Stationary object detecting method, apparatus and electronic device
CN108280477B (en) Method and apparatus for clustering images
WO2019020103A1 (en) Target recognition method and apparatus, storage medium and electronic device
WO2018191421A1 (en) Image-based vehicle damage determining method, apparatus, and electronic device
TWI712980B (en) Claim information extraction method and device, and electronic equipment
CN110660102B (en) Speaker recognition method, device and system based on artificial intelligence
EP4178194A1 (en) Video generation method and apparatus, and readable medium and electronic device
CN111950370B (en) Dynamic environment offline visual milemeter expansion method
CN114663871A (en) Image recognition method, training method, device, system and storage medium
US10198842B2 (en) Method of generating a synthetic image
CN109829401A (en) Traffic sign recognition method and device based on double capture apparatus
CN111310595B (en) Method and device for generating information
CN110348369B (en) Video scene classification method and device, mobile terminal and storage medium
CN113721240B (en) Target association method, device, electronic equipment and storage medium
CN115346270A (en) Traffic police gesture recognition method and device, electronic equipment and storage medium
US20210095980A1 (en) Enhanced localization
US11521331B2 (en) Method and apparatus for generating position information, device, and medium
CN113989274A (en) Real-time human face quality evaluation method and device and storage medium
CN115471549A (en) Method, device and equipment for predicting position frame of target in image and storage medium
CN115171159A (en) Camera-based predictive tracking method and device, electronic equipment and medium
CN116797962A (en) Target detection method and system based on similar feature cross-domain fusion