TW202018664A - Image labeling method, device, and system - Google Patents

Image labeling method, device, and system

Info

Publication number
TW202018664A
TW202018664A (application TW108110062A)
Authority
TW
Taiwan
Prior art keywords
damage
image
car
vehicle
preset
Prior art date
Application number
TW108110062A
Other languages
Chinese (zh)
Inventor
周凡
Original Assignee
香港商阿里巴巴集團服務有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 香港商阿里巴巴集團服務有限公司 filed Critical 香港商阿里巴巴集團服務有限公司
Publication of TW202018664A publication Critical patent/TW202018664A/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/64 Analysis of geometric attributes of convexity or concavity
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Optics & Photonics (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

An image labeling method, device, and system. The method comprises: obtaining a vehicle damage image of a preset damage area captured by a camera device; obtaining physical attribute information produced by scanning the preset damage area with a physical detection method; and labeling damage on the vehicle damage image on the basis of the physical attribute information to generate vehicle damage sample data. By combining the visual image obtained by the camera device with the physical attribute information obtained by physical detection, the label data required for training a model is generated automatically: there is no need to label damage on vehicle damage images manually, and damage can be labeled at the pixel level. Labeling efficiency and accuracy are thereby improved, and a large amount of accurate labeled sample data can be supplied to a deep-learning-based model training process, so as to train a vehicle damage recognition model with higher recognition accuracy.

Description

Image annotation method, device, and system

One or more embodiments of this specification relate to the field of intelligent recognition technology, and in particular to an image annotation method, device, and system.

At present, with rapid socioeconomic growth and the increasing number of high-income people in China, vehicle ownership keeps rising because vehicles make travel much more convenient, and traffic accidents occur more and more frequently. To reduce the losses caused by traffic accidents, people usually pay for the necessary vehicle insurance on a regular basis, so that after an accident the owner can file a claim. The insurance company then needs to assess the degree of damage to the vehicle in order to determine the list of items that need repair and the amount of compensation. Specifically, professional loss assessors must comprehensively analyze the vehicle damage images collected on site and produce a scientific, systematic repair estimate for the collision damage.

At present, to recognize damage in vehicle damage images quickly, deep learning is used to identify the degree of vehicle damage: a pre-trained damage recognition model intelligently recognizes the damage images collected on site and automatically outputs the resulting damage analysis. Training such a model requires a large amount of labeled vehicle damage sample data, typically on the order of 100,000 to 10,000,000 labeled items per sub-problem; that is, damage of various types and materials must be labeled in advance, indicating the degree of damage corresponding to each sub-region of each image. In the prior art, the large number of collected vehicle damage images is labeled manually, which suffers from low labeling efficiency, high labor cost, strong influence of human factors, and low accuracy, making it difficult to produce the large amount of labeled data required for model training in a short time.

Therefore, there is a need for a vehicle damage image annotation method and device with high efficiency, high accuracy, and low labor cost.

The purpose of one or more embodiments of this specification is to provide an image annotation method, device, and system that combine the visual image obtained by a camera device with the physical attribute information obtained by physical detection to automatically generate the label data required for training a model. No manual damage labeling of vehicle damage images is needed, and damage can be labeled at the pixel level, improving labeling efficiency and accuracy; the deep-learning-based model training process can thus be supplied with a large amount of accurate labeled sample data, so that a vehicle damage recognition model with higher recognition accuracy can be trained.

To solve the above technical problems, one or more embodiments of this specification are implemented as follows.

One or more embodiments of this specification provide an image annotation method, comprising: obtaining a vehicle damage image of a preset damage area on a target vehicle captured by a camera device; obtaining physical attribute information produced by scanning the preset damage area with a physical detection method; and labeling damage on the vehicle damage image according to the physical attribute information to generate vehicle damage sample data for training a vehicle damage recognition model.

One or more embodiments of this specification provide an image annotation device, comprising: a first acquisition module, configured to obtain a vehicle damage image of a preset damage area on a target vehicle captured by a camera device; a second acquisition module, configured to obtain physical attribute information produced by scanning the preset damage area with a physical detection method; and an image annotation module, configured to label damage on the vehicle damage image according to the physical attribute information and generate vehicle damage sample data for training a vehicle damage recognition model.

One or more embodiments of this specification provide an image annotation system, comprising: a camera device, a physical detection device, and the above image annotation device, wherein both the camera device and the physical detection device are connected to the image annotation device. The camera device is configured to photograph a preset damage area on a target vehicle to obtain a vehicle damage image and transmit it to the image annotation device. The physical detection device is configured to scan the preset damage area by a physical detection method to obtain physical attribute information and transmit it to the image annotation device. The image annotation device is configured to receive the vehicle damage image and the physical attribute information, and to generate, according to them, vehicle damage sample data for training a vehicle damage recognition model.

One or more embodiments of this specification provide an image annotation apparatus, comprising: a processor; and a memory arranged to store computer-executable instructions that, when executed, cause the processor to: obtain a vehicle damage image of a preset damage area on a target vehicle captured by a camera device; obtain physical attribute information produced by scanning the preset damage area with a physical detection method; and label damage on the vehicle damage image according to the physical attribute information to generate vehicle damage sample data for training a vehicle damage recognition model.

One or more embodiments of this specification provide a storage medium for storing computer-executable instructions that, when executed, implement the following flow: obtaining a vehicle damage image of a preset damage area on a target vehicle captured by a camera device; obtaining physical attribute information produced by scanning the preset damage area with a physical detection method; and labeling damage on the vehicle damage image according to the physical attribute information to generate vehicle damage sample data for training a vehicle damage recognition model.

With the image annotation method, device, and system in one or more embodiments of this specification, a vehicle damage image of a preset damage area captured by a camera device is obtained; physical attribute information produced by scanning the preset damage area with a physical detection method is obtained; and the vehicle damage image is labeled with damage according to the physical attribute information to generate vehicle damage sample data. By combining the visual image obtained by the camera device with the physical attribute information obtained by physical detection, the label data required for model training is generated automatically, without manual labeling, and pixel-level damage labeling of the vehicle damage images is achieved, which improves labeling efficiency and accuracy and provides the deep-learning-based training process with a large amount of accurate labeled sample data, so that a vehicle damage recognition model with higher recognition accuracy can be trained.

To enable those skilled in the art to better understand the technical solutions in this specification, the technical solutions in one or more embodiments of this specification are described below clearly and completely in conjunction with the drawings of those embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of this specification. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this specification without creative effort shall fall within the scope of protection of this specification.

One or more embodiments of this specification provide an image annotation method, device, and system that combine the visual image obtained by a camera device with the physical attribute information obtained by physical detection to automatically generate the label data required for training a model; no manual damage labeling is needed, pixel-level labeling of vehicle damage images is achieved, labeling efficiency and accuracy are improved, and a large amount of accurate labeled sample data can be supplied to the deep-learning-based training process, so that a vehicle damage recognition model with higher recognition accuracy can be trained.

FIG. 1 is a schematic diagram of an application scenario of the image annotation method provided by one or more embodiments of this specification. As shown in FIG. 1, the system includes a camera device, a physical detection device, and an image annotation device, where both the camera device and the physical detection device are connected to the image annotation device. The camera device may be a device with a photographing function, such as a digital camera; the physical detection device may be a laser lidar device, an infrared thermal imaging device, or the like; and the image annotation device may be a back-end server for labeling vehicle damage images. The specific image annotation process is as follows:

(1) The target vehicle, which has a preset damage area, is driven to a designated position on the workbench in the work site. Specifically, the operator makes an initial setting of the data collection range according to the damaged part of the vehicle, that is, sets the initial shooting range of the camera device and the initial detection range of the physical detection device. For example, if the front bumper is damaged, i.e. the preset damage area is the front half of the vehicle, the shooting range and the physical detection range are set to the front half of the vehicle.

(2) The camera device photographs the preset damage area on the target vehicle to obtain a vehicle damage image and transmits it to the image annotation device. The camera device may be mounted on an adjustable pan-tilt head, by which its shooting conditions are adjusted.

(3) The physical detection device scans the preset damage area by a physical detection method to obtain physical attribute information and transmits it to the image annotation device. The physical detection device is also mounted on the adjustable pan-tilt head, and its position relative to the camera device remains unchanged, which ensures that the vehicle damage image and the corresponding physical attribute information are acquired simultaneously under the same shooting conditions.

(4) The image annotation device labels the vehicle damage image with damage according to the acquired physical attribute information and generates vehicle damage sample data for training a vehicle damage recognition model. In this way, no manual damage labeling is needed, pixel-level labeling of the vehicle damage images is achieved, and labeling efficiency and accuracy are improved, so that the deep-learning-based training process can be supplied with a large amount of accurate labeled sample data to train a vehicle damage recognition model with higher recognition accuracy.

FIG. 2 is a first schematic flowchart of the image annotation method provided by one or more embodiments of this specification. The method in FIG. 2 can be executed by the image annotation device in FIG. 1. As shown in FIG. 2, the method includes at least the following steps:

S201: obtain a vehicle damage image of the preset damage area on the target vehicle captured by the camera device. Specifically, the camera device photographs the preset damage area on the target vehicle to obtain a visual image of that area and transmits it to the image annotation device.

S202: obtain physical attribute information produced by scanning the preset damage area with a physical detection method. Specifically, the physical detection device scans the preset damage area on the target vehicle, obtains physical attribute information for that area, and transmits it to the image annotation device. The physical detection device may be a laser lidar device, in which case the physical detection method is laser lidar detection; it may also be an infrared thermal imaging device, in which case the physical detection method is infrared detection; it may also be a combination of a laser lidar device and an infrared thermal imaging device, in which case the physical detection method is a combination of laser lidar detection and infrared detection. In addition, the physical detection device may be a device that collects physical attribute information by another physical detection method.

S203: label the vehicle damage image with damage according to the acquired physical attribute information and generate vehicle damage sample data for training a vehicle damage recognition model, where the sample data may include the vehicle damage image and label data for it, and the label data may include the damage condition of each pixel in the image. Specifically, for each vehicle damage image, the physical attribute information corresponding to the image is acquired; the damage condition of each pixel in the image is determined from that information; and the determined per-pixel damage conditions are used as the label data for the image. The damage condition of a pixel may be data indicating whether the pixel is damaged, data indicating the size of the damage at the pixel, or data indicating the degree of damage at the pixel.
In one or more embodiments of this specification, by combining the visual image obtained by the camera device with the physical attribute information obtained by physical detection, the label data required for model training is generated automatically, without manual labeling; pixel-level damage labeling of the vehicle damage images is achieved, labeling efficiency and accuracy are improved, and a large amount of accurate labeled sample data can be supplied to the deep-learning-based training process to train a vehicle damage recognition model with higher recognition accuracy.

The physical attribute information obtained by laser lidar detection is the three-dimensional depth information of each pixel, from which the degree of deformation of the damaged surface can be assessed; that is, the positions of dents and breakage in the preset damage area can be identified with high accuracy. The physical attribute information obtained by infrared detection is the surface thermal imaging information of each pixel; because the infrared thermal signatures of different materials differ, the detected surface thermal imaging information can be used to determine the surface material distribution of the preset damage area, so the degree of scratching of the damaged surface can be assessed from it; that is, the extent of scratch damage in the preset damage area can be identified with high accuracy.

Further, considering that different physical detection technologies emphasize different aspects of damage assessment, combining them improves the accuracy of assessing the damaged surface. On this basis, as shown in FIG. 3, taking the combination of laser lidar detection and infrared detection as an example, the above S202 of obtaining the physical attribute information produced by scanning the preset damage area specifically includes:

S2021: obtain three-dimensional depth information produced by scanning the preset damage area on the target vehicle with a laser lidar device. Specifically, the laser lidar device scans the preset damage area, obtains three-dimensional depth information for it, and transmits the information to the image annotation device. As shown in FIG. 4a, the laser lidar device includes a first processing unit, a laser emitting unit, and a laser receiving unit. The laser emitting unit emits a laser beam (the detection signal) toward the preset damage area; the beam is reflected on reaching the area; the laser receiving unit receives the reflected beam (the target echo) returned from the area and transmits it to the first processing unit; and the first processing unit compares the received target echo with the emitted detection signal and generates the three-dimensional depth information for the preset damage area. From this information, a three-dimensional surface map characterizing the depth of each pixel in the vehicle damage image can be drawn, so that the relative position of each point on the damaged surface of the preset damage area is probed with the emitted laser beam.

S2022: obtain surface thermal imaging information produced by scanning the preset damage area on the target vehicle with an infrared thermal imaging device. Specifically, the infrared thermal imaging device scans the preset damage area, obtains surface thermal imaging information for it, and transmits the information to the image annotation device. As shown in FIG. 4b, the infrared thermal imaging device includes a second processing unit, an infrared emitting unit, and an infrared receiving unit. The infrared emitting unit emits an infrared beam (the detection signal) toward the preset damage area; the beam is reflected on reaching the area; the infrared receiving unit receives the reflected beam (the target echo) returned from the area and transmits it to the second processing unit; and the second processing unit compares the received target echo with the emitted detection signal and generates the surface thermal imaging information for the preset damage area. From this information, a surface thermal image characterizing the radiated energy of each pixel in the vehicle damage image can be drawn, so that the material distribution of the damaged surface of the preset damage area is probed with the emitted infrared beam.

Correspondingly, when laser lidar detection and infrared detection are used together for damage assessment of the preset damage area, the above S203 of labeling the vehicle damage image according to the acquired physical attribute information specifically includes:

S2031: label the vehicle damage image with damage according to the acquired three-dimensional depth information and surface thermal imaging information, and generate vehicle damage sample data for training a vehicle damage recognition model. Specifically, the depth of each pixel in the vehicle damage image is determined from the acquired three-dimensional depth information, and for each pixel its concavity or convexity (i.e. deformation) is determined from its depth; the radiated energy corresponding to each pixel is determined from the acquired surface thermal imaging information, and for each pixel its scratching (i.e. paint loss) is determined from its radiated energy. The determined per-pixel deformation, scratching, and the vehicle damage image are determined as the vehicle damage sample data for training the vehicle damage recognition model.

In one or more embodiments of this specification, combining the two physical detection dimensions of laser lidar detection and infrared detection enables the deformation and the scratching of the preset damage area to be comprehensively identified at the same time, improving the precision of damage labeling of the vehicle damage image and, in turn, the recognition accuracy of the recognition model trained on the labeled images.
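The lidar unit above compares the emitted detection signal with the returned target echo; for a time-of-flight design, a point's range follows directly from the round-trip delay. A one-line sketch of that relation (a simplification; the patent does not fix the ranging principle):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def echo_distance_m(round_trip_s: float) -> float:
    """Range to the reflecting surface point: half the round-trip optical path."""
    return C * round_trip_s / 2.0
```

Repeating this over the scan grid yields the relative position of each point on the damaged surface, i.e. the three-dimensional surface map.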
Further, considering that shooting conditions may affect the quality of the obtained vehicle damage images and physical attribute information, and may therefore reduce the accuracy of damage labeling of the preset damage area, the shooting conditions are adjusted according to preset adjustment rules while the camera device collects images and the physical detection device collects physical attribute information, so that multiple vehicle damage images and multiple pieces of physical attribute information of the preset damage area are acquired under different shooting conditions, improving the precision of damage assessment. On this basis, as shown in FIG. 5, the above S201 of obtaining the vehicle damage image of the preset damage area specifically includes:

S2011: obtain a set of vehicle damage images of the preset damage area on the target vehicle, where the set includes multiple vehicle damage images captured by the camera device under different shooting conditions.

The shooting conditions include at least one of: the shooting orientation of the camera device, the relative position of the camera device and the target vehicle, the illumination parameters of the shooting environment, and other on-site environmental factors that affect the visual characteristics of the damage area. The shooting orientation may include the shooting angle and the shooting direction, and the illumination parameters may include the number of light sources and the lighting situation. Thus, different shooting conditions are multiple conditions that differ in at least one of the shooting orientation, the relative position, and the illumination parameters, with each condition corresponding to one vehicle damage image of the preset damage area. In other words, all acquired images are actually photographed rather than derived from an original image by different image processing; the feature distribution of actually photographed images better matches real scenes, which gives a better training effect for deep learning.

The preset adjustment rules may be determined from step sizes entered by the operator and from setting information by which the automatic shooting equipment changes illumination, distance, and angle, so that a group of several hundred to several thousand vehicle damage pictures is photographed and labeled automatically for the same damage area. For example, for the same damage area, the shooting conditions may be adjusted by moving 30 cm left or right each time, changing the angle by 10 degrees, and raising the illumination from 500 to 3000 lumens in increments of 100 lumens, collecting the vehicle damage image and physical attribute information under each condition. Further, the preset adjustment rules may be determined based on the recognition accuracy of the vehicle damage recognition model: when that accuracy is unsatisfactory, the rules can be modified to optimize the shooting conditions and hence the resulting sample data, improving the model's recognition accuracy.

Specifically, since the camera device is mounted on the adjustable pan-tilt head in FIG. 1, a control device adjusts the head to change the shooting orientation of the camera device; and if a wheeled or tracked traveling mechanism is installed under the head, the control device can move the head back and forth and left and right within the work site according to control instructions, thereby adjusting the relative position of the camera device and the target vehicle. The control device may be a stand-alone control apparatus or a control module provided in the image annotation device.

In addition, when the shooting conditions include the illumination parameters of the shooting environment, those parameters need to be adjusted while the camera device collects images of the preset damage area and the physical detection device collects its physical attribute information. FIG. 6a shows a second application scenario of the image annotation method: an illumination adjusting apparatus is installed at a designated position in the work site, and the control device controls the light intensity it emits according to preset shooting parameters, adjusting the illumination parameters of the environment so that the camera device captures the corresponding vehicle damage images under different illumination.

Correspondingly, for each vehicle damage image, the physical attribute information corresponding to it must also be collected. Still taking the case where the physical detection methods include laser lidar detection and infrared detection as an example, the above S2021 of obtaining the three-dimensional depth information specifically includes:

S20211: obtain a set of three-dimensional depth information for the preset damage area on the target vehicle, where the set includes multiple three-dimensional surface maps scanned by laser lidar under different shooting conditions. Specifically, for each shooting condition, not only does the camera device collect a vehicle damage image of the preset damage area, but the laser lidar device also collects the area's three-dimensional depth information (i.e. a three-dimensional surface map); thus each three-dimensional surface map in the set is obtained under one particular shooting condition.

Correspondingly, the above S2022 of obtaining the surface thermal imaging information specifically includes:

S20221: obtain a set of surface thermal imaging information for the preset damage area on the target vehicle, where the set includes multiple surface thermal images scanned by infrared detection under different shooting conditions. Specifically, for each shooting condition, in addition to the vehicle damage image and the three-dimensional surface map, the infrared thermal imaging device also collects the area's surface thermal imaging information (i.e. a surface thermal image); thus each surface thermal image in the set is obtained under one particular shooting condition.

Correspondingly, the above S203 of labeling the vehicle damage image with damage according to the acquired three-dimensional depth information and surface thermal imaging information and generating vehicle damage sample data for training a vehicle damage recognition model specifically includes:
S20311: according to the acquired sets of three-dimensional depth information and surface thermal imaging information, label the vehicle damage images under each shooting condition with damage, and generate vehicle damage sample data for training a vehicle damage recognition model. That is, for a given preset damage area and a given shooting condition, the vehicle damage image, three-dimensional surface map, and surface thermal image corresponding to that condition are acquired, and the correspondence among the shooting condition, the image, and the two maps is established; then, from the images and maps acquired under all shooting conditions for the preset damage area, the vehicle damage sample data for the area is determined.

To ensure that the physical attribute information acquired by the physical detection device matches the pixels of the vehicle damage image acquired by the camera device one to one, the scanning range of the physical detection device is determined from the relative position of the camera device and the physical detection device and from the shooting range of each pixel in the camera's viewfinder. Further, to ensure a one-to-one correspondence between images and physical attribute information under the same shooting condition, the relative position of the physical detection device and the camera device is kept unchanged, and the control device adjusts both synchronously via the adjustable pan-tilt head, so that under the same shooting condition the vehicle damage image and the physical attribute information of the preset damage area are collected simultaneously. In addition, since the coordinates of the shooting point relative to the center of the target vehicle and to the damaged part are known, spatial geometry calculation determines whether each pixel in the captured vehicle damage image lies within some damage range.

In one or more embodiments of this specification, while the camera device collects images and the physical detection device collects physical attribute information, the shooting conditions are adjusted according to the preset adjustment rules so that multiple images and multiple pieces of physical attribute information are acquired under different conditions; for each shooting condition, the image captured under it is labeled with damage according to the physical attribute information collected under it, producing vehicle damage sample data. Thus, for one preset damage area, sample data generated under multiple shooting conditions is obtained, which improves the accuracy of damage labeling for the area and further improves the recognition accuracy of the vehicle damage recognition model trained on the data.

To guarantee the accuracy of the target vehicle's movement and improve the positioning accuracy of the relative position of the camera device and the vehicle, that relative position is obtained by controlling the movement of the target vehicle with a positioning device capable of centimeter-level precise positioning. FIG. 6b shows a third application scenario of the image annotation method: a wireless positioning device is installed at a designated position in the work site; it acquires first position information before the vehicle moves and second position information after, determines the actual movement distance from the two, compares it with the theoretical movement distance, and determines whether the movement error meets a preset condition; if not, it sends corresponding prompt information to the control device so that the control device positions the target vehicle accurately. The wireless positioning device may be based on radio signals, on Bluetooth signals, or on laser lidar.

For the damage labeling process of the vehicle damage image, as shown in FIG. 7, the above S203 of labeling the vehicle damage image according to the acquired physical attribute information specifically includes:

S2032: determine the damage condition of each pixel in the vehicle damage image of the preset damage area according to the acquired physical attribute information. Specifically, when the physical attribute information is three-dimensional depth information, the depth of each pixel in the image is determined from the acquired three-dimensional surface map, and the deformation (i.e. concavity or convexity) of each pixel is determined from its depth; when the physical attribute information is surface thermal imaging information, the radiated energy of each pixel is determined from the acquired surface thermal image, and the scratching (i.e. paint loss) of each pixel is determined from its radiated energy.

S2033: determine the per-pixel damage conditions and the vehicle damage image as the vehicle damage sample data for training the vehicle damage recognition model. Specifically, the per-pixel damage conditions are used as the label data for the image, the correspondence between the image and the label data is established, and the correspondence, the image, and the label data are input to the machine learning model, based on a supervised learning mode, that is to be trained.

For the case where vehicle damage images and their corresponding physical attribute information are acquired under different shooting conditions, i.e. the vehicle damage images of the preset damage area include multiple images captured under different shooting conditions and the physical attribute information includes multiple pieces scanned under different shooting conditions, the above S2032 specifically includes: for each vehicle damage image, determining the damage condition of each pixel in the image according to the physical attribute information obtained under the shooting condition corresponding to that image.
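The example adjustment rule given earlier (shift 30 cm per step, change the angle by 10 degrees, raise the illumination from 500 to 3000 lumens in 100-lumen increments) enumerates a grid of shooting conditions, with one image plus one set of physical scans collected per grid point. A sketch, with the offset and angle ranges chosen arbitrarily for illustration:

```python
from itertools import product

def shooting_conditions(offsets_cm=(-60, -30, 0, 30, 60),
                        angles_deg=(-20, -10, 0, 10, 20),
                        lumens=range(500, 3001, 100)):
    """Enumerate (lateral offset, angle, illumination) tuples per the example
    rule: 30 cm steps, 10-degree angle changes, 500-3000 lumens in 100-lumen
    increments. The offset/angle spans are illustrative assumptions."""
    return list(product(offsets_cm, angles_deg, lumens))
```

Each returned tuple would be assigned a shooting-condition identifier (such as 0001 in Table 1) keying the image, the three-dimensional surface map, and the surface thermal image collected under it.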
Specifically, to improve the precision of damage assessment of the preset damage area, the shooting conditions are adjusted according to the preset adjustment rules while the camera device collects images and the physical detection device collects physical attribute information, so that multiple images and multiple pieces of physical attribute information of the area are acquired under different conditions. Therefore, when labeling, for each vehicle damage image the physical attribute information obtained under the shooting condition corresponding to that image is determined, and the per-pixel damage condition of the image is then determined from it, specifically:

(1) when the physical attribute information is three-dimensional depth information, the depth of each pixel is determined from the three-dimensional surface map obtained under the image's shooting condition, and each pixel's deformation (concavity or convexity) is determined from its depth;

(2) when the physical attribute information is surface thermal imaging information, the radiated energy of each pixel is determined from the surface thermal image obtained under the image's shooting condition, and each pixel's scratching (paint loss) is determined from its radiated energy.

After the per-pixel damage conditions of each image are determined, each image under a given shooting condition together with the damage conditions of its pixels serves as one piece of vehicle damage sample data; thus, for the same preset damage area, multiple pieces of sample data under different shooting conditions are obtained.

Each piece of sample data includes multiple rows of damage label data sharing the same shooting condition and the same image identifier; each row includes the damage statistics of one pixel, e.g. the degree of concavity or convexity, the degree of scratching, and so on. Each piece of sample data may also include overall damage statistics for the image, a repair plan for the preset damage area, the damage type of the area, etc.; the repair plan for the preset damage area may be determined from mark-up information supplied by relevant personnel.

The basic data collected for a certain preset damage area of the target vehicle, i.e. the correspondence among shooting conditions, vehicle damage images, and physical attribute information, is shown in Table 1 below:

Table 1

Figure 108110062-A0304-0001
Specifically, the shooting condition identified as 0001 and the shooting condition identified as 0002 differ in at least one of the shooting orientation of the camera device, the relative position of the camera device and the target vehicle, and the illumination parameters of the shooting environment; the vehicle damage image identified as AAAA, the three-dimensional surface map identified as 1aaaa, and the surface thermal image identified as 2aaaa were all collected under the shooting condition identified as 0001.

Combining the correspondence between vehicle damage images and physical attribute information in Table 1 above, the vehicle damage image of a certain preset damage area of the target vehicle is labeled with damage according to the physical attribute information; the label data produced for that area is shown in Table 2 below:

Table 2
Figure 108110062-A0304-0002
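The label data described around Table 2 is row-oriented: every row carries one pixel's damage statistics, keyed by the shooting-condition and image identifiers. A sketch of such a record (the field names are hypothetical; the publication only describes the content in prose):

```python
from dataclasses import dataclass

@dataclass
class PixelLabelRow:
    """One row of label data for one pixel of one vehicle damage image."""
    condition_id: str      # shooting-condition identifier, e.g. "0001"
    image_id: str          # vehicle damage image identifier, e.g. "AAAA"
    x: int                 # pixel column
    y: int                 # pixel row
    dent_depth_mm: float   # degree of concavity/convexity from the 3D surface map
    scratch_level: float   # degree of scratching from the surface thermal image
```

One piece of sample data is then the set of rows sharing a `condition_id` and `image_id`, optionally accompanied by overall damage statistics, a repair plan, and a damage type for the area.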
Further, after the actually photographed vehicle damage images are automatically labeled with damage and the vehicle damage sample data is generated, the sample data is input to a preset machine learning model, which is trained to obtain the vehicle damage recognition model; the machine learning model may be one based on a supervised learning mode. Specifically, as shown in FIG. 8, after S203 of labeling the vehicle damage image according to the acquired physical attribute information and generating the vehicle damage sample data, the method further includes:

S204: input the generated vehicle damage sample data to a preset machine learning model based on a supervised learning mode;

S205: train the machine learning model with a machine learning method based on the vehicle damage sample data to obtain the vehicle damage recognition model.

Specifically, the model parameters of the supervised machine learning model are updated based on the sample data to obtain the vehicle damage recognition model with updated parameters; then, after a vehicle damage image to be recognized is acquired, the model recognizes its damage condition, and the vehicle's loss is assessed automatically according to the determined damage condition.

With the image annotation method in one or more embodiments of this specification, a vehicle damage image of a preset damage area captured by a camera device is obtained; physical attribute information produced by scanning the preset damage area with a physical detection method is obtained; and the image is labeled with damage according to that information to generate vehicle damage sample data. By combining the visual image with the physical attribute information, the label data required for model training is generated automatically, without manual labeling; pixel-level labeling is achieved, labeling efficiency and accuracy are improved, and a large amount of accurate labeled sample data can be supplied to the deep-learning-based training process to train a vehicle damage recognition model with higher recognition accuracy.

Corresponding to the image annotation method described above in FIG. 2 to FIG. 8 and based on the same technical concept, one or more embodiments of this specification further provide an image annotation device. FIG. 9a is a schematic diagram of the first module composition of the device, which is used to execute the image annotation method described in FIG. 2 to FIG. 8. As shown in FIG. 9a, the device includes:

a first acquisition module 901, configured to obtain a vehicle damage image of a preset damage area on a target vehicle captured by a camera device;

a second acquisition module 902, configured to obtain physical attribute information produced by scanning the preset damage area with a physical detection method; and

an image annotation module 903, configured to label the vehicle damage image with damage according to the physical attribute information and generate vehicle damage sample data for training a vehicle damage recognition model.

In one or more embodiments of this specification, by combining the visual image obtained by the camera device with the physical attribute information obtained by physical detection, the label data required for model training is generated automatically, without manual labeling; pixel-level labeling is achieved, labeling efficiency and accuracy are improved, and a large amount of accurate labeled sample data can be supplied to the deep-learning-based training process to train a vehicle damage recognition model with higher recognition accuracy.

Optionally, the second acquisition module 902 is specifically configured to: obtain three-dimensional depth information produced by scanning the preset damage area with a laser lidar device; and/or obtain surface thermal imaging information produced by scanning the preset damage area with an infrared thermal imaging device.

Optionally, the first acquisition module 901 is specifically configured to obtain a set of vehicle damage images of the preset damage area on the target vehicle, the set including multiple images captured by the camera device under different shooting conditions; correspondingly, the second acquisition module 902 is specifically configured to obtain a set of physical attribute information for the preset damage area, the set including multiple pieces of physical attribute information scanned by physical detection under different shooting conditions.

Optionally, the image annotation module 903 is specifically configured to: determine the damage condition of each pixel in the vehicle damage image according to the physical attribute information; and determine the per-pixel damage conditions and the vehicle damage image as the vehicle damage sample data for training a vehicle damage recognition model.

Optionally, the vehicle damage images of the preset damage area include multiple images captured under different shooting conditions, and the image annotation module 903 is further specifically configured to: for each vehicle damage image, determine the damage condition of each pixel in the image according to the physical attribute information obtained under the shooting condition corresponding to that image.

Optionally, as shown in FIG. 9b, the device further includes a model training module 904, configured to: after the vehicle damage sample data is generated, input it to a machine learning model based on a supervised learning mode; and train the machine learning model with a machine learning method based on the sample data to obtain the vehicle damage recognition model.

Optionally, the shooting conditions include at least one of: the shooting orientation of the camera device, the relative position of the camera device and the target vehicle, the illumination parameters of the shooting environment, and other on-site environmental factors that affect the visual characteristics of the damage area.

Optionally, the relative position of the camera device and the target vehicle is obtained by controlling the movement of the target vehicle with a positioning device capable of centimeter-level precise positioning.
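Steps S204 and S205 feed the per-pixel sample data into a supervised model whose parameters are updated during training. As a stand-in for whatever model is actually used (the publication does not fix an architecture), a small logistic-regression sketch over two illustrative per-pixel features, depth deviation and radiated-energy deviation:

```python
import numpy as np

def train_pixel_classifier(features: np.ndarray, labels: np.ndarray,
                           lr: float = 0.5, epochs: int = 2000) -> np.ndarray:
    """Gradient-descent logistic regression: per-pixel features -> damaged/intact.
    A minimal stand-in for the supervised model trained in S204/S205."""
    X = np.hstack([features, np.ones((len(features), 1))])  # append bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted damage probability
        w -= lr * X.T @ (p - labels) / len(labels)  # cross-entropy gradient step
    return w

def predict(w: np.ndarray, features: np.ndarray) -> np.ndarray:
    X = np.hstack([features, np.ones((len(features), 1))])
    return (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(int)
```

In the patent's setting, the features would come from the three-dimensional surface map and the surface thermal image, and the labels from the automatically generated per-pixel annotations.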
The image annotation device in one or more embodiments of this specification obtains a vehicle damage image of a preset damage area captured by a camera device, obtains physical attribute information produced by scanning the preset damage area with a physical detection method, and labels the image with damage according to that information to generate vehicle damage sample data. By combining the visual image with the physical attribute information, the label data required for model training is generated automatically, without manual labeling; pixel-level labeling is achieved, labeling efficiency and accuracy are improved, and a large amount of accurate labeled sample data can be supplied to the deep-learning-based training process to train a vehicle damage recognition model with higher recognition accuracy.

It should be noted that the embodiments concerning the image annotation device and the embodiments concerning the image annotation method in this specification are based on the same inventive concept; for the specific implementation of these embodiments, reference may be made to the implementation of the corresponding image annotation method described above, and repeated parts are not described again.

Corresponding to the image annotation method described in FIG. 2 to FIG. 8 and based on the same technical concept, one or more embodiments of this specification further provide an image annotation system. FIG. 10 is a schematic structural diagram of the system. As shown in FIG. 10, the system includes: a camera device 10, a physical detection device 20, and the image annotation device 30 shown in FIG. 9a and FIG. 9b, where both the camera device 10 and the physical detection device 20 are connected to the image annotation device 30.

The camera device 10 is configured to photograph the preset damage area on the target vehicle to obtain a vehicle damage image and transmit it to the image annotation device 30. The physical detection device 20 is configured to scan the preset damage area by a physical detection method to obtain physical attribute information and transmit it to the image annotation device 30. The image annotation device 30 is configured to receive the vehicle damage image and the physical attribute information, and to generate, according to them, vehicle damage sample data for training a vehicle damage recognition model.

For the case where the image annotation device and the model training device are provided on the same server, after the vehicle damage sample data is generated, the image annotation device 30 is further configured to: input the generated sample data to a machine learning model based on a supervised learning mode; and train the model with a machine learning method based on the sample data to obtain the vehicle damage recognition model. Specifically, the model parameters of the supervised machine learning model are updated based on the sample data to obtain the recognition model with updated parameters; then, after a vehicle damage image to be recognized is acquired, the model recognizes its damage condition, and the vehicle's loss is assessed automatically according to the determined damage condition.

In one or more embodiments of this specification, by combining the visual image obtained by the camera device with the physical attribute information obtained by physical detection, the label data required for model training is generated automatically, without manual labeling; pixel-level labeling is achieved, labeling efficiency and accuracy are improved, and a large amount of accurate labeled sample data can be supplied to the deep-learning-based training process to train a vehicle damage recognition model with higher recognition accuracy.

The physical attribute information obtained by laser lidar detection is the three-dimensional depth information of each pixel, from which the deformation of the damaged surface can be assessed, i.e. the positions of dents and breakage in the preset damage area can be identified with high accuracy. The physical attribute information obtained by infrared detection is the surface thermal imaging information of each pixel; since the infrared thermal signatures of different materials differ, the surface material distribution of the preset damage area can be determined from it, so the extent of scratch damage can be identified with high accuracy.

Further, since different physical detection technologies emphasize different aspects of damage assessment, combining them improves the accuracy of assessing the damaged surface. Accordingly, the physical detection device includes a laser lidar device and/or an infrared thermal imaging device: the laser lidar device is configured to scan the preset damage area with a laser beam to obtain three-dimensional depth information and transmit it to the image annotation device; the infrared thermal imaging device is configured to scan the preset damage area with infrared light to obtain surface thermal imaging information and transmit it to the image annotation device.

Further, considering that shooting conditions may affect the quality of the obtained images and physical attribute information and hence the accuracy of damage labeling of the preset damage area, the shooting conditions are adjusted according to the preset adjustment rules during collection so that multiple images and multiple pieces of physical attribute information are acquired under different conditions. The system therefore further includes an adjustable pan-tilt head, on which both the camera device and the physical detection device are mounted with their relative position kept unchanged; the adjustable pan-tilt head is configured to adjust the shooting conditions of the camera device and the physical detection device.
The camera device is configured to photograph the preset damage area on the target vehicle under different shooting conditions to obtain multiple vehicle damage images and transmit them to the image annotation device; the physical detection device is configured to scan the preset damage area by physical detection under different shooting conditions to obtain multiple pieces of physical attribute information and transmit them to the image annotation device.

The system further includes an illumination adjusting apparatus, configured to adjust the illumination parameters of the shooting environment in which the camera device is located. The system further includes a positioning device capable of centimeter-level precise positioning, configured to position the camera device relative to the target vehicle.

The image annotation system in one or more embodiments of this specification obtains a vehicle damage image of a preset damage area captured by a camera device, obtains physical attribute information produced by scanning the area with a physical detection method, and labels the image with damage according to that information to generate vehicle damage sample data. By combining the visual image with the physical attribute information, the label data required for model training is generated automatically, without manual labeling; pixel-level labeling is achieved, labeling efficiency and accuracy are improved, and a large amount of accurate labeled sample data can be supplied to the deep-learning-based training process to train a vehicle damage recognition model with higher recognition accuracy.

It should be noted that the embodiments concerning the image annotation system and the embodiments concerning the image annotation method in this specification are based on the same inventive concept; for specific implementation, reference may be made to the implementation of the corresponding method, and repeated parts are not described again.

Further, corresponding to the methods shown in FIG. 2 to FIG. 8 and based on the same technical concept, one or more embodiments of this specification further provide an image annotation apparatus for executing the above image annotation method, as shown in FIG. 11.

Image annotation apparatuses may differ considerably depending on configuration or performance, and may include one or more processors 1101 and a memory 1102, in which one or more applications or data may be stored. The memory 1102 may be transient or persistent storage. An application stored in the memory 1102 may include one or more modules (not shown in the figure), each of which may include a series of computer-executable instructions for the image annotation apparatus. Further, the processor 1101 may be configured to communicate with the memory 1102 and execute the series of computer-executable instructions in the memory 1102 on the apparatus. The apparatus may also include one or more power supplies 1103, one or more wired or wireless network interfaces 1104, one or more input/output interfaces 1105, one or more keyboards 1106, and so on.

In a specific embodiment, the image annotation apparatus includes a memory and one or more programs stored therein; the one or more programs may include one or more modules, each of which may include a series of computer-executable instructions for the apparatus; and the one or more programs, configured to be executed by one or more processors, include computer-executable instructions for: obtaining a vehicle damage image of the preset damage area on the target vehicle captured by the camera device; obtaining physical attribute information produced by scanning the preset damage area with a physical detection method; and labeling the vehicle damage image with damage according to the physical attribute information to generate vehicle damage sample data for training a vehicle damage recognition model.

In one or more embodiments of this specification, by combining the visual image obtained by the camera device with the physical attribute information obtained by physical detection, the label data required for model training is generated automatically, without manual labeling; pixel-level labeling is achieved, labeling efficiency and accuracy are improved, and a large amount of accurate labeled sample data can be supplied to the deep-learning-based training process to train a vehicle damage recognition model with higher recognition accuracy.

Optionally, when the computer-executable instructions are executed, obtaining the physical attribute information produced by scanning the preset damage area with a physical detection method includes: obtaining three-dimensional depth information produced by scanning the preset damage area with a laser lidar device; and/or obtaining surface thermal imaging information produced by scanning the preset damage area with an infrared thermal imaging device.

Optionally, when the computer-executable instructions are executed, obtaining the vehicle damage image of the preset damage area captured by the camera device includes: obtaining a set of vehicle damage images of the preset damage area, the set including multiple images captured under different shooting conditions; correspondingly, obtaining the physical attribute information includes: obtaining a set of physical attribute information for the preset damage area, the set including multiple pieces of physical attribute information scanned by physical detection under different shooting conditions.

Optionally, when the computer-executable instructions are executed, labeling the vehicle damage image with damage according to the physical attribute information and generating vehicle damage sample data includes: determining the damage condition of each pixel in the image according to the physical attribute information; and determining the per-pixel damage conditions and the image as the vehicle damage sample data for training a vehicle damage recognition model.
Optionally, when the computer-executable instructions are executed, the vehicle damage images of the preset damage area include multiple images captured under different shooting conditions, and determining the damage condition of each pixel according to the physical attribute information includes: for each vehicle damage image, determining the damage condition of each pixel in the image according to the physical attribute information obtained under the shooting condition corresponding to that image.

Optionally, when the computer-executable instructions are executed, after the vehicle damage sample data is generated, the instructions further include: inputting the sample data to a machine learning model based on a supervised learning mode; and training the model with a machine learning method based on the sample data to obtain the vehicle damage recognition model.

Optionally, when the computer-executable instructions are executed, the shooting conditions include at least one of: the shooting orientation of the camera device, the relative position of the camera device and the target vehicle, the illumination parameters of the shooting environment, and other on-site environmental factors that affect the visual characteristics of the damage area.

Optionally, when the computer-executable instructions are executed, the relative position of the camera device and the target vehicle is obtained by controlling the movement of the target vehicle with a positioning device capable of centimeter-level precise positioning.

The image annotation apparatus in one or more embodiments of this specification obtains a vehicle damage image of a preset damage area captured by a camera device, obtains physical attribute information produced by scanning the area with a physical detection method, and labels the image with damage according to that information to generate vehicle damage sample data. By combining the visual image with the physical attribute information, the label data required for model training is generated automatically, without manual labeling; pixel-level labeling is achieved, labeling efficiency and accuracy are improved, and a large amount of accurate labeled sample data can be supplied to the deep-learning-based training process to train a vehicle damage recognition model with higher recognition accuracy.

Further, corresponding to the methods shown in FIG. 2 to FIG. 8 and based on the same technical concept, one or more embodiments of this specification further provide a storage medium for storing computer-executable instructions. In a specific embodiment, the storage medium may be a USB flash drive, an optical disc, a hard disk, or the like, and the computer-executable instructions stored on it, when executed by a processor, implement the following flow: obtaining a vehicle damage image of the preset damage area on the target vehicle captured by the camera device; obtaining physical attribute information produced by scanning the preset damage area with a physical detection method; and labeling the vehicle damage image with damage according to the physical attribute information to generate vehicle damage sample data for training a vehicle damage recognition model.

In one or more embodiments of this specification, by combining the visual image obtained by the camera device with the physical attribute information obtained by physical detection, the label data required for model training is generated automatically, without manual labeling; pixel-level labeling is achieved, labeling efficiency and accuracy are improved, and a large amount of accurate labeled sample data can be supplied to the deep-learning-based training process to train a vehicle damage recognition model with higher recognition accuracy.

Optionally, when the computer-executable instructions stored on the storage medium are executed by a processor, obtaining the physical attribute information produced by scanning the preset damage area includes: obtaining three-dimensional depth information produced by scanning the preset damage area with a laser lidar device; and/or obtaining surface thermal imaging information produced by scanning the preset damage area with an infrared thermal imaging device.

Optionally, when the computer-executable instructions stored on the storage medium are executed by a processor, obtaining the vehicle damage image of the preset damage area includes: obtaining a set of vehicle damage images of the preset damage area, the set including multiple images captured under different shooting conditions; correspondingly, obtaining the physical attribute information includes: obtaining a set of physical attribute information for the preset damage area, the set including multiple pieces of physical attribute information scanned under different shooting conditions.

Optionally, when the computer-executable instructions stored on the storage medium are executed by a processor, labeling the vehicle damage image according to the physical attribute information and generating vehicle damage sample data includes: determining the damage condition of each pixel in the image according to the physical attribute information; and determining the per-pixel damage conditions and the image as the vehicle damage sample data for training a vehicle damage recognition model.

Optionally, when the computer-executable instructions stored on the storage medium are executed by a processor, the vehicle damage images of the preset damage area include multiple images captured under different shooting conditions, and determining the per-pixel damage condition includes: for each vehicle damage image, determining the damage condition of each pixel in the image according to the physical attribute information obtained under the shooting condition corresponding to that image.

Optionally, when the computer-executable instructions stored on the storage medium are executed by a processor, after the vehicle damage sample data is generated, the flow further includes: inputting the sample data to a machine learning model based on a supervised learning mode; and training the model with a machine learning method based on the sample data to obtain the vehicle damage recognition model.
Optionally, when the computer-executable instructions stored on the storage medium are executed by a processor, the shooting conditions include at least one of: the shooting orientation of the camera device, the relative position of the camera device and the target vehicle, the illumination parameters of the shooting environment, and other on-site environmental factors that affect the visual characteristics of the damage area.

Optionally, when the computer-executable instructions stored on the storage medium are executed by a processor, the relative position of the camera device and the target vehicle is obtained by controlling the movement of the target vehicle with a positioning device capable of centimeter-level precise positioning.

When executed by a processor, the computer-executable instructions stored on the storage medium in one or more embodiments of this specification obtain a vehicle damage image of a preset damage area captured by a camera device, obtain physical attribute information produced by scanning the area with a physical detection method, and label the image with damage according to that information to generate vehicle damage sample data. By combining the visual image with the physical attribute information, the label data required for model training is generated automatically, without manual labeling; pixel-level labeling is achieved, labeling efficiency and accuracy are improved, and a large amount of accurate labeled sample data can be supplied to the deep-learning-based training process to train a vehicle damage recognition model with higher recognition accuracy.

In the 1990s, an improvement of a technology could be clearly distinguished as an improvement in hardware (for example, an improvement of a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement of a method flow). With the development of technology, however, the improvement of many of today's method flows can be regarded as a direct improvement of a hardware circuit structure. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that the improvement of a method flow cannot be realized with a hardware entity module. For example, a programmable logic device (PLD), such as a field programmable gate array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. Designers program a digital system onto a single PLD by themselves, without asking a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, instead of making integrated circuit chips by hand, this programming is nowadays mostly realized with "logic compiler" software, which is similar to the software compiler used in program development; the source code to be compiled must be written in a particular programming language called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); the most commonly used at present are VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog. Those skilled in the art should also understand that a hardware circuit implementing a logical method flow can easily be obtained merely by slightly logically programming the method flow in one of the above hardware description languages and programming it into an integrated circuit.

A controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicone Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art also know that, in addition to implementing a controller purely as computer-readable program code, it is entirely possible to logically program the method steps so that the controller realizes the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller can therefore be regarded as a hardware component, and the devices included in it for realizing various functions can also be regarded as structures within the hardware component. Or even, the devices for realizing various functions can be regarded both as software modules implementing the method and as structures within the hardware component.
The systems, devices, modules, or units set forth in the above embodiments may specifically be implemented by a computer chip or an entity, or by a product having a certain function. A typical implementation device is a computer, which may for example be a personal computer, laptop, mobile phone, camera phone, smartphone, personal digital assistant, media player, navigation device, email device, game console, tablet, wearable device, or a combination of any of these devices.

For convenience of description, the above devices are described as divided by function into various units. Of course, when implementing one or more embodiments of this specification, the functions of the units may be realized in one or more pieces of software and/or hardware.

Those skilled in the art should understand that one or more embodiments of this specification may be provided as a method, a system, or a computer program product. Accordingly, one or more embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects; moreover, they may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk memory, CD-ROM, optical memory, and the like) containing computer-usable program code.

One or more embodiments of this specification are described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to the embodiments. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing device to produce a machine, so that the instructions executed by the processor produce a device for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a particular manner, so that the instructions stored in that memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are executed on it to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The memory may include non-permanent memory among computer-readable media, in forms such as random access memory (RAM) and/or non-volatile memory, for example read-only memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.

Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media such as modulated data signals and carrier waves.

It should also be noted that the terms "include", "comprise", and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of additional identical elements in the process, method, article, or device that includes it.

Those skilled in the art should understand that one or more embodiments of this specification may be provided as a method, a system, or a computer program product, and accordingly may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects, as well as the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk memory, CD-ROM, optical memory, and the like) containing computer-usable program code.

One or more embodiments of this specification may be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and so on that perform particular tasks or implement particular abstract data types. One or more embodiments may also be practiced in distributed computing environments, in which tasks are performed by remote processing devices connected through a communication network; in such environments, program modules may be located in both local and remote computer storage media, including storage devices.
The embodiments in this specification are described in a progressive manner; for identical or similar parts, the embodiments may refer to one another, and each embodiment focuses on its differences from the others. In particular, the system embodiment is described relatively simply because it is basically similar to the method embodiment; for relevant points, refer to the description of the method embodiment.

The above descriptions are merely one or more embodiments of this specification and are not intended to limit it. Those skilled in the art may make various modifications and variations to these embodiments; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of this specification shall fall within the scope of its claims.

To enable those skilled in the art to better understand the technical solutions in this specification, the technical solutions in one or more of its embodiments are described below clearly and completely in conjunction with the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of this specification; all other embodiments obtained by a person of ordinary skill in the art without creative work, based on the embodiments herein, fall within its scope of protection.

One or more embodiments of this specification provide an image labeling method, device, and system. By combining a visual image obtained by a camera device with physical attribute information obtained by a physical detection method, the annotation data needed for model training are generated automatically: there is no need to label the damage in car damage images manually, the images are labeled at pixel level, and both the efficiency and the accuracy of labeling improve, supplying the deep-learning model training process with a large volume of precise labeled samples from which a car damage recognition model with higher recognition accuracy can be trained.

FIG. 1 is a schematic diagram of an application scenario of the image labeling method provided by one or more embodiments of this specification. As shown in FIG. 1, the system includes a camera device, a physical detection device, and an image labeling device, with the camera device and the physical detection device both connected to the image labeling device. The camera device may be a digital camera or another device with a photographing function; the physical detection device may be a lidar device, an infrared thermal imaging device, or the like; and the image labeling device may be a background server used for annotating car damage images. The specific image labeling process is as follows. (1) The target vehicle with the preset damage area is driven into the designated position of the workbench on the work site. Specifically, the operator initially sets the data collection range according to the damaged part of the vehicle, that is, sets the initial shooting range of the camera device and the initial detection range of the physical detection device; for example, if the front bumper is damaged, that is, the preset damage area is the front hemisphere of the vehicle, the shooting range and the physical detection range are both set to the front hemisphere of the vehicle. (2) The camera device photographs the preset damage area on the target vehicle to obtain a car damage image and transmits it to the image labeling device; the camera device may be mounted on an adjustable gimbal, through which its shooting conditions can be adjusted. (3) The physical detection device scans the preset damage area by a physical detection method to obtain physical attribute information and transmits it to the image labeling device; the physical detection device is also mounted on the adjustable gimbal, and the relative position of the physical detection device and the camera device remains unchanged.
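The paired collection in steps (2) and (3), where the camera and the physical detector are triggered together at each gimbal pose so that each image stays matched with its scan, can be sketched as follows; `take_photo` and `scan_physical` are hypothetical stand-ins for the device drivers, not names from this specification.

```python
# Hypothetical sketch: one record per gimbal pose, pairing the car damage
# image with the physical scan taken under the same shooting condition.
def capture_session(poses, take_photo, scan_physical):
    records = {}
    for pose in poses:
        # both devices are triggered for the same pose, keeping them paired
        records[pose] = (take_photo(pose), scan_physical(pose))
    return records
```

Keying the records by pose makes the later per-condition labeling step a simple lookup.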
The car damage image and the corresponding physical attribute information are acquired simultaneously under the same shooting conditions. (4) The image labeling device labels the car damage image according to the acquired physical attribute information and generates car damage sample data for training the car damage recognition model. There is thus no need to label the damage in the car damage images manually, the images are labeled at pixel level, and both the efficiency and the accuracy of labeling improve, supplying the deep-learning model training process with a large volume of precise labeled samples from which a car damage recognition model with higher recognition accuracy can be trained.

FIG. 2 is a first schematic flowchart of an image labeling method provided by one or more embodiments of this specification; the method in FIG. 2 can be executed by the image labeling device in FIG. 1. As shown in FIG. 2, the method includes at least the following steps.

S201: acquire a car damage image of a preset damage area on a target vehicle captured by a camera device. Specifically, the camera device photographs the preset damage area on the target vehicle to obtain a visual image and transmits it to the image labeling device.

S202: acquire physical attribute information obtained by scanning the preset damage area with a physical detection method. Specifically, the physical detection device scans the preset damage area on the target vehicle to obtain physical attribute information for that area and transmits it to the image labeling device. The physical detection device may be a lidar device, in which case the corresponding physical detection method is lidar detection; it may be an infrared thermal imaging device, in which case the method is infrared detection; it may combine a lidar device and an infrared thermal imaging device, in which case the method combines lidar detection and infrared detection; or it may be any other device that scans and collects physical attribute information by a physical detection method.

S203: label the car damage image with damage information according to the acquired physical attribute information, and generate car damage sample data for training the car damage recognition model. The car damage sample data may include a car damage image and its annotation data, and the annotation data may include the damage condition of each pixel in the car damage image. Specifically, for each car damage image, the physical attribute information corresponding to that image is acquired, the damage condition of each pixel in the image is determined from that information, and the determined per-pixel damage conditions are used as the annotation data for the image. The damage condition of a pixel may be data characterizing whether the pixel is damaged, what kind of damage it has, or how severe the damage is.

In one or more embodiments of this specification, combining the visual image obtained by the camera device with the physical attribute information obtained by physical detection automatically generates the annotation data needed for model training, eliminates manual labeling, enables pixel-level damage labeling, and improves labeling efficiency and accuracy, thereby supplying deep-learning model training with a large volume of precise labeled samples from which a car damage recognition model with higher recognition accuracy can be trained.

The physical attribute information obtained by lidar detection is the three-dimensional depth of each pixel; from it the degree of deformation of the damaged surface can be evaluated, that is, the concave and deformed positions of the preset damage area can be identified with high accuracy. The physical attribute information obtained by infrared detection is the surface thermal imaging information of each pixel.
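As a rough illustration of how per-pixel deformation could be read off the lidar depth map, the sketch below (an assumption for illustration, not this specification's method) fits a least-squares reference plane to the depth values and flags pixels whose deviation exceeds a threshold as deformed.

```python
# Hypothetical sketch: per-pixel deformation from a lidar depth map.
# A plane fitted to the whole patch stands in for the undamaged surface;
# pixels deviating strongly from it are marked as dents/bulges.
import numpy as np

def deformation_map(depth, threshold=2.0):
    # depth: 2-D array of lidar depth values for the scanned patch
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.c_[xs.ravel(), ys.ravel(), np.ones(h * w)]
    coeffs, *_ = np.linalg.lstsq(A, depth.ravel(), rcond=None)  # fit plane
    reference = (A @ coeffs).reshape(h, w)
    deviation = depth - reference          # signed deviation from the plane
    return np.abs(deviation) > threshold   # boolean per-pixel deformation mask
```

A real damaged panel is curved rather than planar, so a fitted spline or a CAD reference surface would be a more faithful baseline; the thresholding idea is the same.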
Because different materials produce different infrared thermal images, the detected surface thermal imaging information can be combined to determine the surface material distribution of the preset damage area; the degree of scratching of the damaged surface can therefore be evaluated from the surface thermal imaging information, that is, the extent of scratch damage in the preset damage area can be identified with high accuracy. Further, different physical detection techniques emphasize different aspects of the damage assessment of the preset damage area, so combining them improves the accuracy of assessing the damage of the damaged surface. On this basis, as shown in FIG. 3 and taking the combination of lidar detection and infrared detection as an example, S202 above, acquiring physical attribute information obtained by scanning the preset damage area with a physical detection method, specifically includes the following.

S2021: acquire three-dimensional depth information obtained by scanning the preset damage area on the target vehicle with the lidar device. Specifically, the lidar device scans the preset damage area on the target vehicle to obtain the three-dimensional depth information of the area and transmits it to the image labeling device. As shown in FIG. 4a, the lidar device includes a first processing unit, a laser emitting unit, and a laser receiving unit. The laser emitting unit emits a laser beam (the detection signal) toward the preset damage area; the beam is reflected when it reaches the area, and the laser receiving unit receives the reflected beam (the target echo) returned from the area and transmits it to the first processing unit. The first processing unit compares the received target echo with the detection signal emitted toward the area and generates three-dimensional depth information for the preset damage area. From this information a three-dimensional surface map can be drawn that characterizes the depth of each pixel in the car damage image, so that emitting a laser beam detects the relative position of each point on the damaged surface of the preset damage area on the target vehicle.

S2022: acquire surface thermal imaging information obtained by scanning the preset damage area on the target vehicle with the infrared thermal imaging device. Specifically, the infrared thermal imaging device scans the preset damage area on the target vehicle to obtain the surface thermal imaging information of the area and transmits it to the image labeling device. As shown in FIG. 4b, the infrared thermal imaging device includes a second processing unit, an infrared emitting unit, and an infrared receiving unit. The infrared emitting unit emits an infrared beam (the detection signal) toward the preset damage area, and the beam is reflected when it reaches the area.
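To give a concrete, if simplified, picture of how radiant-energy differences could flag paint loss, the sketch below (hypothetical; the specification does not prescribe an algorithm) treats the robust median radiance of the patch as the painted-surface baseline and marks pixels deviating strongly from it as scratched.

```python
# Hypothetical sketch: scratch (paint-off) detection from the infrared
# surface map. Bare metal radiates differently from paint, so pixels far
# from the robust baseline radiance are flagged.
import numpy as np

def scratch_mask(thermal, k=3.0):
    # thermal: 2-D array of per-pixel radiant energy
    baseline = np.median(thermal)                           # painted-surface radiance
    spread = np.median(np.abs(thermal - baseline)) + 1e-9   # robust scale (MAD)
    return np.abs(thermal - baseline) > k * spread          # boolean scratch mask
```

Using the median and MAD rather than mean and standard deviation keeps a large scratched region from dragging the baseline toward itself.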
The infrared receiving unit receives the reflected beam (the target echo) returned from the preset damage area and transmits it to the second processing unit, which compares the received target echo with the detection signal emitted toward the area and generates surface thermal imaging information for the preset damage area. From this information a surface thermal image can be drawn that characterizes the radiant energy of each pixel in the car damage image, so that emitting an infrared beam reveals the material distribution of the damaged surface of the preset damage area on the target vehicle.

Correspondingly, when lidar detection and infrared detection are used together to assess the preset damage area, S203 above, labeling the car damage image according to the acquired physical attribute information and generating car damage sample data for training the car damage recognition model, specifically includes the following.

S2031: label the car damage image according to the acquired three-dimensional depth information and surface thermal imaging information, and generate car damage sample data for training the car damage recognition model. Specifically, the depth of each pixel in the car damage image is determined from the acquired three-dimensional depth information, and for each pixel its unevenness (that is, its deformation) is determined from its depth; the radiant energy of each pixel in the car damage image is determined from the acquired surface thermal imaging information, and for each pixel its scratch condition (that is, paint loss) is determined from its radiant energy. The determined unevenness and scratch condition of the pixels, together with the car damage image, are taken as the car damage sample data for training the car damage recognition model.

In one or more embodiments of this specification, combining the physical detection dimensions of lidar detection and infrared detection identifies the degree of deformation and the degree of scratching of the preset damage area comprehensively and simultaneously, which improves the accuracy of the damage labeling of the car damage image and, in turn, the recognition accuracy of the model trained on the labeled images.

Further, the shooting conditions may affect the quality of the acquired car damage images and physical attribute information and thereby reduce the accuracy of the damage labeling of the preset damage area. To improve the accuracy of the damage assessment, the shooting conditions are adjusted according to preset adjustment rules while the camera device collects the car damage images and the physical detection device collects the physical attribute information, so that multiple car damage images and multiple pieces of physical attribute information of the preset damage area are obtained under different shooting conditions. On this basis, as shown in FIG.
5, S201 above, acquiring the car damage image of the preset damage area on the target vehicle captured by the camera device, specifically includes the following.

S2011: acquire a set of car damage images of the preset damage area on the target vehicle, the set comprising multiple car damage images captured by the camera device under different shooting conditions. The shooting conditions include at least one of: the shooting orientation of the camera device, the relative position of the camera device and the target vehicle, the lighting parameters of the shooting environment, and other on-site environmental factors that affect the visual characteristics of the damage area. The shooting orientation may include the shooting angle and direction, and the lighting parameters may include the number of light sources and the lighting conditions; "different shooting conditions" therefore means that at least one of the shooting orientation of the camera device, the relative position of the camera device and the target vehicle, and the lighting parameters of the shooting environment differs, and each shooting condition corresponds to one car damage image of the preset damage area. In other words, all the acquired car damage images are actually photographed rather than derived from one original image by different image processing; the feature distribution of actually captured images matches real scenes better and therefore gives a better training effect for deep learning.

The preset adjustment rules may be determined from operator-entered settings for the step size and the changes in illumination, distance, and angle; for the same damage area, a group of hundreds to thousands of car damage pictures is photographed and labeled automatically. For example, for the same damage area, the camera is moved about 30 cm and rotated by 10 degrees at each step, and the illumination is raised from 500 lumens to 3000 lumens in increments of 100 lumens, adjusting the shooting conditions and collecting the car damage image and physical attribute information under each condition. Further, the preset adjustment rules may be tuned to the recognition accuracy of the car damage recognition model: when that accuracy is unsatisfactory, modifying the rules optimizes the shooting conditions and hence the final car damage sample data, further improving the recognition accuracy of the model.

Specifically, since the camera device in FIG. 1 is mounted on an adjustable gimbal, the control device adjusts the gimbal to change the camera's shooting orientation; and if a wheeled or crawler-type traveling mechanism controlled by the control device is installed under the gimbal, the gimbal can move forward, backward, left, and right on the work site according to control instructions, so the relative position of the camera device and the target vehicle can also be adjusted through the gimbal. The control device may be a separate control device or a control module installed in the image labeling device.

In addition, when the shooting conditions include the lighting parameters of the shooting environment, those parameters must be adjusted while the camera device collects the car damage images of the preset damage area and the physical detection device collects its physical attribute information. FIG. 6a shows a schematic diagram of a second application scenario of the image labeling method: a lighting adjustment device is installed at a specified position on the work site, and the control device controls the light intensity it emits according to preset shooting parameters, adjusting the lighting parameters of the shooting environment so that the camera device captures the corresponding car damage image under each set of lighting parameters. Correspondingly, for each car damage image the matching physical attribute information must be collected; for this acquisition process, the physical detection methods of lidar detection and infrared detection are again taken as an example.
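The example sweep described above (roughly 30 cm position steps, 10-degree angle increments, and illumination from 500 to 3000 lumens in 100-lumen steps) amounts to enumerating a grid of shooting conditions, one image plus one physical scan per grid point. A minimal sketch, with the number of positions and angles taken as assumptions:

```python
# Hypothetical sketch of a preset adjustment rule: enumerate shooting
# conditions as the Cartesian product of distance, angle, and illumination
# steps. Each (distance_cm, angle_deg, lumens) triple is one condition.
from itertools import product

def shooting_conditions(n_positions=4, step_cm=30, angle_step_deg=10,
                        lumens_range=(500, 3000), lumens_step=100):
    distances = [i * step_cm for i in range(n_positions)]
    angles = [i * angle_step_deg for i in range(n_positions)]
    lo, hi = lumens_range
    lumens = list(range(lo, hi + 1, lumens_step))
    return list(product(distances, angles, lumens))
```

Even this small grid yields several hundred conditions, consistent with the "hundreds to thousands" of automatically labeled pictures per damage area mentioned above.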
S2021 above, acquiring the three-dimensional depth information obtained by scanning the preset damage area on the target vehicle with the lidar device, specifically includes:

S20211: acquire a set of three-dimensional depth information for the preset damage area on the target vehicle, the set comprising multiple three-dimensional surface maps scanned by lidar detection under different shooting conditions. Specifically, for each shooting condition, not only is the car damage image of the preset damage area collected by the camera device, but the three-dimensional depth information (that is, the three-dimensional surface map) of the area is also collected by the lidar device, so each three-dimensional surface map in the set is obtained under one specific shooting condition.

Correspondingly, S2022 above, acquiring the surface thermal imaging information obtained by scanning the preset damage area on the target vehicle with the infrared thermal imaging device, specifically includes:

S20221: acquire a set of surface thermal imaging information for the preset damage area on the target vehicle, the set comprising multiple surface thermal images scanned by infrared detection under different shooting conditions. Specifically, for each shooting condition, besides the car damage image collected by the camera device and the three-dimensional surface map collected by the lidar device, the surface thermal imaging information (that is, the surface thermal image) of the preset damage area is collected by the infrared thermal imaging device, so each surface thermal image in the set is likewise obtained under one specific shooting condition.

Correspondingly, S203 above, labeling the car damage image according to the acquired three-dimensional depth information and surface thermal imaging information and generating car damage sample data for training the car damage recognition model, specifically includes:

S20311: label the car damage image under each shooting condition according to the acquired sets of three-dimensional depth information and surface thermal imaging information, and generate car damage sample data for training the car damage recognition model. In other words, for a preset damage area under a given shooting condition, the car damage image, the three-dimensional surface map, and the surface thermal image corresponding to that condition are acquired and the correspondence between the shooting condition, the car damage image, the three-dimensional surface map, and the surface thermal image is recorded; the car damage sample data for the preset damage area are then determined from the car damage image, three-dimensional surface map, and surface thermal image acquired under each shooting condition.
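The per-condition correspondence in S20311, where each shooting condition is tied to its image, three-dimensional surface map, and surface thermal image and yields one labeled sample, might be organized as in this hypothetical sketch; `label_fn` stands in for the pixel-labeling step described above.

```python
# Hypothetical sketch: turn per-condition records into labeled samples.
# records maps a shooting condition to its (image, depth_map, thermal_map);
# label_fn derives per-pixel damage labels from the two physical maps.
def build_samples(records, label_fn):
    samples = []
    for condition, (image, depth, thermal) in records.items():
        labels = label_fn(depth, thermal)  # per-pixel damage annotation
        samples.append({"condition": condition,
                        "image": image,
                        "labels": labels})
    return samples
```

Keeping the shooting condition inside each sample preserves the condition-to-image-to-scan correspondence that the labeling step relies on.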
To ensure that the physical attribute information obtained by the physical detection device matches the pixels of the car damage image obtained by the camera device one to one, the scanning range of the physical detection device is determined from the relative position of the camera device and the physical detection device and from the shooting range of each pixel in the camera's viewing window. Further, to ensure that the car damage image and the physical attribute information under the same shooting condition correspond one to one, the relative position of the physical detection device and the camera device is kept unchanged, and the control device adjusts the two synchronously through the adjustable gimbal; under a given shooting condition, the camera device therefore collects the car damage image of the preset damage area at the same moment as the physical detection device collects its physical attribute information. In addition, since the coordinates of the shooting point relative to the center point of the target vehicle and to the damage location are known, spatial geometric calculation can determine whether each pixel of the captured car damage image lies within a given damage range.

In one or more embodiments of this specification, the shooting conditions are adjusted according to the preset adjustment rules while the camera device collects the car damage images and the physical detection device collects the physical attribute information, so that multiple car damage images and multiple pieces of physical attribute information of the preset damage area are obtained under different shooting conditions. For each shooting condition, the car damage image captured under that condition is labeled according to the physical attribute information collected under the same condition, generating car damage sample data; a preset damage area thus yields car damage sample data under multiple shooting conditions, which improves the accuracy of the damage labeling of the preset damage area and, further, the recognition accuracy of the car damage recognition model trained on those sample data.

To guarantee the accuracy of the target vehicle's movement and improve the positioning accuracy of the relative position of the camera device and the target vehicle, that relative position is obtained by controlling the movement of the target vehicle with a positioning device capable of centimeter-level precise positioning. FIG. 6b shows a schematic diagram of a third application scenario of the image labeling method: a wireless positioning device is installed at a specified position on the work site and obtains the target vehicle's first position information before a movement and its second position information after the movement; the actual moving distance of the target vehicle is determined from the two, compared with the theoretical moving distance, and used to decide whether the vehicle's movement error satisfies a preset condition. If it does not, corresponding prompt information is sent to the control device so that the control device can position the target vehicle precisely. The wireless positioning device may be a positioning device based on radio signals, on Bluetooth signals, or on lidar.

As for the damage labeling process of the car damage image, as shown in FIG.
7, the above S203 performs damage labeling on the car damage image according to the acquired physical attribute information, and generates car damage sample data for training the car damage recognition model , Including: S2032, according to the acquired physical attribute information, determine the damage situation of each pixel in the car damage image of the preset damage area; specifically, for the case where the physical attribute information is three-dimensional depth information, according to the obtained 3D surface map for the preset damage area, determine the depth information of each pixel in the car damage image for the preset damage area; based on the depth information of each pixel, determine the deformation of each pixel in the car damage image The situation (ie, the concave-convex situation); For the case where the physical attribute information is the surface thermal imaging information, according to the acquired surface thermal imaging map for the preset damage area, determine each pixel in the car damage image for the preset damage area The radiant energy of each pixel; according to the radiant energy of each pixel, determine the scratch condition of each pixel in the car damage image (ie, paint drop); S2033, determine the determined damage of each pixel and the car damage image It is the sample data of the car damage used to train the car damage recognition model; specifically, the damage of each pixel in the car damage image is used as the annotation data for the car damage image, and the relationship between the car damage image and the annotation data is established. Correspondence, input the correspondence, car damage image, and annotation data to the machine learning model based on supervised learning mode to be trained. 
Among them, for the case of obtaining the car damage image and the physical attribute information corresponding to each car damage image under different shooting conditions, that is, the car damage image of the preset damage area on the target vehicle includes: shooting under different shooting conditions Multiple car damage images and the above physical attribute information for the preset damage area include: multiple pieces of physical attribute information scanned under different shooting conditions; correspondingly, the above S2032 determines the target based on the acquired physical attribute information The damage situation of each pixel in the car damage image of the preset damage area includes: For each car damage image, determine the car damage according to the physical attribute information obtained under the shooting conditions corresponding to the car damage image The damage of each pixel in the image. Specifically, in order to improve the accuracy of the assessment of the damage situation of the preset damage area, in the process of the camera device collecting the car damage image and the physical detection device collecting the physical attribute information, the shooting conditions are adjusted based on the preset adjustment rules to achieve Acquire multiple car damage images and multiple physical attribute information of the preset damage area under different shooting conditions. 
Therefore, when labeling a car damage image, it is necessary to determine, for each car damage image, the physical attribute information acquired under the shooting conditions corresponding to that image, and then to determine the damage situation of each pixel in the image based on that physical attribute information. The process of determining per-pixel damage from the physical attribute information is as follows: (1) where the physical attribute information is three-dimensional depth information, the depth information of each pixel in the car damage image is determined from the three-dimensional surface map acquired under the shooting conditions corresponding to that image, and the deformation of each pixel (that is, its unevenness) is determined from that depth information; (2) where the physical attribute information is surface thermal imaging information, the radiant energy of each pixel in the car damage image is determined from the surface thermography acquired under the shooting conditions corresponding to that image, and the scratch condition of each pixel (that is, paint loss) is determined from that radiant energy. Specifically, after the damage situation of each pixel in each car damage image has been determined, each car damage image captured under a given shooting condition, together with the damage situation of each of its pixels, is taken as one piece of car damage sample data. Thus, for the same preset damage area, multiple pieces of car damage sample data are obtained under different shooting conditions, where each piece of sample data includes the shooting condition, the car damage image, and multiple lines of annotation data identifying the damage.
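Assembling one piece of sample data per shooting condition, as described above, can be sketched as follows; the dictionary keys and example condition identifiers are illustrative assumptions, not names from the patent.

```python
# Sketch: one vehicle-damage sample per shooting condition. Each capture
# holds the image plus the depth/thermal maps scanned under that same
# condition, so the labels are derived from matching physical data.

def build_samples(captures, labeler):
    """captures: list of dicts, one per shooting condition.
    labeler: function(depth_map, thermal_map) -> per-pixel label map.
    Returns one sample (image + pixel labels) per condition for the
    same preset damage area."""
    samples = []
    for cap in captures:
        labels = labeler(cap["depth_map"], cap["thermal_map"])
        samples.append({
            "condition": cap["condition"],  # e.g. "0001"
            "image": cap["image"],
            "labels": labels,               # pixel-level annotation data
        })
    return samples
```

Each returned sample corresponds to one row of annotated training data for the same damage area under one shooting condition.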
Each line of annotation data includes statistical information on the damage of one pixel, such as its degree of deformation or scratching. Each piece of car damage sample data may also include overall car damage statistics for the car damage image, a repair plan for the preset damage area, a damage type for the preset damage area, and so on; the repair plan for the preset damage area may be confirmed based on tag information from relevant personnel. The basic data collected for a preset damage area of the target vehicle — that is, the correspondence among the shooting conditions, the car damage images, and the physical attribute information — is shown in Table 1 below: Table 1
[Table 1 is reproduced as an image in the original publication (Figure 108110062-A0304-0001); it lists, for each shooting condition, the corresponding car damage image, three-dimensional surface map, and surface thermal imaging map.]
Specifically, the shooting condition labeled 0001 and the shooting condition labeled 0002 differ in at least one of the shooting orientation of the camera device, the relative position of the camera device and the target vehicle, and the lighting parameters of the shooting environment; the car damage image labeled AAAA, the three-dimensional surface map labeled 1aaaa, and the surface thermal imaging map labeled 2aaaa were all acquired under the shooting condition labeled 0001. Based on the correspondence between the car damage images and the physical attribute information in Table 1 above, damage labeling is performed on the car damage images of the preset damage area of the target vehicle according to the physical attribute information. The resulting labeling information for the preset damage area is shown in Table 2 below: Table 2
[Table 2 is reproduced as an image in the original publication (Figure 108110062-A0304-0002); it lists the annotation data generated for each car damage image of the preset damage area.]
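The correspondences that Tables 1 and 2 describe can be sketched as record types. The field names below are assumptions for illustration; only the example identifiers (0001, AAAA, 1aaaa, 2aaaa) come from the text above.

```python
from dataclasses import dataclass, field

@dataclass
class Capture:            # one row of the Table 1 correspondence
    condition_id: str     # shooting condition, e.g. "0001"
    image_id: str         # car damage image, e.g. "AAAA"
    depth_map_id: str     # 3D surface map, e.g. "1aaaa"
    thermogram_id: str    # surface thermal imaging map, e.g. "2aaaa"

@dataclass
class Sample:             # one row of the Table 2 labeling information
    capture: Capture
    pixel_labels: list    # one annotation line per pixel
    overall_stats: dict = field(default_factory=dict)  # overall damage statistics
    repair_plan: str = ""  # may be confirmed from personnel tag information
    damage_type: str = ""  # e.g. "dent", "scratch"
```

A `Sample` ties one captured image and its physical attribute maps to the generated annotation data, overall statistics, repair plan, and damage type.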
Further, after automatic damage labeling has been performed on the actually captured car damage images and the car damage sample data has been generated, the car damage sample data is input into a preset machine learning model, and the machine learning model is trained to obtain the car damage recognition model, where the machine learning model may be one based on a supervised learning mode. Specifically, as shown in FIG. 8, after S203 — in which damage labeling is performed on the car damage image according to the acquired physical attribute information to generate the car damage sample data used to train the car damage recognition model — the method further includes: S204, inputting the generated car damage sample data into a preset machine learning model based on a supervised learning mode; and S205, training the machine learning model on the car damage sample data using a machine learning method to obtain the car damage recognition model. Specifically, the model parameters of the machine learning model based on the supervised learning mode are updated on the basis of the car damage sample data, yielding a car damage recognition model with updated parameters. Subsequently, after a car damage image to be recognized is acquired, the car damage recognition model is used to recognize the damage in that image, and the damage to the vehicle is determined automatically from the recognized damage situation. The image labeling method in one or more embodiments of this specification acquires a car damage image of a preset damage area captured by a camera device, acquires physical attribute information obtained by scanning the preset damage area with a physical detection method, and performs damage labeling on the car damage image according to the physical attribute information to generate car damage sample data.
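The patent only states that a supervised-mode machine learning model is trained on the generated samples (S204/S205). As a stand-in, the sketch below trains a per-pixel logistic-regression classifier on physical features; this is not the patent's actual model, merely one simple supervised learner.

```python
import numpy as np

def train(features, labels, lr=0.1, epochs=200):
    """features: (n_pixels, n_feats) array, e.g. depth and radiant energy.
    labels: (n_pixels,) 0/1 array (intact vs. damaged).
    Returns learned weights and bias of a logistic-regression classifier."""
    w = np.zeros(features.shape[1])
    b = 0.0
    for _ in range(epochs):
        z = features @ w + b
        p = 1.0 / (1.0 + np.exp(-z))            # sigmoid
        grad = p - labels                        # gradient of log-loss
        w -= lr * features.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

def predict(features, w, b):
    """Classify pixels as damaged (1) or intact (0)."""
    return (1.0 / (1.0 + np.exp(-(features @ w + b))) > 0.5).astype(int)
```

In the patent's terms, `train` corresponds to updating the model parameters on the car damage sample data (S205), and `predict` to recognizing damage in a car damage image to be recognized.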
By combining the visual image acquired by the camera device with the physical attribute information acquired by the physical detection method, the annotation data required for model training is generated automatically, without manual damage labeling of the car damage images, and pixel-level damage labeling of the car damage images is achieved. This improves both the efficiency and the accuracy of car damage image annotation, providing a large amount of accurate sample data for deep-learning-based model training, so that a car damage recognition model with higher recognition accuracy can be trained. Corresponding to the image labeling methods described in FIGS. 2 to 8 above, and based on the same technical concept, one or more embodiments of this specification also provide an image labeling device. FIG. 9a is a schematic diagram of the first modular composition of the image labeling device provided by one or more embodiments of this specification; the device is used to perform the image labeling methods described in FIGS. 2 to 8. As shown in FIG. 9a, the device includes: a first acquisition module 901, configured to acquire a car damage image of a preset damage area on the target vehicle captured by a camera device; a second acquisition module 902, configured to acquire physical attribute information obtained by scanning the preset damage area with a physical detection method; and an image labeling module 903, configured to perform damage labeling on the car damage image according to the physical attribute information and generate car damage sample data for training a car damage recognition model. In one or more embodiments of this specification, by combining the visual image acquired by the camera device with the physical attribute information acquired by the physical detection method, the annotation data required for model training is generated automatically: no manual damage labeling of the car damage images is needed, yet pixel-level damage labeling of the car damage images is achieved, improving the efficiency and accuracy of car damage image annotation and providing a large amount of accurately annotated sample data for deep-learning-based model training, so that a car damage recognition model with higher recognition accuracy can be trained. Optionally, the second acquisition module 902 is specifically configured to: acquire three-dimensional depth information obtained by scanning the preset damage area with a lidar device; and/or acquire surface thermal imaging information obtained by scanning the preset damage area with an infrared thermal imaging device. Optionally, the first acquisition module 901 is specifically configured to acquire a set of car damage images of a preset damage area on the target vehicle, where the set includes multiple car damage images captured by the camera device under different shooting conditions; correspondingly, the second acquisition module 902 is specifically configured to acquire a set of physical attribute information for the preset damage area, where the set includes multiple pieces of physical attribute information scanned with the physical detection method under different shooting conditions. Optionally, the image labeling module 903 is specifically configured to: determine the damage situation of each pixel in the car damage image based on the physical attribute information; and determine the damage situation of each pixel, together with the car damage image, as the car damage sample data for training the car damage recognition model.
Optionally, the car damage images of the preset damage area on the target vehicle include multiple car damage images captured under different shooting conditions, and the image labeling module 903 is further specifically configured to: for each car damage image, determine the damage situation of each pixel in that image according to the physical attribute information acquired under the shooting conditions corresponding to that image. Optionally, as shown in FIG. 9b, the device further includes a model training module 904 configured to: after the car damage sample data for training the car damage recognition model has been generated, input the car damage sample data into a machine learning model based on a supervised learning mode, and train the machine learning model on the car damage sample data using a machine learning method to obtain the car damage recognition model. Optionally, the shooting conditions include at least one of: the shooting orientation of the camera device, the relative position of the camera device and the target vehicle, the lighting parameters of the shooting environment, and other on-site environmental factors that affect the visual characteristics of the damaged area. Optionally, the relative position of the camera device and the target vehicle is obtained by controlling the movement of the target vehicle based on a positioning device with centimeter-level positioning accuracy. The image labeling device in one or more embodiments of this specification acquires a car damage image of a preset damage area captured by a camera device, acquires physical attribute information obtained by scanning the preset damage area with a physical detection method, and performs damage labeling on the car damage image according to the physical attribute information to generate car damage sample data.
By combining the visual image acquired by the camera device with the physical attribute information acquired by the physical detection method, the annotation data required for model training is generated automatically, without manual damage labeling of the car damage images, and pixel-level damage labeling is achieved, improving the efficiency and accuracy of car damage image annotation and providing a large amount of accurate sample data for deep-learning-based model training, so that a car damage recognition model with higher recognition accuracy can be trained. It should be noted that the embodiment of the image labeling device in this specification and the embodiment of the image labeling method in this specification are based on the same inventive concept; for the specific implementation of this embodiment, refer to the implementation of the corresponding image labeling method, and repeated descriptions are omitted here. Corresponding to the image labeling methods described in FIGS. 2 to 8 above, and based on the same technical concept, one or more embodiments of this specification also provide an image labeling system. FIG. 10 is a schematic structural diagram of an image labeling system provided by one or more embodiments of this specification. As shown in FIG. 10, the system includes: a camera device 10, a physical detection device 20, and an image labeling device 30 as shown in FIGS. 9a and 9b, where the camera device 10 and the physical detection device 20 are both connected to the image labeling device 30. The camera device 10 is configured to capture a car damage image of a preset damage area on the target vehicle and transmit the car damage image to the image labeling device 30. The physical detection device 20 is configured to obtain physical attribute information by scanning the preset damage area with a physical detection method and transmit the physical attribute information to the image labeling device 30. The image labeling device 30 is configured to receive the car damage image and the physical attribute information and, according to the car damage image and the physical attribute information, generate car damage sample data for training a car damage recognition model. For the case where the image labeling device and the model training device are deployed on the same server, after the car damage sample data for training the car damage recognition model has been generated, the image labeling device 30 is further configured to input the generated car damage sample data into a machine learning model based on a supervised learning mode and to train that machine learning model on the car damage sample data using a machine learning method to obtain the car damage recognition model. Specifically, the model parameters of the machine learning model based on the supervised learning mode are updated on the basis of the car damage sample data, yielding a car damage recognition model with updated parameters. Subsequently, after a car damage image to be recognized is acquired, the car damage recognition model is used to recognize the damage in that image, and the damage to the vehicle is determined automatically from the recognized damage situation.
In one or more embodiments of this specification, by combining the visual image acquired by the camera device with the physical attribute information acquired by the physical detection method, the annotation data required for model training is generated automatically: no manual damage labeling of the car damage images is needed, yet pixel-level damage labeling is achieved, improving the efficiency and accuracy of car damage image annotation and providing a large amount of accurately annotated sample data for deep-learning-based model training, so that a car damage recognition model with higher recognition accuracy can be trained. The physical attribute information obtained by lidar detection is the three-dimensional depth information of each pixel; based on this depth information, the degree of deformation of the damaged surface can be evaluated, that is, the dented and deformed positions of the preset damage area can be identified with high accuracy. The physical attribute information obtained by infrared detection is the surface thermal imaging information of each pixel; because different materials differ in their infrared thermal signatures, the detected surface thermal imaging information can be used to determine the surface material distribution of the preset damage area, so the degree of scratching of the damaged surface can be evaluated from the surface thermal imaging information, that is, the extent of scratch damage in the preset damage area can be identified with high accuracy. Further, different physical detection techniques emphasize different aspects of the damage assessment of the preset damage area.
Therefore, combining different physical detection techniques to evaluate the damage of the preset damage area can improve the accuracy of assessing the damage of the damaged surface. The above physical detection device includes a lidar device and/or an infrared thermal imaging device: the lidar device is configured to scan the preset damage area with a laser beam to obtain three-dimensional depth information and transmit the three-dimensional depth information to the image labeling device; the infrared thermal imaging device is configured to scan the preset damage area with infrared radiation to obtain surface thermal imaging information and transmit the surface thermal imaging information to the image labeling device. Further, considering that the shooting conditions may affect the quality of the acquired car damage images and physical attribute information and thereby reduce the accuracy of the damage labeling of the preset damage area, the shooting conditions are adjusted based on preset adjustment rules while the camera device collects car damage images and the physical detection device collects physical attribute information, so that multiple car damage images and multiple pieces of physical attribute information of the preset damage area can be acquired under different shooting conditions. To this end, the system further includes an adjustable gimbal, on which both the camera device and the physical detection device are mounted so that their relative position remains unchanged; the adjustable gimbal is configured to adjust the shooting conditions of the camera device and the physical detection device. The camera device is configured to capture the preset damage area on the target vehicle under different shooting conditions to obtain multiple car damage images and transmit the multiple car damage images to the image labeling device; the physical detection device is configured to scan the preset damage area with the physical detection method under different shooting conditions to obtain multiple pieces of physical attribute information and transmit the multiple pieces of physical attribute information to the image labeling device. The system further includes a light adjustment device, configured to adjust the lighting parameters of the shooting environment in which the camera device is located, and a positioning device with centimeter-level positioning accuracy, configured to position the camera device relative to the target vehicle. The image labeling system in one or more embodiments of this specification acquires a car damage image of a preset damage area captured by a camera device, acquires physical attribute information obtained by scanning the preset damage area with a physical detection method, and performs damage labeling on the car damage image according to the physical attribute information to generate car damage sample data. By combining the visual image acquired by the camera device with the physical attribute information acquired by the physical detection method, the annotation data required for model training is generated automatically, without manual damage labeling of the car damage images.
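The acquisition loop driven by the adjustable gimbal and light adjustment device described above can be sketched as follows. All device interfaces here (`gimbal`, `lights`, `camera`, `scanner`) are hypothetical stand-ins, as are the rule field names; the patent does not specify a control API.

```python
# Sketch: step through preset adjustment rules, capturing one image plus
# one set of physical attribute maps per resulting shooting condition.

def acquire(gimbal, lights, camera, scanner, adjustment_rules):
    """Returns one record (condition id, image, depth map, thermogram)
    per preset adjustment rule, i.e. per shooting condition."""
    records = []
    for rule in adjustment_rules:
        gimbal.move_to(rule["orientation"])  # shooting orientation / position
        lights.set(rule["lighting"])         # lighting parameters
        records.append({
            "condition": rule["id"],
            "image": camera.capture(),
            "depth_map": scanner.scan_depth(),     # lidar 3D surface map
            "thermogram": scanner.scan_thermal(),  # infrared thermal map
        })
    return records
```

Because the camera and scanner sit on the same gimbal, each record's image and attribute maps share one shooting condition, which is what lets the labeling step match them up later.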
Pixel-level damage labeling of the car damage images is also achieved, which improves the efficiency and accuracy of car damage image annotation and provides a large amount of accurate sample data for deep-learning-based model training, so that a car damage recognition model with higher recognition accuracy can be trained. It should be noted that the embodiment of the image labeling system in this specification and the embodiment of the image labeling method in this specification are based on the same inventive concept; for the specific implementation of this embodiment, refer to the implementation of the corresponding image labeling method, and repeated descriptions are omitted here. Further, corresponding to the methods shown in FIGS. 2 to 8 above, and based on the same technical concept, one or more embodiments of this specification also provide an image labeling device for performing the above image labeling methods, as shown in FIG. 11. Image labeling devices may differ considerably in configuration or performance and may include one or more processors 1101 and a memory 1102, in which one or more applications or data may be stored. The memory 1102 may provide short-term storage or persistent storage. An application stored in the memory 1102 may include one or more modules (not shown), and each module may include a series of computer-executable instructions for the image labeling device. Furthermore, the processor 1101 may be configured to communicate with the memory 1102 and execute, on the image labeling device, the series of computer-executable instructions in the memory 1102. The image labeling device may further include one or more power supplies 1103, one or more wired or wireless network interfaces 1104, one or more input/output interfaces 1105, one or more keyboards 1106, and so on.
In a specific embodiment, the image labeling device includes a memory and one or more programs stored in the memory, where the one or more programs may include one or more modules, each module may include a series of computer-executable instructions for the image labeling device, and the one or more programs are configured to be executed by one or more processors and include computer-executable instructions for: acquiring a car damage image of a preset damage area on the target vehicle captured by a camera device; acquiring physical attribute information obtained by scanning the preset damage area with a physical detection method; and performing damage labeling on the car damage image according to the physical attribute information to generate car damage sample data for training a car damage recognition model. In one or more embodiments of this specification, by combining the visual image acquired by the camera device with the physical attribute information acquired by the physical detection method, the annotation data required for model training is generated automatically: no manual damage labeling of the car damage images is needed, yet pixel-level damage labeling is achieved, improving the efficiency and accuracy of car damage image annotation and providing a large amount of accurately annotated sample data for deep-learning-based model training, so that a car damage recognition model with higher recognition accuracy can be trained.
Optionally, when the computer-executable instructions are executed, acquiring the physical attribute information obtained by scanning the preset damage area with a physical detection method includes: acquiring three-dimensional depth information obtained by scanning the preset damage area with a lidar device; and/or acquiring surface thermal imaging information obtained by scanning the preset damage area with an infrared thermal imaging device. Optionally, when the computer-executable instructions are executed, acquiring the car damage image of the preset damage area on the target vehicle captured by the camera device includes: acquiring a set of car damage images of the preset damage area on the target vehicle, where the set includes multiple car damage images captured by the camera device under different shooting conditions; correspondingly, acquiring the physical attribute information obtained by scanning the preset damage area with the physical detection method includes: acquiring a set of physical attribute information for the preset damage area, where the set includes multiple pieces of physical attribute information scanned with the physical detection method under different shooting conditions. Optionally, when the computer-executable instructions are executed, performing damage labeling on the car damage image according to the physical attribute information to generate car damage sample data for training a car damage recognition model includes: determining the damage situation of each pixel in the car damage image based on the physical attribute information; and determining the damage situation of each pixel, together with the car damage image, as the car damage sample data for training the car damage recognition model.
Optionally, when the computer-executable instructions are executed, the car damage images of the preset damage area on the target vehicle include multiple car damage images captured under different shooting conditions, and determining the damage situation of each pixel in the car damage image based on the physical attribute information includes: for each car damage image, determining the damage situation of each pixel in that image based on the physical attribute information acquired under the shooting conditions corresponding to that image. Optionally, when the computer-executable instructions are executed, after the car damage sample data for training the car damage recognition model has been generated, the method further includes: inputting the car damage sample data into a machine learning model based on a supervised learning mode, and training the machine learning model on the car damage sample data using a machine learning method to obtain the car damage recognition model. Optionally, when the computer-executable instructions are executed, the shooting conditions include at least one of: the shooting orientation of the camera device, the relative position of the camera device and the target vehicle, the lighting parameters of the shooting environment, and other on-site environmental factors that affect the visual characteristics of the damaged area. Optionally, when the computer-executable instructions are executed, the relative position of the camera device and the target vehicle is obtained by controlling the movement of the target vehicle based on a positioning device with centimeter-level positioning accuracy.
The image labeling device in one or more embodiments of this specification acquires a car damage image of a preset damage area captured by a camera device, acquires physical attribute information obtained by scanning the preset damage area with a physical detection method, and performs damage labeling on the car damage image according to the physical attribute information to generate car damage sample data. By combining the visual image acquired by the camera device with the physical attribute information acquired by the physical detection method, the annotation data required for model training is generated automatically, without manual damage labeling of the car damage images, and pixel-level damage labeling is achieved, improving the efficiency and accuracy of car damage image annotation and providing a large amount of accurate sample data for deep-learning-based model training, so that a car damage recognition model with higher recognition accuracy can be trained. Further, corresponding to the methods shown in FIGS. 2 to 8 above, and based on the same technical concept, one or more embodiments of this specification also provide a storage medium for storing computer-executable instructions. In a specific embodiment, the storage medium may be a USB flash drive, an optical disc, a hard drive, or the like. When the computer-executable instructions stored in the storage medium are executed by a processor, the following process is implemented: acquiring a car damage image of a preset damage area on the target vehicle captured by a camera device; acquiring physical attribute information obtained by scanning the preset damage area with a physical detection method; and performing damage labeling on the car damage image according to the physical attribute information to generate car damage sample data for training a car damage recognition model.
In one or more embodiments of this specification, by combining the visual image acquired by the camera device with the physical attribute information acquired by the physical detection method, the annotation data required for model training is generated automatically: no manual damage labeling of the car damage images is needed, yet pixel-level damage labeling is achieved, improving the efficiency and accuracy of car damage image annotation and providing a large amount of accurately annotated sample data for deep-learning-based model training, so that a car damage recognition model with higher recognition accuracy can be trained. Optionally, when the computer-executable instructions stored in the storage medium are executed by the processor, acquiring the physical attribute information obtained by scanning the preset damage area with a physical detection method includes: acquiring three-dimensional depth information obtained by scanning the preset damage area with a lidar device; and/or acquiring surface thermal imaging information obtained by scanning the preset damage area with an infrared thermal imaging device.
Optionally, when the computer-executable instructions stored in the storage medium are executed by the processor, acquiring the car damage image of the preset damage area on the target vehicle captured by the camera device includes: acquiring a set of car damage images of the preset damage area, where the set includes multiple car damage images captured by the camera device under different shooting conditions; correspondingly, acquiring the physical attribute information obtained by scanning the preset damage area with the physical detection method includes: acquiring a set of physical attribute information for the preset damage area, where the set includes multiple pieces of physical attribute information scanned with the physical detection method under different shooting conditions. Optionally, when the computer-executable instructions stored in the storage medium are executed by the processor, performing damage labeling on the car damage image according to the physical attribute information to generate car damage sample data for training a car damage recognition model includes: determining the damage situation of each pixel in the car damage image based on the physical attribute information; and determining the damage situation of each pixel, together with the car damage image, as the car damage sample data for training the car damage recognition model. Optionally, when the computer-executable instructions stored in the storage medium are executed by the processor, the car damage images of the preset damage area on the target vehicle include multiple car damage images captured under different shooting conditions, and determining the damage situation of each pixel in the car damage image based on the physical attribute information includes: for each car damage image, determining the damage situation of each pixel in that image based on the physical attribute information acquired under the shooting conditions corresponding to that image.
Optionally, when the computer-executable instructions stored in the storage medium are executed by the processor, after the car damage sample data for training the car damage recognition model is generated, the method further includes: inputting the car damage sample data into a machine learning model based on a supervised learning mode; and training the machine learning model based on the car damage sample data using a machine learning method to obtain the car damage recognition model. Optionally, when the computer-executable instructions stored in the storage medium are executed by the processor, the shooting conditions include at least one of: the shooting orientation of the camera device, the relative position of the camera device and the target vehicle, the lighting parameters of the shooting environment, and other on-site environmental factors that affect the visual characteristics of the damage area. Optionally, when the computer-executable instructions stored in the storage medium are executed by the processor, the relative position of the camera device and the target vehicle is obtained by controlling the movement of the target vehicle based on a positioning device with centimeter-level positioning accuracy. When the computer-executable instructions stored in the storage medium in one or more embodiments of this specification are executed by the processor, the car damage image of the preset damage area captured by the camera device is acquired; the physical attribute information obtained by scanning the preset damage area using a physical detection method is acquired; and damage in the car damage image is labeled according to the physical attribute information to generate car damage sample data.
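The supervised training step above can be sketched as follows. The specification trains a car damage recognition model on the automatically labeled samples; as a stand-in for the deep model, this hypothetical example uses a simple scikit-learn classifier on synthetic per-pixel features (image intensity plus lidar depth deviation), with labels taken from the automatically generated damage mask. All data and feature choices here are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-pixel training set: each row pairs a pixel's image
# intensity with its lidar depth deviation (mm); labels come from the
# automatically generated damage mask (1 = damaged, 0 = intact).
rng = np.random.default_rng(0)
intact = np.column_stack([rng.normal(0.6, 0.05, 200),   # intensity
                          rng.normal(0.0, 0.2, 200)])   # depth deviation
damaged = np.column_stack([rng.normal(0.4, 0.05, 200),
                           rng.normal(5.0, 0.5, 200)])
X = np.vstack([intact, damaged])
y = np.array([0] * 200 + [1] * 200)

# Supervised learning on the auto-labeled samples; a production system
# would train a deep segmentation network on whole images instead.
model = LogisticRegression().fit(X, y)
accuracy = model.score(X, y)
```

The point of the sketch is the data flow: the physically derived labels replace manual annotation, and any supervised learner can consume the resulting (features, label) pairs.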
By combining the visual image captured by the camera device with the physical attribute information obtained through a physical detection method, the labeling data required to train a model is generated automatically, and pixel-level damage labeling of car damage images is achieved without manual annotation. This improves the efficiency and accuracy of car damage image labeling and supplies a large volume of accurately labeled sample data for deep-learning-based model training, so that a damage recognition model with higher recognition accuracy can be trained. In the 1990s, an improvement to a technology could be clearly classified as either a hardware improvement (for example, an improvement to a circuit structure such as a diode, transistor, or switch) or a software improvement (an improvement to a method flow). With the development of technology, however, improvements to many of today's method flows can be regarded as direct improvements to hardware circuit structures: designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (PLD), such as a field programmable gate array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a PLD by programming it, without asking a chip manufacturer to design and fabricate a dedicated integrated circuit chip.
Moreover, instead of manually fabricating an integrated circuit chip, this kind of programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development: the source code to be compiled must likewise be written in a specific programming language, called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. Those skilled in the art will also appreciate that a hardware circuit implementing a logical method flow can easily be obtained simply by coding the method flow in one of the above hardware description languages and programming it into an integrated circuit. The controller can be implemented in any suitable manner; for example, it can take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller can also be implemented as part of the control logic of a memory.
Those skilled in the art also know that, in addition to implementing a controller purely in computer-readable program code, the method steps can be logically programmed so that the controller achieves the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller can therefore be regarded as a hardware component, and the devices included in it for implementing various functions can be regarded as structures within the hardware component; the devices for implementing various functions can even be regarded both as software modules implementing the method and as structures within the hardware component. The systems, devices, modules, or units described in the above embodiments may be implemented by a computer chip or entity, or by a product with certain functions. A typical implementation device is a computer, which may be, for example, a personal computer, a laptop computer, a mobile phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an e-mail device, a game console, a tablet computer, a wearable device, or a combination of any of these devices. For convenience of description, the above devices are described with their functions divided into various units. Of course, when implementing one or more embodiments of this specification, the functions of the units may be implemented in one or more pieces of software and/or hardware. Those skilled in the art should understand that one or more embodiments of this specification may be provided as a method, a system, or a computer program product. Therefore, one or more embodiments of this specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware.
In addition, one or more embodiments of this specification may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk memory, CD-ROM, and optical memory) containing computer-usable program code. One or more embodiments of this specification are described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to one or more embodiments of this specification. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram. These computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions can also be loaded onto a computer or other programmable data processing device, so that a series of operating steps are performed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram. In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The memory may include volatile memory, random access memory (RAM), and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium. Computer-readable media include permanent and non-permanent, removable and non-removable media, and can store information by any method or technology. The information can be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible to a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "comprise", "include", and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, commodity, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, commodity, or device. Without further limitation, an element defined by the phrase "including a(n) ..." does not exclude the existence of other identical elements in the process, method, commodity, or device that includes the element. Those skilled in the art should understand that one or more embodiments of this specification may be provided as a method, a system, or a computer program product, and may therefore take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. In addition, one or more embodiments of this specification may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk memory, CD-ROM, and optical memory) containing computer-usable program code. One or more embodiments of this specification can be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform specific tasks or implement specific abstract data types. One or more embodiments of this specification can also be practiced in distributed computing environments, in which tasks are performed by remote processing devices connected via a communication network; in a distributed computing environment, program modules can be located in local and remote computer storage media, including storage devices. The embodiments in this specification are described in a progressive manner.
The same or similar parts of the embodiments can be referred to one another, and each embodiment focuses on its differences from the others. In particular, since the system embodiments are basically similar to the method embodiments, their description is relatively brief; for relevant details, refer to the description of the method embodiments. The above are only one or more embodiments of this specification and are not intended to limit it. For those skilled in the art, one or more embodiments of this specification may have various modifications and changes; any modification, equivalent replacement, or improvement made within the spirit and principles of one or more embodiments of this specification shall fall within the scope of the claims of this specification.

S201~S203: steps
S2021~S2031: steps
S2011~S20311: steps
901: first acquisition module
902: second acquisition module
903: image labeling module
904: model training module
10: camera device
20: physical detection device
30: image labeling device
1101: processor
1102: memory
1103: power supply
1104: wired or wireless network interface
1105: input/output interface
1106: keyboard

In order to more clearly explain one or more embodiments of this specification or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some of the embodiments recorded in this specification; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic diagram of a first application scenario of the image labeling method provided by one or more embodiments of this specification;
FIG. 2 is a first schematic flowchart of the image labeling method provided by one or more embodiments of this specification;
FIG. 3 is a second schematic flowchart of the image labeling method provided by one or more embodiments of this specification;
FIG. 4a is a schematic diagram of the principle by which a laser lidar device collects a three-dimensional surface map in the image labeling method provided by one or more embodiments of this specification;
FIG. 4b is a schematic diagram of the principle by which an infrared thermal imaging device collects a surface thermal image in the image labeling method provided by one or more embodiments of this specification;
FIG. 5 is a third schematic flowchart of the image labeling method provided by one or more embodiments of this specification;
FIG. 6a is a schematic diagram of a second application scenario of the image labeling method provided by one or more embodiments of this specification;
FIG. 6b is a schematic diagram of a third application scenario of the image labeling method provided by one or more embodiments of this specification;
FIG. 7 is a fourth schematic flowchart of the image labeling method provided by one or more embodiments of this specification;
FIG. 8 is a fifth schematic flowchart of the image labeling method provided by one or more embodiments of this specification;
FIG. 9a is a schematic diagram of the first module composition of the image labeling device provided by one or more embodiments of this specification;
FIG. 9b is a schematic diagram of the second module composition of the image labeling device provided by one or more embodiments of this specification;
FIG. 10 is a schematic structural diagram of the image labeling system provided by one or more embodiments of this specification;
FIG. 11 is a schematic structural diagram of the image labeling apparatus provided by one or more embodiments of this specification.

Claims (23)

一種圖像標註方法,其特徵在於,包括: 獲取利用攝像裝置拍攝得到的目標車輛上預設損傷區域的車損圖像;以及, 獲取基於物理探測方式針對該預設損傷區域進行掃描得到的物理屬性資訊; 根據該物理屬性資訊對該車損圖像進行損傷標註,產生用於訓練車損識別模型的車損樣本資料。 An image annotation method, characterized in that it includes: Obtaining a car damage image of the preset damage area on the target vehicle captured by the camera device; and, Obtain physical property information obtained by scanning the preset damage area based on physical detection methods; According to the physical attribute information, the damage image of the vehicle damage image is labeled, and the vehicle damage sample data for training the vehicle damage recognition model is generated. 根據申請專利範圍第1項所述的方法,其中,該獲取基於物理探測方式針對該預設損傷區域進行掃描得到的物理屬性資訊,包括: 獲取利用雷射光雷達裝置針對該預設損傷區域進行掃描得到的三維深度資訊; 和/或, 獲取利用紅外線熱成像裝置針對該預設損傷區域進行掃描得到的表面熱成像資訊。The method according to item 1 of the patent application scope, wherein the acquiring physical attribute information obtained by scanning the preset damage area based on a physical detection method includes: Obtain the three-dimensional depth information obtained by scanning the preset damage area using a laser radar device; and / or, Obtain surface thermal imaging information obtained by scanning the preset damage area using an infrared thermal imaging device. 
根據申請專利範圍第1項所述的方法,其中,該獲取利用攝像裝置拍攝得到的目標車輛上預設損傷區域的車損圖像,包括: 獲取針對目標車輛上預設損傷區域的車損圖像集合,其中,該車損圖像集合包括:利用攝像裝置在不同拍攝條件下拍攝得到的多張車損圖像; 對應的,該獲取基於物理探測方式針對該預設損傷區域進行掃描得到的物理屬性資訊,包括: 獲取針對該預設損傷區域的物理屬性資訊集合,其中,該物理屬性資訊集合包括:利用物理探測方式在不同拍攝條件下掃描得到的多個物理屬性資訊。The method according to item 1 of the scope of the patent application, wherein the acquisition of the vehicle damage image of the preset damage area on the target vehicle captured by the camera device includes: Acquire a set of car damage images for a preset damage area on the target vehicle, where the set of car damage images includes: multiple car damage images captured by the camera device under different shooting conditions; Correspondingly, the acquiring physical attribute information obtained by scanning the preset damage area based on the physical detection method includes: Obtain a physical attribute information set for the preset damage area, where the physical attribute information set includes: multiple physical attribute information scanned under different shooting conditions using a physical detection method. 根據申請專利範圍第1項所述的方法,其中,該根據該物理屬性資訊對該車損圖像進行損傷標註,產生用於訓練車損識別模型的車損樣本資料,包括: 根據該物理屬性資訊,確定該車損圖像中各像素點的損傷情況; 將該各像素點的損傷情況和該車損圖像確定為用於訓練車損識別模型的車損樣本資料。The method according to item 1 of the patent application scope, wherein the damage annotation is performed on the vehicle damage image according to the physical attribute information to generate vehicle damage sample data for training the vehicle damage recognition model, including: According to the physical attribute information, determine the damage of each pixel in the car damage image; The damage situation of each pixel and the vehicle damage image are determined as vehicle damage sample data for training the vehicle damage recognition model. 
根據申請專利範圍第4項所述的方法,其中,該目標車輛上預設損傷區域的車損圖像包括:在不同拍攝條件下拍攝得到的多張車損圖像; 該根據該物理屬性資訊,確定該車損圖像中各像素點的損傷情況,包括: 針對每張該車損圖像,根據在該車損圖像對應的拍攝條件下得到的該物理屬性資訊,確定該車損圖像中各像素點的損傷情況。The method according to item 4 of the patent application scope, wherein the car damage image of the preset damage area on the target vehicle includes: multiple car damage images captured under different shooting conditions; According to the physical attribute information, the damage of each pixel in the car damage image is determined, including: For each of the car damage images, the damage of each pixel in the car damage image is determined according to the physical attribute information obtained under the shooting conditions corresponding to the car damage image. 根據申請專利範圍第1項所述的方法,其中,在產生用於訓練車損識別模型的車損樣本資料之後,還包括: 將該車損樣本資料輸入至基於有監督學習模式的機器學習模型; 利用機器學習方法並基於該車損樣本資料對該機器學習模型進行訓練,得到車損識別模型。The method according to item 1 of the patent application scope, wherein, after generating the vehicle damage sample data for training the vehicle damage identification model, the method further includes: Input the vehicle damage sample data into a machine learning model based on supervised learning mode; Using the machine learning method and training the machine learning model based on the vehicle damage sample data, a vehicle damage recognition model is obtained. 根據申請專利範圍第3項所述的方法,其中,該拍攝條件包括:該攝像裝置的拍攝方位、該攝像裝置與目標車輛的相對位置、拍攝環境的光照參數、以及其他影響損傷區域視覺特徵的現場環境因素中至少一種。The method according to item 3 of the patent application scope, wherein the shooting conditions include: the shooting orientation of the camera device, the relative position of the camera device and the target vehicle, the lighting parameters of the shooting environment, and other factors that affect the visual characteristics of the damaged area At least one of on-site environmental factors. 
根據申請專利範圍第7項所述的方法,其中,該攝像裝置與目標車輛的相對位置是基於具備釐米級精確定位能力的定位裝置對該目標車輛的移動進行控制得到的。The method according to item 7 of the patent application scope, wherein the relative position of the camera device and the target vehicle is obtained based on the positioning device with a centimeter-level precise positioning capability to control the movement of the target vehicle. 一種圖像標註裝置,其特徵在於,包括: 第一獲取模組,用於獲取利用攝像裝置拍攝得到的目標車輛上預設損傷區域的車損圖像;以及, 第二獲取模組,用於獲取基於物理探測方式針對該預設損傷區域進行掃描得到的物理屬性資訊; 圖像標註模組,用於根據該物理屬性資訊對該車損圖像進行損傷標註,產生用於訓練車損識別模型的車損樣本資料。An image tagging device is characterized by comprising: A first acquisition module, configured to acquire a car damage image of a preset damage area on the target vehicle captured by the camera device; and, A second acquiring module, configured to acquire physical attribute information obtained by scanning the preset damage area based on a physical detection method; The image labeling module is used to mark the damage of the car damage image according to the physical attribute information, and generate car damage sample data for training the car damage recognition model. 根據申請專利範圍第9項所述的裝置,其中,該第二獲取模組,具體用於: 獲取利用雷射光雷達裝置針對該預設損傷區域進行掃描得到的三維深度資訊; 和/或, 獲取利用紅外線熱成像裝置針對該預設損傷區域進行掃描得到的表面熱成像資訊。The device according to item 9 of the patent application scope, wherein the second acquisition module is specifically used for: Obtain the three-dimensional depth information obtained by scanning the preset damage area using a laser radar device; and / or, Obtain surface thermal imaging information obtained by scanning the preset damage area using an infrared thermal imaging device. 
根據申請專利範圍第9項所述的裝置,其中,該第一獲取模組,具體用於: 獲取針對目標車輛上預設損傷區域的車損圖像集合,其中,該車損圖像集合包括:利用攝像裝置在不同拍攝條件下拍攝得到的多張車損圖像; 對應的,該第二獲取模組,具體用於: 獲取針對該預設損傷區域的物理屬性資訊集合,其中,該物理屬性資訊集合包括:利用物理探測方式在不同拍攝條件下掃描得到的多個物理屬性資訊。The device according to item 9 of the patent application scope, wherein the first acquisition module is specifically used for: Acquire a set of car damage images for a preset damage area on the target vehicle, where the set of car damage images includes: multiple car damage images captured by the camera device under different shooting conditions; Correspondingly, the second acquisition module is specifically used for: Obtain a physical attribute information set for the preset damage area, where the physical attribute information set includes: multiple physical attribute information scanned under different shooting conditions using a physical detection method. 根據申請專利範圍第9項所述的裝置,其中,該圖像標註模組,具體用於: 根據該物理屬性資訊,確定該車損圖像中各像素點的損傷情況; 將該各像素點的損傷情況和該車損圖像確定為用於訓練車損識別模型的車損樣本資料。The device according to item 9 of the patent application scope, wherein the image annotation module is specifically used for: According to the physical attribute information, determine the damage of each pixel in the car damage image; The damage situation of each pixel and the vehicle damage image are determined as vehicle damage sample data for training the vehicle damage recognition model. 根據申請專利範圍第12項所述的裝置,其中,該目標車輛上預設損傷區域的車損圖像包括:在不同拍攝條件下拍攝得到的多張車損圖像; 該圖像標註模組,進一步具體用於: 針對每張該車損圖像,根據在該車損圖像對應的拍攝條件下得到的該物理屬性資訊,確定該車損圖像中各像素點的損傷情況。The device according to item 12 of the patent application scope, wherein the car damage image of the preset damage area on the target vehicle includes: multiple car damage images captured under different shooting conditions; The image annotation module is further specifically used for: For each of the car damage images, the damage of each pixel in the car damage image is determined according to the physical attribute information obtained under the shooting conditions corresponding to the car damage image. 
根據申請專利範圍第9項所述的裝置,其中,該裝置還包括模型訓練模組,用於: 在產生用於訓練車損識別模型的車損樣本資料之後,將該車損樣本資料輸入至基於有監督學習模式的機器學習模型; 利用機器學習方法並基於該車損樣本資料對該機器學習模型進行訓練,得到車損識別模型。The device according to item 9 of the patent application scope, wherein the device further includes a model training module for: After generating the car damage sample data for training the car damage recognition model, input the car damage sample data to the machine learning model based on the supervised learning mode; Using the machine learning method and training the machine learning model based on the vehicle damage sample data, a vehicle damage recognition model is obtained. 根據申請專利範圍第11項所述的裝置,其中,該拍攝條件包括:該攝像裝置的拍攝方位、該攝像裝置與目標車輛的相對位置、拍攝環境的光照參數、以及其他影響損傷區域視覺特徵的現場環境因素中至少一種。The device according to item 11 of the patent application scope, wherein the shooting conditions include: the shooting orientation of the camera device, the relative position of the camera device and the target vehicle, the lighting parameters of the shooting environment, and other factors that affect the visual characteristics of the damaged area At least one of on-site environmental factors. 根據申請專利範圍第15項所述的裝置,其中,該攝像裝置與目標車輛的相對位置是基於具備釐米級精確定位能力的定位裝置對該目標車輛的移動進行控制得到的。The device according to item 15 of the patent application scope, wherein the relative position of the camera device and the target vehicle is obtained based on the positioning device with a centimeter-level precise positioning capability to control the movement of the target vehicle. 
一種圖像標註系統,其特徵在於,該系統包括:攝像裝置、物理探測裝置和如申請專利範圍第9至16項中任一項所述的圖像標註裝置; 其中,該攝像裝置和該物理探測裝置均與該圖像標註裝置相連接; 該攝像裝置,用於對目標車輛上預設損傷區域進行拍攝得到的車損圖像,並將該車損圖像傳輸至該圖像標註裝置; 該物理探測裝置,用於基於物理探測方式對該預設損傷區域進行掃描得到的物理屬性資訊,並將該物理屬性資訊傳輸至該圖像標註裝置; 該圖像標註裝置,用於接收該車損圖像和該圖像標註裝置,並根據該車損圖像和該圖像標註裝置產生用於訓練車損識別模型的車損樣本資料。An image annotation system, characterized in that the system includes: a camera device, a physical detection device, and an image annotation device as described in any one of items 9 to 16 of the patent application scope; Among them, the camera device and the physical detection device are connected to the image tagging device; The camera device is used for shooting a car damage image obtained from a preset damage area on a target vehicle, and transmitting the car damage image to the image annotation device; The physical detection device is configured to scan physical attribute information obtained by scanning the preset damage area based on a physical detection method, and transmit the physical attribute information to the image annotation device; The image tagging device is used to receive the car damage image and the image tagging device, and generate car damage sample data for training a car damage recognition model according to the car damage image and the image tagging device. 
根據申請專利範圍第17項所述的系統,其中,該物理探測裝置包括:雷射光雷達裝置、和/或紅外線熱成像裝置; 其中,該雷射光雷達裝置,用於利用雷射光光束對該預設損傷區域進行掃描得到三維深度資訊,並將該三維深度資訊傳輸至該圖像標註裝置; 該紅外線熱成像裝置,用於利用紅外線對該預設損傷區域進行掃描得到表面熱成像資訊,並將該表面熱成像資訊傳輸至該圖像標註裝置。The system according to item 17 of the patent application scope, wherein the physical detection device includes: a laser light radar device and/or an infrared thermal imaging device; Wherein, the laser light radar device is used to scan the preset damage area with laser light beam to obtain three-dimensional depth information, and transmit the three-dimensional depth information to the image annotation device; The infrared thermal imaging device is used for scanning the preset damaged area by infrared rays to obtain surface thermal imaging information, and transmitting the surface thermal imaging information to the image annotation device. 根據申請專利範圍第17項所述的系統,其中,該系統還包括:可調節雲台; 其中,該攝像裝置和該物理探測裝置均設置於該可調節雲台上,且該攝像裝置與該物理探測裝置的相對位置保持不變; 該可調節雲台,用於調整該攝像裝置和該物理探測裝置的拍攝條件; 該攝像裝置,用於在不同拍攝條件下對目標車輛上預設損傷區域進行拍攝得到多張車損圖像,並將該多張車損圖像傳輸至該圖像標註裝置; 該物理探測裝置,用於利用物理探測方式在不同拍攝條件下對該預設損傷區域的進行掃描得到多份物理屬性資訊,並將該多份物理屬性資訊傳輸至該圖像標註裝置。The system according to item 17 of the patent application scope, wherein the system further includes: an adjustable pan/tilt; Wherein, the camera device and the physical detection device are both provided on the adjustable gimbal, and the relative positions of the camera device and the physical detection device remain unchanged; The adjustable gimbal is used to adjust the shooting conditions of the camera device and the physical detection device; The camera device is used to shoot a plurality of vehicle damage images under different shooting conditions on a preset damage area on a target vehicle, and transmit the plurality of vehicle damage images to the image annotation device; The physical detection device is used for scanning the preset damaged area under different shooting conditions by physical detection to obtain multiple pieces of physical attribute information, and transmitting the multiple pieces of 
physical attribute information to the image annotation device. 根據申請專利範圍第19項所述的系統,其中,該系統還包括:光照調節裝置; 該光照調節裝置,用於調整該攝像裝置所在的拍攝環境的光照參數。The system according to item 19 of the patent application scope, wherein the system further includes: a light adjustment device; The lighting adjustment device is used to adjust the lighting parameters of the shooting environment where the camera device is located. 根據申請專利範圍第19項所述的系統,其中,該系統還包括:具備釐米級精確定位能力的定位裝置; 該定位裝置,用於對該攝像裝置與該目標車輛的相對位置進行定位。The system according to item 19 of the patent application scope, wherein the system further includes: a positioning device with centimeter-level precise positioning capability; The positioning device is used for positioning the relative position of the imaging device and the target vehicle. 一種圖像標註設備,其特徵在於,包括: 處理器;以及 被安排成儲存電腦可執行指令的記憶體,該可執行指令在被執行時使該處理器: 獲取利用攝像裝置拍攝得到的目標車輛上預設損傷區域的車損圖像;以及, 獲取基於物理探測方式針對該預設損傷區域進行掃描得到的物理屬性資訊; 根據該物理屬性資訊對該車損圖像進行損傷標註,產生用於訓練車損識別模型的車損樣本資料。An image annotation device is characterized by comprising: Processor; and A memory arranged to store computer executable instructions that when executed causes the processor to: Obtaining a car damage image of the preset damage area on the target vehicle captured by the camera device; and, Obtain physical property information obtained by scanning the preset damage area based on physical detection methods; According to the physical attribute information, the damage image of the vehicle damage image is labeled, and the vehicle damage sample data for training the vehicle damage recognition model is generated. 
A storage medium storing computer-executable instructions that, when executed, implement the following process: acquiring a vehicle damage image, captured by a camera device, of a preset damage area on a target vehicle; acquiring physical attribute information obtained by scanning the preset damage area by physical detection; and annotating damage in the vehicle damage image according to the physical attribute information to generate vehicle damage sample data for training a vehicle damage recognition model.
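The storage-medium claim lays out a three-step flow: acquire the damage image, acquire the scanned physical attributes, and combine them into an annotated training sample. As a hypothetical illustration only (the record schema and every field name below are assumptions, not defined by the patent), assembling one such sample could look like:

```python
import json

def make_training_sample(image_path, physical_info, damage_label):
    """Hypothetical sketch of the claimed three-step flow: pair a
    captured damage image with its scanned physical attributes and
    the damage annotation derived from them, producing one sample
    for training a damage recognition model."""
    sample = {
        "image": image_path,                   # step 1: captured damage image
        "physical_attributes": physical_info,  # step 2: lidar / thermal scan data
        "annotation": damage_label,            # step 3: derived damage label
    }
    return json.dumps(sample, sort_keys=True)

record = make_training_sample(
    "car_0001.jpg",
    {"max_depth_deviation_mm": 3.0, "thermal_anomaly": False},
    {"type": "dent", "bbox": [4, 4, 6, 5]},
)
print(record)
```

A collection of such records would constitute the "vehicle damage sample data" the claim feeds into model training.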
TW108110062A 2018-10-31 2019-03-22 Image labeling method, device, and system TW202018664A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811282580.2 2018-10-31
CN201811282580.2A CN109615649A (en) 2018-10-31 2018-10-31 Image labeling method, apparatus and system

Publications (1)

Publication Number Publication Date
TW202018664A true TW202018664A (en) 2020-05-16

Family

ID=66002877

Family Applications (1)

Application Number Title Priority Date Filing Date
TW108110062A TW202018664A (en) 2018-10-31 2019-03-22 Image labeling method, device, and system

Country Status (3)

Country Link
CN (1) CN109615649A (en)
TW (1) TW202018664A (en)
WO (1) WO2020088076A1 (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109615649A (en) * 2018-10-31 2019-04-12 Alibaba Group Holding Limited Image labeling method, apparatus and system
CN110148013B (en) * 2019-04-22 2023-01-24 创新先进技术有限公司 User label distribution prediction method, device and system
CN110263190B (en) * 2019-05-06 2023-10-20 菜鸟智能物流控股有限公司 Data processing method, device, equipment and machine-readable medium
US10783643B1 (en) 2019-05-27 2020-09-22 Alibaba Group Holding Limited Segmentation-based damage detection
CN110264444B (en) * 2019-05-27 2020-07-17 阿里巴巴集团控股有限公司 Damage detection method and device based on weak segmentation
CN110490960B (en) * 2019-07-11 2023-04-07 创新先进技术有限公司 Synthetic image generation method and device
CN112307236A (en) * 2019-07-24 2021-02-02 阿里巴巴集团控股有限公司 Data labeling method and device
CN112434548B (en) * 2019-08-26 2024-06-04 杭州海康威视数字技术股份有限公司 Video labeling method and device
CN110503705B (en) * 2019-08-29 2023-10-17 上海鹰瞳医疗科技有限公司 Image labeling method and device
CN112528710B (en) * 2019-09-19 2024-04-09 上海海拉电子有限公司 Road surface detection method and device, electronic equipment and storage medium
CN111523615B (en) * 2020-05-08 2024-03-26 北京深智恒际科技有限公司 Assembly line closed-loop flow method for realizing vehicle appearance professional damage labeling
CN111724371B (en) * 2020-06-19 2023-05-23 联想(北京)有限公司 Data processing method and device and electronic equipment
CN111767862A (en) * 2020-06-30 2020-10-13 广州文远知行科技有限公司 Vehicle labeling method and device, computer equipment and readable storage medium
CN112712121B (en) * 2020-12-30 2023-12-05 浙江智慧视频安防创新中心有限公司 Image recognition model training method, device and storage medium
CN113706552A (en) * 2021-07-27 2021-11-26 北京三快在线科技有限公司 Method and device for generating semantic segmentation marking data of laser reflectivity base map
CN113658345A (en) * 2021-08-18 2021-11-16 杭州海康威视数字技术股份有限公司 Sample labeling method and device
CN114140430A (en) * 2021-11-30 2022-03-04 北京比特易湃信息技术有限公司 Vehicle damage reporting method based on deep learning
CN114972810B (en) * 2022-03-28 2023-11-28 慧之安信息技术股份有限公司 Image acquisition labeling method based on deep learning
CN114965487B (en) * 2022-06-10 2024-06-14 招商局重庆交通科研设计院有限公司 Calibration method and device of automatic monitoring equipment for tunnel typical damage

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR3038094B1 (en) * 2015-06-24 2018-08-31 Sidexa DOCUMENTARY MANAGEMENT FOR AUTOMOBILE REPAIR
CN106780048A (en) * 2016-11-28 2017-05-31 中国平安财产保险股份有限公司 A kind of self-service Claims Resolution method of intelligent vehicle insurance, self-service Claims Resolution apparatus and system
CN106874840B (en) * 2016-12-30 2019-10-22 东软集团股份有限公司 Vehicle information recognition method and device
CN111914692B (en) * 2017-04-28 2023-07-14 创新先进技术有限公司 Method and device for acquiring damage assessment image of vehicle
CN107194398B (en) * 2017-05-10 2018-09-25 平安科技(深圳)有限公司 Vehicle damages recognition methods and the system at position
CN108171708B (en) * 2018-01-24 2021-04-30 北京威远图易数字科技有限公司 Vehicle damage assessment method and system
CN108550080A (en) * 2018-03-16 2018-09-18 阿里巴巴集团控股有限公司 Article damage identification method and device
CN108710881B (en) * 2018-05-23 2020-12-29 中国民用航空总局第二研究所 Neural network model, candidate target area generation method and model training method
CN109615649A (en) * 2018-10-31 2019-04-12 Alibaba Group Holding Limited Image labeling method, apparatus and system

Also Published As

Publication number Publication date
WO2020088076A1 (en) 2020-05-07
CN109615649A (en) 2019-04-12

Similar Documents

Publication Publication Date Title
TW202018664A (en) Image labeling method, device, and system
US11754392B2 (en) Distance determination of a sample plane in a microscope system
US10558029B2 (en) System for image reconstruction using a known pattern
CN100487724C (en) Quick target identification and positioning system and method
US20200089954A1 (en) Generating synthetic digital assets for a virtual scene including a model of a real-world object
JP2019513274A (en) System and method for installation, identification and counting of goods
US20110268322A1 (en) Establishing coordinate systems for measurement
CN105266845A (en) Apparatus and method for supporting computer aided diagnosis (cad) based on probe speed
CN113116377B (en) Ultrasonic imaging navigation method, ultrasonic equipment and storage medium
KR20200103837A (en) An apparatus and method for passive scanning of an object or scene (AN APPARATUS AND A METHOD FOR PASSIVE SCANNING OF AN OBJECT OR A SCENE)
Liu et al. Efficient optical measurement of welding studs with normal maps and convolutional neural network
CN101907490A (en) Method for measuring small facula intensity distribution based on two-dimension subdivision method
CN107764204A (en) Based on the microscopical three-dimensional surface topography instrument of mating plate and 3-D view joining method
Yu et al. Teat detection of dairy cows based on deep learning neural network FS-YOLOv4 model
US8908084B2 (en) Electronic device and method for focusing and measuring points of objects
Li et al. Deep learning-based interference fringes detection using convolutional neural network
CN105025219A (en) Image acquisition method
US9652081B2 (en) Optical touch system, method of touch detection, and computer program product
CN115205806A (en) Method and device for generating target detection model and automatic driving vehicle
CN103024259A (en) Imaging apparatus and control method of imaging apparatus
Ogun et al. An active three-dimensional vision system for automated detection and measurement of surface defects
KR102543172B1 (en) Method and system for collecting data for skin diagnosis based on artificail intellience through user terminal
Bi et al. Camera calibration method in specific bands for the near-infrared dynamic navigator
TW201533513A (en) Spherical lighting device with backlighting coronal ring
CN207622767U (en) Object positioning system