WO2019214319A1 - Data processing method, apparatus, processing device and client for vehicle damage assessment - Google Patents

Data processing method, apparatus, processing device and client for vehicle damage assessment

Info

Publication number
WO2019214319A1
Authority
WO
WIPO (PCT)
Prior art keywords
damage
shooting
area
vehicle
photographing
Prior art date
Application number
PCT/CN2019/076028
Other languages
English (en)
French (fr)
Inventor
周凡
Original Assignee
阿里巴巴集团控股有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 阿里巴巴集团控股有限公司 filed Critical 阿里巴巴集团控股有限公司
Publication of WO2019214319A1 publication Critical patent/WO2019214319A1/zh

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/617: Upgrading or updating of programs or applications for camera control
    • H04N 23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/633: Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H04N 23/635: Region indicators; Field of view indicators
    • H04N 23/64: Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/584: Recognition of vehicle lights or traffic lights

Definitions

  • the embodiment of the present specification belongs to the technical field of computer terminal insurance service data processing, and in particular, to a data processing method, device, processing device and client for vehicle damage.
  • Motor vehicle insurance, that is, automobile insurance (or car insurance), refers to a type of commercial insurance that covers liability for personal injury or property loss caused to motor vehicles by natural disasters or accidents. With the development of the economy, the number of motor vehicles keeps increasing; at present, car insurance has become one of the largest lines in China's property insurance business.
  • The current assessment methods mainly include: an on-site assessment of the accident vehicle by a surveyor from the insurance company or a third-party public appraisal agency, or photographs of the accident vehicle taken by the user under the guidance of insurance company personnel and transmitted to the insurance company over the network, after which a loss assessor determines the damage remotely from the photos.
  • In these current ways of obtaining loss assessment images, the insurance company has to dispatch vehicles and personnel to the accident scene to conduct the survey, which is relatively costly, and the owner has to spend considerable time waiting for the surveyor to arrive at the scene, which makes for a poor experience.
  • When the owner takes photos on his or her own, the lack of experience often means that the surveyor must provide guidance by remote telephone or video call, which is time-consuming and laborious.
  • Even with such remote guidance, in some cases a large number of the photos taken this way are invalid.
  • When invalid loss assessment images are collected, the owner has to reshoot, and may even have lost the opportunity to shoot at all, which seriously affects loss assessment efficiency and the user's loss assessment service experience.
  • The embodiments of the present specification aim to provide a data processing method, apparatus, processing device and client for vehicle damage assessment, which can automatically identify the damaged parts of a vehicle on a mobile device, mark the areas that need to be photographed in an easily recognizable way in the shooting screen, and continuously guide the user to take photos or videos of those areas, so that the user can complete the shooting required for loss assessment without professional knowledge, improving the processing efficiency of vehicle loss assessment and the user's interactive loss assessment experience.
  • The data processing method, apparatus, processing device and client for vehicle damage assessment provided by the embodiments of the present specification are implemented in the following ways:
  • a data processing method for vehicle damage assessment, comprising: displaying shooting guidance information for photographing a first damaged area of a vehicle; if a first damage is identified in the current shooting window, determining a first damage area of the first damage; after rendering the first damage area in a salient manner, superimposing and displaying the rendered first damage area in the current shooting window using virtual reality; and displaying shooting guidance information for the first damage area.
  • a data processing apparatus for vehicle damage assessment, comprising:
  • a first prompting module configured to display shooting guidance information for photographing a first damaged area of a vehicle;
  • a damage identification result module configured to determine a first damage area of a first damage if the first damage is identified in the current shooting window;
  • a salient display module configured to, after rendering the first damage area in a salient manner, superimpose and display the rendered first damage area in the current shooting window using virtual reality; and
  • a second prompting module configured to display shooting guidance information for the first damage area.
  • a data processing device for vehicle damage assessment, comprising a processor and a memory for storing processor-executable instructions, the processor implementing, when executing the instructions: displaying shooting guidance information for photographing a first damaged area of a vehicle; if a first damage is identified in the current shooting window, determining a first damage area of the first damage; after rendering the first damage area in a salient manner, superimposing and displaying the rendered first damage area in the current shooting window using virtual reality; and displaying shooting guidance information for the first damage area.
  • a client, comprising a processor and a memory for storing processor-executable instructions, the processor implementing, when executing the instructions: displaying shooting guidance information for photographing a first damaged area of a vehicle; if a first damage is identified in the current shooting window, determining a first damage area of the first damage; after rendering the first damage area in a salient manner, superimposing and displaying the rendered first damage area in the current shooting window using virtual reality; and displaying shooting guidance information for the first damage area.
  • An electronic device includes a display screen, a processor, and a memory storing processor-executable instructions that, when executed by the processor, implement the method steps of any one of the embodiments.
  • With the data processing method, apparatus, processing device and client for vehicle damage assessment provided by the embodiments of the present specification, the damaged parts of a vehicle can be automatically identified on a mobile device, the areas to be photographed are marked in an easily recognizable way in the shooting screen, and the user is continuously guided to take photos or videos of those areas, so that the user can complete the shooting required for loss assessment without professional knowledge, improving the processing efficiency of vehicle loss assessment and the user's interactive loss assessment experience.
  • FIG. 1 is a schematic flow chart of an embodiment of a data processing method for a vehicle loss according to the present specification
  • FIG. 2 is a schematic diagram of a deep neural network model used in an embodiment of the method described in the present specification
  • FIG. 3 is a schematic diagram of the present specification for identifying a damaged area using small dot symbol rendering
  • FIG. 4 is a schematic diagram of an implementation scenario of a shooting guidance embodiment in the method provided by the present specification
  • FIG. 5 is a schematic diagram of an implementation scenario of another embodiment of the method provided by the present specification.
  • FIG. 6 is a block diagram showing the hardware structure of a client for interactive processing of vehicle damage using the method or apparatus embodiment of the present invention
  • FIG. 7 is a block diagram showing a module structure of an embodiment of a data processing apparatus for vehicle damage provided by the present specification
  • FIG. 8 is a schematic structural diagram of an embodiment of an electronic device provided by the present specification.
  • the client may include a terminal device with a shooting function, such as a smart phone or a tablet computer, used by a vehicle loss site personnel (which may be an accident vehicle owner user or an insurance company personnel or other personnel performing a loss processing process). Smart wearable devices, dedicated loss terminals, etc.
  • the client may have a communication module, and may communicate with a remote server to implement data transmission with the server.
  • The server may include a server on the insurance company side or a server on the loss assessment service provider side.
  • Other implementation scenarios may also include servers of other service parties, such as the terminal of a parts supplier or of a vehicle repair shop that has a communication link with the server of the loss assessment service provider.
  • the server may include a single computer device, or may include a server cluster composed of a plurality of servers, or a server of a distributed system.
  • the client side can send the image data collected by the live shooting to the server in real time, and the server side performs the damage identification, and the recognition result can be fed back to the client.
  • When damage identification and similar processing are performed on the server side, the processing speed is usually higher than on the client side, which can reduce the processing load on the client and improve the speed of damage identification.
  • this specification does not exclude that all or part of the above processing in other embodiments is implemented by the client side, such as real-time detection and identification of damage on the client side.
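  • For illustration, the following is a minimal Python sketch of the client-to-server round trip described above. The endpoint URL, the multipart field name and the response schema are assumptions made for the example; the specification itself does not define a concrete API.

```python
import time

import cv2        # OpenCV: camera capture and JPEG encoding
import requests   # simple HTTP client for sending frames to the server

# Hypothetical damage-identification endpoint; the specification defines no concrete API.
SERVER_URL = "https://loss-server.example.com/api/damage/identify"

def stream_frames_for_identification(camera_index=0, frame_rate=15.0):
    """Capture frames at a fixed rate, send each to the server, and yield (frame, damages)."""
    cap = cv2.VideoCapture(camera_index)
    interval = 1.0 / frame_rate
    try:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            ok, jpeg = cv2.imencode(".jpg", frame)   # encode the viewfinder frame before upload
            if not ok:
                continue
            resp = requests.post(
                SERVER_URL,
                files={"image": ("frame.jpg", jpeg.tobytes(), "image/jpeg")},
                timeout=5,
            )
            # Assumed response schema:
            # {"damages": [{"bbox": [x, y, w, h], "type": "scratch", "score": 0.92}, ...]}
            damages = resp.json().get("damages", [])
            yield frame, damages
            time.sleep(interval)
    finally:
        cap.release()
```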
  • To this end, the present invention provides a data processing method for vehicle damage assessment that can be applied on a mobile device, which can mark the area to be photographed in an easily recognizable way in the shooting screen and continuously guide the user to take photos or videos of that area, so that the user can complete the shooting required for loss assessment without professional knowledge.
  • FIG. 1 is a schematic flowchart diagram of an embodiment of a data processing method for a vehicle loss according to the present disclosure.
  • Although the present specification provides method operation steps or device structures as shown in the following embodiments or figures, the method or device may, based on conventional practice or without inventive effort, include more operation steps or module units, or fewer after partial merging.
  • For steps or structures with no logically necessary causal relationship, the execution order of these steps or the module structure of the device is not limited to the execution order or module structure shown in the embodiments or the drawings.
  • As shown in FIG. 1, the method may include: S0: display shooting guidance information for photographing a first damaged area of the vehicle; S2: if a first damage is identified in the current shooting window, determine a first damage area of the first damage; S4: after rendering the first damage area in a salient manner, superimpose and display the rendered first damage area in the current shooting window using virtual reality; S6: display shooting guidance information for the first damage area.
  • the client on the user side may be a smart phone, and the smart phone may have a shooting function.
  • the user can open the mobile phone application that implements the implementation of the present specification at the scene of the vehicle accident to take a framing shot of the vehicle accident scene.
  • the shooting window can be displayed on the client display, and the vehicle can be photographed through the shooting window.
  • the shooting window may be a video shooting window, which may be used for framing (image capturing) of the vehicle damage scene by the terminal, and image information acquired by the client-integrated camera device may be displayed in the shooting window.
  • the specific interface structure of the shooting window and the related information displayed can be customized.
  • the vehicle's feature data can be acquired during vehicle shooting.
  • the feature data can be specifically set according to data processing requirements such as vehicle identification, environment recognition, and image recognition.
  • In general, the feature data may include data information of the identified components of the vehicle, which can be used to construct 3D coordinate information and establish an augmented reality space model of the vehicle (an AR space model, a form of data representation such as the contour figure of the vehicle body).
  • the feature data may also include other data information such as the brand, model, color, outline, unique identification code of the vehicle.
  • When the client enables the loss assessment service, it can display guidance information for shooting the damaged area.
  • For ease of description, the damaged area currently or initially to be photographed is referred to as the first damaged area.
  • For example, in one application instance, when the user starts the loss assessment service, the application can prompt the user to aim at the possibly damaged parts of the vehicle from a distance at which the whole vehicle can be seen. If necessary, the user may be prompted to move around the vehicle body; if no damage is found during the initial shooting, the user is prompted to photograph the whole vehicle counterclockwise.
  • the damage area corresponding to the damage may be further calculated.
  • the process of damage identification may be performed by the client side or by the server side, and the server at this time may be referred to as a damage identification server.
  • the images collected by the client can be identified for damage directly on the client, along with other loss assessment data processing, which can reduce network transmission overhead.
  • the process of damage identification can be processed by the server side.
  • Specifically, identifying that the first damage exists in the current shooting window may include: S20: sending the captured image obtained by shooting to a damage identification server; S22: receiving a damage identification result returned by the server, where the damage identification result includes a processing result obtained by the damage identification server performing damage identification on the captured image using a pre-trained deep neural network.
  • It should be noted that the "first" damage in this embodiment refers to the current damage identification process; it does not limit damage identification processing of images captured for other damages.
  • the client or server side may use a deep neural network constructed in advance or in real time to identify damage in the image, such as damage location, damaged component, damage type, and the like.
  • Deep neural networks can be used for object detection and semantic segmentation, finding the location of a target in an input image.
  • FIG. 2 is a schematic diagram of a deep neural network model used in an embodiment of the method described in this specification; it depicts Faster R-CNN, a fairly typical deep neural network.
  • By annotating the damaged areas in a large number of pictures in advance, a deep neural network can be trained that gives the extent of the damaged area for pictures of the vehicle taken from various orientations and under various lighting conditions.
  • In addition, in some embodiments of this specification, a network structure customized for mobile devices may be used, for example one based on the typical MobileNet or SqueezeNet architectures or improvements thereof, so that the model can run in an environment with lower power consumption, less memory and a slower processor, such as the client's mobile terminal runtime environment.
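  • As a hedged illustration of the kind of detector described above, the following sketch builds a Faster R-CNN with a MobileNetV3-FPN backbone using torchvision and runs it on one image. The damage categories and score threshold are illustrative assumptions; the specification only states that a network is trained on images with annotated damage regions.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 1 + 3  # background + e.g. scratch / dent / crack (illustrative categories)

def build_damage_detector(num_classes=NUM_CLASSES):
    """Faster R-CNN with a MobileNetV3-FPN backbone, suited to lower-power inference."""
    model = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    # Replace the classification head so it predicts damage categories instead of COCO classes;
    # the new head would then be fine-tuned on images with annotated damage regions.
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

@torch.no_grad()
def detect_damage(model, image_tensor, score_threshold=0.5):
    """Run one CHW float image tensor (values in [0, 1]) through the detector."""
    model.eval()
    output = model([image_tensor])[0]
    keep = output["scores"] >= score_threshold
    return {
        "boxes": output["boxes"][keep],    # damage regions as [x1, y1, x2, y2]
        "labels": output["labels"][keep],  # damage category indices
        "scores": output["scores"][keep],
    }
```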
  • After the first damage area is determined, the area can be rendered in a salient manner, and the area covered by the rendered damage is superimposed on the captured image using AR techniques.
  • Salient rendering mainly means marking the damage area with a distinctive rendering style so that the damage area is easy to recognize or stands out.
  • The specific rendering manner is not limited here; specific constraints or conditions for achieving salient rendering may be set.
  • In another embodiment of the method provided by the present specification, the salient rendering may include: S40: marking the first damage area with a preset characterization symbol, where the preset characterization symbol includes one of the following: a dot, a guide line, a regular graphic frame, an irregular graphic frame, or a custom graphic.
  • FIG. 3 is a schematic diagram of the present specification for identifying a damaged area using small dot symbol rendering.
  • the preset characterization symbols may also include other forms, such as a guide line, a rule graphic frame, an irregular graphic frame, a customized graphic, etc., and other embodiments may also use text, Characters, data, etc. identify the damaged area and direct the user to take pictures of the damaged area.
  • One or more preset characterization symbols can be used for rendering.
  • the preset characterization symbol is used to identify the damaged area, and the location area where the damage is located can be more clearly displayed in the shooting window, thereby assisting the user in quickly positioning and guiding shooting.
  • a dynamic rendering effect may also be employed to identify the damaged area, and the user is directed to photograph the damaged area in a more obvious manner.
  • the salient mode rendering includes:
  • S400 Perform at least one animation display of color conversion, size conversion, rotation, and jitter on the preset characterization symbol.
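  • A minimal sketch of one possible salient rendering is shown below: a grid of small dots is drawn over the damage bounding box with OpenCV and pulsed in size and colour over time. The dot spacing, colours and blend weights are assumptions of this example; the specification leaves the concrete symbol and animation open.

```python
import numpy as np
import cv2

def render_damage_dots(frame, bbox, t, spacing=12):
    """Overlay a pulsing grid of small dots over the damage bounding box.

    frame: BGR image; bbox: (x, y, w, h) in pixels; t: time in seconds used to animate
    the dot size and colour so the marked region visibly "breathes".
    """
    x, y, w, h = bbox
    radius = 2 + int(2 * (1 + np.sin(2 * np.pi * t)))          # radius oscillates between 2 and 6 px
    colour = (0, int(128 + 127 * np.sin(2 * np.pi * t)), 255)  # BGR colour with a shifting green channel
    overlay = frame.copy()
    for dy in range(0, h, spacing):
        for dx in range(0, w, spacing):
            cv2.circle(overlay, (x + dx, y + dy), radius, colour, thickness=-1)
    # Blend so the real camera image stays visible under the AR-style dot layer.
    return cv2.addWeighted(overlay, 0.6, frame, 0.4, 0)
```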
  • In some embodiments of the present specification, the boundary of the actual damage may be superimposed with AR, prompting the user to aim the framing frame at that part of the damage for shooting.
  • the augmented reality AR generally refers to a technical implementation scheme for calculating the position and angle of the camera image in real time and adding corresponding images, videos, and 3D models, which can put the virtual world on the screen in the real world and Engage.
  • The augmented reality space model constructed from the feature data in the embodiments of the present specification may be the contour information of the vehicle; specifically, the contour of the vehicle may be constructed from multiple items of feature data such as the acquired vehicle model and shooting angle and the positions of the tires, roof, front face, headlights, taillights and front and rear windows.
  • the contour may include a data model established based on 3D coordinates with corresponding 3D coordinate information.
  • the contour of the build can then be displayed in the capture window.
  • the present specification does not exclude that the augmented reality space model described in other embodiments may also include other model forms or other model information added above the contours.
  • the AR model can be matched with the real vehicle position during the shooting duration, such as superimposing the constructed 3D contour to the contour position of the real vehicle, and the matching can be considered when the two match or the matching degree reaches the threshold.
  • In the specific matching process, the framing direction can be guided: by guiding the user to move the shooting direction or angle, the constructed contour is aligned with the contour of the real vehicle being photographed.
  • By combining augmented reality technology, the embodiments of the present specification display not only the real information of the vehicle actually photographed by the user's client but also the constructed augmented reality space model of the vehicle; the two kinds of information complement and overlay each other and can provide a better loss assessment service experience.
  • the shooting window combined with the AR space model can display the scene of the vehicle more intuitively, and can effectively perform the damage and shooting guidance of the vehicle damage position.
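  • One simple way to decide that the constructed contour matches the real vehicle, as described above, is to compare the projected contour mask with the detected vehicle silhouette; the sketch below scores the overlap with intersection-over-union and applies a threshold. How the contour is projected, how the silhouette is segmented and the 0.8 threshold are assumptions of this example.

```python
import numpy as np

def contour_match_score(model_mask, vehicle_mask):
    """Intersection-over-union between the projected contour mask and the vehicle silhouette.

    Both arguments are boolean HxW arrays in screen space; how they are obtained
    (contour projection, vehicle segmentation) is outside this sketch.
    """
    inter = np.logical_and(model_mask, vehicle_mask).sum()
    union = np.logical_or(model_mask, vehicle_mask).sum()
    return float(inter) / float(union) if union else 0.0

def is_aligned(model_mask, vehicle_mask, threshold=0.8):
    """Consider the AR contour matched to the real vehicle once the overlap reaches a threshold."""
    return contour_match_score(model_mask, vehicle_mask) >= threshold
```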
  • the client may perform damage recognition guidance in the AR scenario, and the damage recognition guidance may specifically include the presentation guidance information determined based on the image information acquired from the shooting window.
  • the client can obtain image information in the AR scene in the shooting window, analyze and calculate the acquired image information, and determine what shooting guidance information needs to be displayed in the shooting window according to the analysis result. For example, the position of the vehicle in the current shooting window is far away, and the user can be prompted to approach the shooting in the shooting window. If the shooting position is to the left and the tail of the vehicle cannot be captured, the shooting guidance information can be displayed to prompt the user to pan the shooting angle to the right.
  • Which data the damage identification guidance processes, and under what conditions which shooting guidance information is displayed, can be governed by preset policies or rules, which are not described one by one in this embodiment.
  • photographing guidance information for the first lesion area may be displayed.
  • The shooting guidance information to display may be determined according to the current shooting information and the position information of the first damage area. For example, if there is a scratch on the rear fender of the vehicle and the scratch needs to be photographed head-on and along its direction, but the current shooting position and angle indicate that the user is shooting at a 45-degree angle and is relatively far from the scratch, the user can be prompted to move closer to the scratch and to shoot it head-on and along its direction.
  • the shooting guide information can be adjusted in real time according to the current view. For example, if the user has already approached the scratch position and meets the shooting requirements, the shooting guide information prompting the user to approach the scratch position may not be displayed.
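  • The guidance rules described above can be approximated with simple geometric checks on where and how large the damage area appears in the frame, as in the following sketch; the thresholds and prompt wording are illustrative assumptions, not values taken from the specification.

```python
def shooting_guidance(damage_bbox, frame_size, min_area_ratio=0.05, margin_ratio=0.1):
    """Derive guidance prompts from where and how large the damage appears in the frame.

    damage_bbox: (x, y, w, h) in pixels; frame_size: (width, height).
    """
    x, y, w, h = damage_bbox
    fw, fh = frame_size
    prompts = []

    # Damage occupies too little of the frame -> the user is probably too far away.
    if (w * h) / float(fw * fh) < min_area_ratio:
        prompts.append("Move closer to the damaged area")

    # Damage touching a frame edge -> pan or tilt so the whole region is captured.
    if x < margin_ratio * fw:
        prompts.append("Pan the camera to the left")
    if x + w > (1 - margin_ratio) * fw:
        prompts.append("Pan the camera to the right")
    if y < margin_ratio * fh:
        prompts.append("Tilt the camera up")
    if y + h > (1 - margin_ratio) * fh:
        prompts.append("Tilt the camera down")

    return prompts or ["Hold steady and keep the damage centred"]
```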
  • the suspected damage can be identified by the client or server side.
  • The shooting guidance information to display during shooting and the corresponding shooting conditions can be set according to the loss assessment interaction design or the damage processing requirements.
  • In one embodiment provided by the present specification, the shooting guidance information may include at least one of the following: adjust the shooting direction; adjust the shooting angle; adjust the shooting distance; adjust the shooting lighting.
  • An example of shooting guidance is shown in FIG. 4.
  • the user can perform the loss processing more conveniently and efficiently through the real-time shooting guidance information.
  • the user can shoot according to the shooting guidance information, and the user experience can be better without professional shooting skills or cumbersome shooting operations.
  • the above embodiment describes the shooting guidance information displayed by the text.
  • the shooting guidance information may further include an image, a voice, an animation, a vibration, and the like, and the current shooting image is aligned by an arrow or a voice prompt.
  • the form of the shooting guidance information displayed in the current shooting window includes at least one of a symbol, a text, a voice, an animation, a video, and a vibration.
  • In another embodiment scenario of the method, when the user aims the camera of the mobile device at the vehicle, shooting may proceed at a certain frame rate (e.g., 15 frames/s), and the deep neural network trained as described above can then be used to identify the images. Once damage is detected, a new shooting strategy can be started for the damaged area, such as increasing the frame rate (e.g., to 30 frames/s) and adjusting other parameters, so as to keep tracking the position of that area in the current shooting window at a higher speed and lower power consumption. In this way, shooting parameters can be adjusted for different shooting areas and different shooting strategies can be used, flexibly adapting to different shooting scenes, reinforcing the shooting of key areas while reducing power consumption by lowering the frame rate for non-key areas. Therefore, in another embodiment of the method provided by the present specification, when damage is identified in the current shooting window, a shooting strategy that adjusts at least a parameter including the shooting frame rate is used to photograph the damaged area.
  • Of course, other parameters such as exposure and brightness may also be adjusted; the specific shooting strategy can be customized for the shooting scene.
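  • The adaptive shooting strategy above can be sketched as a small controller that switches between a survey frame rate and a faster tracking frame rate once damage is reported; the 15/30 frames-per-second values follow the example in the text, while the class shape and its usage are assumptions of this sketch.

```python
class AdaptiveCaptureController:
    """Switch between a low 'survey' frame rate and a higher 'tracking' frame rate."""

    def __init__(self, survey_fps=15.0, tracking_fps=30.0):
        self.survey_fps = survey_fps
        self.tracking_fps = tracking_fps
        self.tracking = False

    def update(self, damages):
        """Call once per identification result; returns the delay before grabbing the next frame."""
        self.tracking = bool(damages)          # speed up as soon as any damage is reported
        fps = self.tracking_fps if self.tracking else self.survey_fps
        return 1.0 / fps

# Possible usage inside a capture loop (building on the streaming sketch shown earlier):
#   controller = AdaptiveCaptureController()
#   for frame, damages in stream_frames_for_identification():
#       delay = controller.update(damages)
#       time.sleep(delay)
```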
  • Further, after enough photos or videos of the first damage area have been collected (meeting the loss assessment image collection requirements), the user can be prompted and guided to shoot the next damage, until all damage has been photographed; in this way, after the user shoots one damage, guidance continues to the next one, reducing missed damage, lowering the user's involvement in damage identification and improving the user experience. Therefore, in another embodiment of the method, as shown in FIG. 5, the method may further include: S8: if it is determined that shooting of the first damage area is complete, displaying shooting guidance information for photographing a second damaged area of the vehicle, until shooting of all identified damage is complete.
  • The client application can send the captured damage images back to the insurance company for subsequent manual or automatic loss assessment processing, which also avoids or reduces the risk of users forging loss assessment images for insurance fraud. Therefore, in another embodiment of the method provided by the present specification, the method further includes: S10: transmitting captured images that meet the loss assessment image collection requirements to a loss assessment server.
  • The loss assessment server may include a server on the insurance company side, and may also include a server of the loss assessment service provider.
  • Transmission to the loss assessment server may be direct from the client, or indirect.
  • Of course, the qualified loss assessment images may also be sent to both the insurance company's server and the loss assessment service provider's server, such as the server side of a loss assessment service provided by a payment application.
  • The term "real time" described in the foregoing embodiments may include sending, receiving or displaying immediately after certain data information is acquired or determined; those skilled in the art will understand that sending, receiving or displaying after buffering or after expected computation and waiting time can still fall within the defined scope of "real time".
  • The images described in the embodiments of this specification may include video, which can be regarded as a continuous set of images.
  • In addition, the captured images or the qualified loss assessment images obtained in the solutions of the embodiments may be stored on the local client or uploaded to a remote server in real time.
  • After some tamper-proofing is applied to the data stored on the local client, or after the data is uploaded to server storage, the loss assessment data can be effectively protected against tampering, and against insurance fraud that misappropriates image data not from this accident. Therefore, the embodiments of this specification can also improve the data security of loss assessment processing and the reliability of loss assessment results.
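  • As one hedged illustration of tamper-proofing before upload, the sketch below hashes each qualified image on the client and sends the digest along with the image so the server can later verify integrity; the endpoint and metadata fields are assumptions, and the specification does not prescribe a particular mechanism.

```python
import hashlib
import json
import time

import requests

# Hypothetical upload endpoint of the loss assessment server; not specified in the source.
UPLOAD_URL = "https://loss-server.example.com/api/loss-images"

def upload_qualified_image(jpeg_bytes, claim_id):
    """Upload a qualified loss assessment image together with a digest for tamper checking.

    The SHA-256 digest computed on the client lets the server detect later modification of
    the stored image; claim_id and the metadata fields are illustrative.
    """
    digest = hashlib.sha256(jpeg_bytes).hexdigest()
    metadata = {"claim_id": claim_id, "captured_at": int(time.time()), "sha256": digest}
    resp = requests.post(
        UPLOAD_URL,
        files={"image": ("loss.jpg", jpeg_bytes, "image/jpeg")},
        data={"metadata": json.dumps(metadata)},
        timeout=10,
    )
    resp.raise_for_status()
    return digest
```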
  • the above embodiment describes an embodiment of a data processing method in which a user performs a vehicle loss on a mobile phone client. It should be noted that the foregoing methods in the embodiments of the present specification may be implemented in various processing devices, such as dedicated loss-making terminals, and implementation scenarios including client and server architectures.
  • FIG. 6 is a hardware structural block diagram of a client that applies the interactive processing of the vehicle loss in the embodiment of the method or apparatus of the present invention.
  • As shown in FIG. 6, the client 10 may include one or more processors 102 (only one is shown; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 104 for storing data, and a transmission module 106 for communication functions.
  • It will be understood by those skilled in the art that the structure shown in FIG. 6 is merely illustrative and does not limit the structure of the above electronic device; for example, the client 10 may include more or fewer components than those shown in FIG. 6, may also include other processing hardware such as a GPU (Graphics Processing Unit), or may have a different configuration from that shown.
  • the memory 104 can be used to store software programs and modules of application software, such as program instructions/modules corresponding to the search method in the embodiment of the present specification, and the processor 102 executes various functions by running software programs and modules stored in the memory 104.
  • Application and data processing that is, a processing method for realizing the content display of the above navigation interaction interface.
  • Memory 104 may include high speed random access memory, and may also include non-volatile memory such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory.
  • memory 104 may further include memory remotely located relative to processor 102, which may be connected to client 10 over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the transmission module 106 is configured to receive or transmit data via a network.
  • the network specific examples described above may include a wireless network provided by a communication provider of the computer terminal 10.
  • In one example, the transmission module 106 includes a Network Interface Controller (NIC) that can be connected to other network devices through a base station so as to communicate with the Internet.
  • the transmission module 106 can be a Radio Frequency (RF) module for communicating with the Internet wirelessly.
  • FIG. 7 is a schematic structural diagram of a module of a data processing apparatus for determining a vehicle loss according to the present disclosure.
  • the specific structure may include:
  • the first prompting module 201 can be used to display shooting guidance information of the first damaged area of the photographing vehicle;
  • the damage identification result module 202 may be configured to determine the first damage area of the first damage if it is recognized that the first damage exists in the current shooting window;
  • the display module 203 is configured to: after performing the rendering of the first damage area in a significant manner, using the virtual reality to superimpose and display the rendered first damage area in the current shooting window;
  • the second prompting module 204 can be configured to display shooting guide information for the first damaged area.
  • the foregoing apparatus may further include other implementation manners, such as a rendering processing module that performs rendering, an AR display module that performs AR processing, and the like, according to the description of the related method embodiments.
  • The device model identification method provided by the embodiments of this specification may be implemented by a processor executing corresponding program instructions in a computer, for example implemented on the PC/server side in C++/Java on a Windows/Linux operating system, implemented with the necessary hardware using the application design languages corresponding to other systems such as Android or iOS, or implemented with processing logic based on a quantum computer.
  • Specifically, the data processing device for vehicle damage assessment provided by the present specification may include a processor and a memory for storing processor-executable instructions, and the processor, when executing the instructions, implements:
  • displaying shooting guidance information for photographing a first damaged area of a vehicle; if a first damage is identified in the current shooting window, determining a first damage area of the first damage; after rendering the first damage area in a salient manner, superimposing and displaying the rendered first damage area in the current shooting window using virtual reality; and displaying shooting guidance information for the first damage area.
  • the processor further performs:
  • if it is determined that shooting of the first damage area is complete, displaying shooting guidance information for photographing a second damaged area of the vehicle, until shooting of all identified damage is complete.
  • In another embodiment of the processing device, the salient rendering includes: marking the first damage area with a preset characterization symbol, where the preset characterization symbol includes one of the following: a dot, a guide line, a regular graphic frame, an irregular graphic frame, or a custom graphic.
  • In another embodiment of the processing device, the salient rendering includes: animating the preset characterization symbol with at least one of color change, size change, rotation and jitter.
  • In another embodiment of the processing device, the shooting guidance information includes at least one of the following: adjust the shooting direction; adjust the shooting angle; adjust the shooting distance; adjust the shooting lighting.
  • In another embodiment of the processing device, the form in which the shooting guidance information is displayed in the current shooting window includes at least one of a symbol, text, voice, animation, video and vibration.
  • the processor recognizes that the presence of the first damage in the current shooting window comprises:
  • the damage recognition result returned by the server is received, and the damage recognition result includes a processing result obtained by the damage recognition server using the pre-trained deep neural network to perform damage identification on the acquired image.
  • In another embodiment of the processing device, when the processor identifies that damage exists in the current shooting window, it executes a shooting strategy that adjusts at least a parameter including the shooting frame rate to photograph the damaged area.
  • the processor further performs:
  • transmitting captured images that meet the loss assessment image collection requirements to a loss assessment server.
  • processing device described above in the above embodiments may further include other scalable embodiments according to the description of the related method embodiments.
  • the above instructions may be stored in a variety of computer readable storage media.
  • the computer readable storage medium may include physical means for storing information, which may be digitized and stored in a medium utilizing electrical, magnetic or optical means.
  • the computer readable storage medium of this embodiment may include: means for storing information by means of electrical energy, such as various types of memories, such as RAM, ROM, etc.; means for storing information by magnetic energy means, such as hard disk, floppy disk, magnetic tape, magnetic Core memory, bubble memory, U disk; means for optically storing information such as CD or DVD.
  • Of course, there are also other forms of readable storage media, such as quantum memories, graphene memories, and the like.
  • The above method or apparatus embodiments can be applied to a client on the user side, such as a smartphone. Accordingly, the present specification provides a client comprising a processor and a memory for storing processor-executable instructions, the processor implementing, when executing the instructions: displaying shooting guidance information for photographing a first damaged area of a vehicle; if a first damage is identified in the current shooting window, determining a first damage area of the first damage; after rendering the first damage area in a salient manner, superimposing and displaying the rendered first damage area in the current shooting window using virtual reality; and displaying shooting guidance information for the first damage area.
  • an embodiment of the present specification further provides an electronic device including a display screen, a processor, and a memory storing processor executable instructions.
  • FIG. 8 is a schematic structural diagram of an embodiment of an electronic device according to the present disclosure.
  • When the processor executes the instructions, the method steps described in any one of the embodiments of this specification may be implemented.
  • Although the embodiments of the present specification mention operations and data descriptions such as AR technology, display of shooting guidance information, shooting guidance through user interaction, preliminary identification of damage locations using deep neural networks, and the related data acquisition, position alignment, interaction, computation and judgment, the embodiments of this specification are not limited to situations that necessarily comply with industry communication standards, standard image data processing protocols, communication protocols, standard data models/templates, or the situations described in the embodiments of this specification.
  • Implementations slightly modified on the basis of certain industry standards, or of custom approaches, or of the embodiments described above, can also achieve implementation effects that are the same as, equivalent to or similar to those of the above embodiments, or that are predictable after variation.
  • Embodiments obtained by applying such modified or varied ways of acquiring, storing, judging and processing data may still fall within the scope of the optional embodiments of this specification.
  • the controller can be implemented in any suitable manner, for example, the controller can take the form of, for example, a microprocessor or processor and a computer readable medium storing computer readable program code (eg, software or firmware) executable by the (micro)processor.
  • examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, The Microchip PIC18F26K20 and the Silicone Labs C8051F320, the memory controller can also be implemented as part of the memory's control logic.
  • Besides implementing a controller purely as computer-readable program code, the method steps can be logically programmed so that the controller realizes the same functions in the form of logic gates, switches, application-specific integrated circuits (ASICs), programmable logic controllers, embedded microcontrollers and the like.
  • Such a controller can therefore be considered a hardware component, and the means for implementing various functions included therein can also be considered as a structure within the hardware component.
  • a device for implementing various functions can be considered as a software module that can be both a method of implementation and a structure within a hardware component.
  • the system, device, module or unit illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product having a certain function.
  • a typical implementation device is a computer.
  • the computer can be, for example, a personal computer, a laptop computer, an in-vehicle human-machine interaction device, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
  • the above devices are described as being separately divided into various modules by function.
  • Of course, when implementing the embodiments of this specification, the functions of the modules may be implemented in one or more pieces of software and/or hardware, or modules implementing the same function may be implemented by a combination of multiple sub-modules or sub-units.
  • the device embodiments described above are merely illustrative.
  • the division of the unit is only a logical function division.
  • In actual implementation there may be other ways of dividing them; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • These computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, which implements the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing device such that a series of operational steps are performed on a computer or other programmable device to produce computer-implemented processing for execution on a computer or other programmable device.
  • the instructions provide steps for implementing the functions specified in one or more of the flow or in a block or blocks of a flow diagram.
  • a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • the memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory in a computer readable medium, such as read only memory (ROM) or flash memory.
  • Memory is an example of a computer readable medium.
  • Computer readable media includes both permanent and non-persistent, removable and non-removable media.
  • Information storage can be implemented by any method or technology.
  • the information can be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic tape cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
  • computer readable media does not include temporary storage of computer readable media, such as modulated data signals and carrier waves.
  • embodiments of the present specification can be provided as a method, system, or computer program product.
  • embodiments of the present specification can take the form of an entirely hardware embodiment, an entirely software embodiment or a combination of software and hardware.
  • embodiments of the present specification can take the form of a computer program product embodied on one or more computer usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
  • Embodiments of the present description can be described in the general context of computer-executable instructions executed by a computer, such as a program module.
  • program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types.
  • Embodiments of the present specification can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are connected through a communication network.
  • program modules can be located in both local and remote computer storage media including storage devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Processing Or Creating Images (AREA)
  • Traffic Control Systems (AREA)
  • Studio Devices (AREA)

Abstract

The embodiments of this specification disclose a data processing method, apparatus, processing device and client for vehicle damage assessment. The damaged parts of a vehicle can be automatically identified on the user's mobile device, the areas that need to be photographed are marked in the shooting screen in an easily recognizable way, and the user is continuously guided to take photos or videos of those areas, so that even without professional knowledge the user can complete the shooting required for loss assessment in a way that meets the loss assessment processing requirements, improving the processing efficiency of vehicle loss assessment and the user's interactive loss assessment experience.

Description

Data processing method, apparatus, processing device and client for vehicle damage assessment
Technical Field
The embodiments of this specification belong to the technical field of insurance service data processing on computer terminals, and in particular relate to a data processing method, apparatus, processing device and client for vehicle damage assessment.
Background
Motor vehicle insurance, i.e. automobile insurance (or car insurance for short), is a type of commercial insurance that covers liability for personal injury or property loss caused to motor vehicles by natural disasters or accidents. With the development of the economy, the number of motor vehicles keeps increasing; at present, car insurance has become one of the largest lines of business in China's property insurance industry.
In the car insurance industry, when a vehicle accident occurs and the owner files a claim, the insurance company needs to assess the degree of damage to the vehicle in order to determine the list of items to be repaired, the amount of compensation, and so on. The current assessment methods mainly include an on-site assessment of the accident vehicle by a surveyor from the insurance company or a third-party public appraisal agency, or photographs of the accident vehicle taken by the user under the guidance of insurance company personnel and transmitted to the insurance company over the network, after which a loss assessor determines the damage remotely from the photos. In these current ways of obtaining loss assessment images, the insurance company has to dispatch vehicles and personnel to the accident scene for the survey, which is relatively costly; the owner has to spend considerable time waiting for the surveyor to arrive at the scene, which makes for a poor experience; and when the owner takes photos on his or her own, the lack of experience often means that the surveyor has to provide guidance by remote telephone or video call, which is time-consuming and laborious. Even with such remote guidance, in some cases a large number of the photos taken this way are invalid; when invalid loss assessment images are collected, the owner has to reshoot and may even have lost the opportunity to shoot at all, seriously affecting loss assessment efficiency and the user's loss assessment service experience.
Therefore, the industry urgently needs a simpler, more convenient and faster vehicle loss assessment processing solution.
Summary of the Invention
The purpose of the embodiments of this specification is to provide a data processing method, apparatus, processing device and client for vehicle damage assessment, with which the damaged parts of a vehicle can be automatically identified on a mobile device, the areas that need to be photographed are marked in the shooting screen in an easily recognizable way, and the user is continuously guided to take photos or videos of those areas, so that even without professional knowledge the user can complete the shooting required for loss assessment in a way that meets the loss assessment processing requirements, improving the processing efficiency of vehicle loss assessment and the user's interactive loss assessment experience.
The data processing method, apparatus, processing device and client for vehicle damage assessment provided by the embodiments of this specification are implemented in the following ways:
A data processing method for vehicle damage assessment, the method comprising:
displaying shooting guidance information for photographing a first damaged area of a vehicle;
if a first damage is identified in the current shooting window, determining a first damage area of the first damage;
after rendering the first damage area in a salient manner, superimposing and displaying the rendered first damage area in the current shooting window using virtual reality;
displaying shooting guidance information for the first damage area.
A data processing apparatus for vehicle damage assessment, the apparatus comprising:
a first prompting module, configured to display shooting guidance information for photographing a first damaged area of a vehicle;
a damage identification result module, configured to determine a first damage area of a first damage if the first damage is identified in the current shooting window;
a salient display module, configured to, after rendering the first damage area in a salient manner, superimpose and display the rendered first damage area in the current shooting window using virtual reality;
a second prompting module, configured to display shooting guidance information for the first damage area.
A data processing device for vehicle damage assessment, comprising a processor and a memory for storing processor-executable instructions, the processor implementing, when executing the instructions:
displaying shooting guidance information for photographing a first damaged area of a vehicle;
if a first damage is identified in the current shooting window, determining a first damage area of the first damage;
after rendering the first damage area in a salient manner, superimposing and displaying the rendered first damage area in the current shooting window using virtual reality;
displaying shooting guidance information for the first damage area.
A client, comprising a processor and a memory for storing processor-executable instructions, the processor implementing, when executing the instructions:
displaying shooting guidance information for photographing a first damaged area of a vehicle;
if a first damage is identified in the current shooting window, determining a first damage area of the first damage;
after rendering the first damage area in a salient manner, superimposing and displaying the rendered first damage area in the current shooting window using virtual reality;
displaying shooting guidance information for the first damage area.
An electronic device, comprising a display screen, a processor, and a memory storing processor-executable instructions, wherein the processor, when executing the instructions, implements the method steps described in any one of the embodiments of this specification.
With the data processing method, apparatus, processing device and client for vehicle damage assessment provided by the embodiments of this specification, the damaged parts of a vehicle can be automatically identified on a mobile device, the areas that need to be photographed are marked in the shooting screen in an easily recognizable way, and the user is continuously guided to take photos or videos of those areas, so that even without professional knowledge the user can complete the shooting required for loss assessment in a way that meets the loss assessment processing requirements, improving the processing efficiency of vehicle loss assessment and the user's interactive loss assessment experience.
Brief Description of the Drawings
In order to explain the technical solutions in the embodiments of this specification or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some of the embodiments recorded in this specification, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic flowchart of an embodiment of the data processing method for vehicle damage assessment provided by this specification;
FIG. 2 is a schematic diagram of a deep neural network model used in an embodiment of the method described in this specification;
FIG. 3 is a schematic diagram provided by this specification in which small-dot rendering is used to mark a damaged area;
FIG. 4 is a schematic diagram of an implementation scenario of a shooting guidance embodiment in the method provided by this specification;
FIG. 5 is a schematic diagram of an implementation scenario of another embodiment of the method provided by this specification;
FIG. 6 is a hardware structure block diagram of a client applying the interactive vehicle loss assessment processing of the method or apparatus embodiments of the present invention;
FIG. 7 is a schematic diagram of the module structure of an embodiment of the data processing apparatus for vehicle damage assessment provided by this specification;
FIG. 8 is a schematic structural diagram of an embodiment of an electronic device provided by this specification.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions in this specification, the technical solutions in the embodiments of this specification will be described clearly and completely below with reference to the drawings in the embodiments of this specification. Obviously, the described embodiments are only some rather than all of the embodiments of this specification. Based on one or more embodiments in this specification, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the scope of protection of the embodiments of this specification.
An implementation provided by this specification can be applied in a client/server system architecture. The client may include a terminal device with a shooting function used by personnel at the vehicle damage scene (who may be the owner of the accident vehicle, insurance company staff, or other personnel carrying out the loss assessment process), such as a smartphone, a tablet computer, a smart wearable device or a dedicated loss assessment terminal. The client may have a communication module and may be communicatively connected with a remote server to implement data transmission with the server. The server may include a server on the insurance company side or on the loss assessment service provider side; other implementation scenarios may also include servers of other service parties, for example the terminal of a parts supplier or of a vehicle repair shop that has a communication link with the server of the loss assessment service provider. The server may include a single computer device, a server cluster composed of multiple servers, or a server of a distributed system. In some application scenarios, the client side can send the image data captured at the scene to the server in real time, the server side performs damage identification, and the identification result can be fed back to the client. In implementations where processing is done on the server side, damage identification and similar processing are performed by the server, whose processing speed is usually higher than that of the client, which can reduce the processing load on the client and speed up damage identification. Of course, this specification does not exclude embodiments in which all or part of the above processing is implemented on the client side, such as real-time detection and identification of damage on the client.
When users photograph vehicle damage themselves, they often face the following problems: 1. the user does not fully understand which damaged parts need to be photographed (for example, a scratch is mainly on the front door with only a small amount on the rear door and is overlooked by the user, but the rear door also needs repainting and therefore its damage must also be photographed); 2. the user cannot recognize all the damage (for example, a slight dent is difficult for an ordinary person to see with the naked eye); 3. it is difficult for the user to accurately control factors such as the shooting distance, the angle, and the proportion of the damaged part in the frame. To this end, the present invention provides a data processing method for vehicle damage assessment that can be applied on mobile devices, which can mark the area to be photographed in the shooting screen in an easily recognizable way and continuously guide the user to take photos or videos of that area, so that the user can complete the shooting required for loss assessment without professional knowledge.
The implementation of this specification is described below taking a specific mobile-phone client application scenario as an example. Specifically, FIG. 1 is a schematic flowchart of an embodiment of the data processing method for vehicle damage assessment provided by this specification. Although this specification provides the method operation steps or device structures shown in the following embodiments or drawings, the method or device may, based on conventional practice or without inventive effort, include more operation steps or module units, or fewer after partial merging. For steps or structures with no logically necessary causal relationship, the execution order of these steps or the module structure of the device is not limited to the execution order or module structure shown in the embodiments or drawings of this specification. When the method or module structure is applied in an actual apparatus, server or end product, it may be executed sequentially or in parallel according to the method or module structure shown in the embodiments or drawings (for example, in an environment of parallel processors or multi-threaded processing, or even in an implementation environment including distributed processing and server clusters). Of course, the description of the following embodiments does not limit other technical solutions that can be extended on the basis of this specification, for example in other implementation scenarios. In a specific embodiment, as shown in FIG. 1, in an embodiment of the data processing method for vehicle damage assessment provided by this specification, the method may include:
S0: displaying shooting guidance information for photographing a first damaged area of a vehicle;
S2: if a first damage is identified in the current shooting window, determining a first damage area of the first damage;
S4: after rendering the first damage area in a salient manner, superimposing and displaying the rendered first damage area in the current shooting window using virtual reality;
S6: displaying shooting guidance information for the first damage area.
In this embodiment, the client on the user side may be a smartphone with a shooting function. At the scene of a vehicle accident, the user can open a mobile phone application implementing an embodiment of this specification to frame and shoot the accident scene. After the application is opened, a shooting window can be displayed on the client's display screen, and the vehicle is photographed through the shooting window. The shooting window may be a video shooting window used by the terminal to frame the vehicle damage scene (image acquisition), and the image information obtained through the camera device integrated in the client can be displayed in the shooting window. The specific interface structure of the shooting window and the related information displayed can be designed in a customized way.
During vehicle shooting, feature data of the vehicle can be acquired. The feature data can be set specifically according to data processing requirements such as vehicle identification, environment identification and image identification. In general, the feature data may include data information of the identified components of the vehicle, which can be used to construct 3D coordinate information and establish an augmented reality space model of the vehicle (an AR space model, a form of data representation such as the contour figure of the vehicle body). Of course, the feature data may also include other data information such as the brand, model, colour, contour and unique identification code of the vehicle.
When the client enables the loss assessment service, it can display guidance information for shooting the damaged area. For ease of description, the damaged area currently or initially to be photographed is referred to as the first damaged area. For example, in one application instance, when the user starts the loss assessment service, the application can prompt the user to aim at the possibly damaged parts of the vehicle from a distance at which the whole vehicle can be seen clearly. If necessary, the user may be prompted to move around the vehicle body; if no damage is found during the initial shooting, the user is prompted to photograph the whole vehicle counterclockwise. When damage is identified in the current shooting window (it may be called the first damage at this point), the damage area corresponding to the damage can be further calculated and determined.
In some embodiments of this specification, damage identification may be performed on the client side or on the server side; in the latter case the server may be referred to as a damage identification server. In some application scenarios, or where computing capability allows, the images captured by the client can be identified for damage directly on the client, along with other loss assessment data processing, which can reduce network transmission overhead. Of course, as noted above, the computing capability of the server side is usually stronger than that of the client. Therefore, in another embodiment of the method provided by this specification, damage identification can be handled on the server side. Specifically, identifying that the first damage exists in the current shooting window may include:
S20: sending the captured image obtained by shooting to a damage identification server;
S22: receiving a damage identification result returned by the server, where the damage identification result includes a processing result obtained by the damage identification server performing damage identification on the captured image using a pre-trained deep neural network.
It should be noted that the identification of the first damage described in this embodiment refers to the current damage identification process; the "first" does not limit damage identification processing of images captured for other damages.
In the above embodiments, the client or server side may use a deep neural network, trained and constructed in advance or in real time, to identify damage in the image, such as the damage location, the damaged component and the damage type.
Deep neural networks can be used for object detection and semantic segmentation, finding the location of a target in an input image. FIG. 2 is a schematic diagram of a deep neural network model used in an embodiment of the method described in this specification. FIG. 2 depicts Faster R-CNN, a fairly typical deep neural network; by annotating the damaged areas in a large number of pictures in advance, a deep neural network can be trained that gives the extent of the damaged area for pictures of the vehicle taken from various orientations and under various lighting conditions. In addition, in some embodiments of this specification, a network structure customized for mobile devices may be used, for example one based on the typical MobileNet or SqueezeNet architectures or improvements thereof, so that the model can run in an environment with lower power consumption, less memory and a slower processor, such as the client's mobile terminal runtime environment.
After the first damage area is determined, the area can be rendered in a salient manner, and the area covered by the rendered damage is superimposed on the shooting screen by AR technology. Salient rendering mainly means using some distinctive rendering style in the shooting screen to mark the damage area, so that the damage area is easy to recognize or stands out. The specific rendering manner is not limited in this embodiment; specific constraints or conditions for achieving salient rendering may be set.
In another embodiment of the method provided by this specification, the salient rendering may include:
S40: marking the first damage area with a preset characterization symbol, where the preset characterization symbol includes one of the following:
a dot, a guide line, a regular graphic frame, an irregular graphic frame, or a custom graphic.
FIG. 3 is a schematic diagram provided by this specification in which small-dot rendering is used to mark the damaged area. Of course, in other implementations the preset characterization symbol may take other forms, such as a guide line, a regular graphic frame, an irregular graphic frame or a custom graphic; other embodiments may also use text, characters, data and the like to mark the damaged area and direct the user to photograph it. One or more kinds of preset characterization symbols can be used for rendering. In this embodiment, using preset characterization symbols to mark the damaged area makes the location of the damage more clearly visible in the shooting window, helping the user locate it quickly and guiding the shooting.
In another embodiment of the method provided by this specification, a dynamic rendering effect may also be used to mark the damaged area, directing the user to photograph the damaged area in an even more obvious way. Specifically, in another embodiment, the salient rendering includes:
S400: animating the preset characterization symbol with at least one of colour change, size change, rotation and jitter.
In some embodiments of this specification, AR can be combined to superimpose the boundary of the actual damage, prompting the user to aim the framing frame at that part of the damage for shooting. Augmented reality (AR) generally refers to a technical solution that computes the position and angle of the camera image in real time and adds corresponding images, videos and 3D models, so that the virtual world can be overlaid on the real world on the screen and interacted with. In the embodiments of this specification, the augmented reality space model constructed from the feature data may be the contour information of the vehicle; specifically, the contour of the vehicle may be constructed from multiple items of feature data such as the acquired vehicle model and shooting angle, and the positions of the tyres, roof, front face, headlights, taillights and front and rear windows. The contour may include a data model established on 3D coordinates, carrying the corresponding 3D coordinate information. The constructed contour can then be displayed in the shooting window. Of course, this specification does not exclude that the augmented reality space model described in other embodiments may also take other model forms or include other model information added on top of the contour.
The AR model can be matched against the real vehicle position during shooting, for example by superimposing the constructed 3D contour onto the contour position of the real vehicle; matching can be considered complete when the two match exactly or the degree of matching reaches a threshold. In the specific matching process, the framing direction can be guided: by guiding the user to move the shooting direction or angle, the constructed contour is aligned with the contour of the real vehicle being photographed. By combining augmented reality technology, the embodiments of this specification display not only the real information of the vehicle actually photographed by the user's client but also the constructed augmented reality space model of the vehicle; the two kinds of information complement and overlay each other and can provide a better loss assessment service experience.
A shooting window combined with the AR space model can show the vehicle scene more intuitively and can effectively guide the localization and shooting of vehicle damage positions. The client can provide damage identification guidance in the AR scene, which may specifically include displaying shooting guidance information determined from the image information acquired in the shooting window. The client can obtain the image information of the AR scene in the shooting window, analyze it, and determine from the analysis result what shooting guidance information needs to be displayed in the shooting window. For example, if the vehicle in the current shooting window is far away, the user can be prompted in the shooting window to move closer; if the shooting position is too far to the left and the tail of the vehicle cannot be captured, shooting guidance information can be displayed prompting the user to pan the shooting angle to the right. Which data the damage identification guidance processes, and under what conditions which shooting guidance information is displayed, can be governed by preset policies or rules, which are not described one by one in this embodiment.
In this embodiment, shooting guidance information for the first damage area can be displayed. Specifically, the shooting guidance information to display may be determined from the current shooting information and the position information of the first damage area. For example, if a scratch is detected on the rear fender of the vehicle, and the scratch needs to be photographed head-on and along its direction, but the current shooting position and angle indicate that the user is shooting at a 45-degree angle and is relatively far from the scratch, the user can be prompted to move closer to the scratch and to shoot it head-on and along its direction. The shooting guidance information can be adjusted in real time according to the current view; for example, once the user is close enough to the scratch to meet the shooting requirements, the prompt to move closer need no longer be displayed. The suspected damage can be identified on the client or server side.
The shooting guidance information to display during shooting and the corresponding shooting conditions can be set according to the loss assessment interaction design or the damage processing requirements. In one embodiment provided by this specification, the shooting guidance information may include at least one of the following:
adjust the shooting direction;
adjust the shooting angle;
adjust the shooting distance;
adjust the shooting lighting.
An example of shooting guidance is shown in FIG. 4. Real-time shooting guidance information allows the user to carry out loss assessment more conveniently and efficiently. The user can shoot according to the shooting guidance information without professional shooting skills or cumbersome shooting operations, giving a better user experience. The above embodiment describes shooting guidance information displayed as text; in extended embodiments, the shooting guidance information may also be presented as images, voice, animation, vibration and the like, for example using an arrow or a voice prompt to direct the current shooting frame toward a certain area. Therefore, in another embodiment of the method, the form in which the shooting guidance information is displayed in the current shooting window includes at least one of a symbol, text, voice, animation, video and vibration.
In another embodiment scenario of the method, when the user aims the camera of the mobile device at the vehicle, shooting may proceed at a certain frame rate (e.g., 15 frames/s), and the deep neural network trained as described above can then be used to identify the images. Once damage is detected, a new shooting strategy can be started for the damaged area, such as increasing the frame rate (e.g., to 30 frames/s) and adjusting other parameters, so as to keep tracking the position of that area in the current shooting window at a higher speed and lower power consumption. In this way, shooting parameters can be adjusted for different shooting areas and different shooting strategies can be used, flexibly adapting to different shooting scenes, reinforcing the shooting of key areas while reducing power consumption by lowering the frame rate for non-key areas. Therefore, in another embodiment of the method provided by this specification, when damage is identified in the current shooting window, a shooting strategy that adjusts at least a parameter including the shooting frame rate is used to photograph the damaged area.
Of course, other parameters such as exposure and brightness may also be adjusted. The specific shooting strategy can be customized for the shooting scene.
Further, after enough photos or videos of the first damage area have been collected (meeting the loss assessment image collection requirements), the user can be prompted and guided to shoot the next damage, until all damage has been photographed. In this way, after the user shoots one damage, guidance continues to the next one, reducing missed damage, lowering the user's involvement in damage identification and improving the user experience. Therefore, in another embodiment of the method, as shown in FIG. 5, the method may further include:
S8: if it is determined that shooting of the first damage area is complete, displaying shooting guidance information for photographing a second damaged area of the vehicle, until shooting of all identified damage is complete.
The client application can send the captured damage images back to the insurance company for subsequent manual or automatic loss assessment processing, which can also avoid or reduce the risk of users forging loss assessment images for insurance fraud. Therefore, in another embodiment of the method provided by this specification, the method further includes:
S10: transmitting captured images that meet the loss assessment image collection requirements to a loss assessment server.
The loss assessment server may include a server on the insurance company side, and may also include a server of the loss assessment service provider. Transmission to the loss assessment server may be direct from the client, or indirect. Of course, the qualified loss assessment images may also be sent to both the insurance company's server and the loss assessment service provider's server, such as the server side of a loss assessment service provided by a payment application.
It should be noted that the term "real time" described in the above embodiments may include sending, receiving or displaying immediately after certain data information is acquired or determined; those skilled in the art will understand that sending, receiving or displaying after buffering or after expected computation and waiting time can still fall within the defined scope of "real time". The images described in the embodiments of this specification may include video, which can be regarded as a continuous set of images.
In addition, the captured images or the qualified loss assessment images obtained in the solutions of the embodiments of this specification may be stored on the local client or uploaded to a remote server in real time. After some tamper-proofing is applied to the data stored on the local client, or after the data is uploaded to server storage, the loss assessment data can be effectively protected against tampering, and against insurance fraud that misappropriates image data not from this accident. Therefore, the embodiments of this specification can also improve the data security of loss assessment processing and the reliability of loss assessment results.
The above embodiments describe an implementation of the data processing method in which a user performs vehicle loss assessment on a mobile phone client. It should be noted that the methods described above in the embodiments of this specification can be implemented in a variety of processing devices, such as dedicated loss assessment terminals, and in implementation scenarios that include a client-and-server architecture.
The method embodiments in this specification are described in a progressive manner; for identical or similar parts between the embodiments, reference may be made to each other, and each embodiment focuses on its differences from the other embodiments. For related parts, refer to the partial description of the method embodiments.
The method embodiments provided by the embodiments of this application can be executed on a mobile terminal, a PC terminal, a dedicated loss assessment terminal, a server, or a similar computing device. Taking a mobile terminal as an example, FIG. 6 is a hardware structure block diagram of a client applying the interactive vehicle loss assessment processing of the method or apparatus embodiments of the present invention. As shown in FIG. 6, the client 10 may include one or more processors 102 (only one is shown; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 104 for storing data, and a transmission module 106 for communication functions. Those of ordinary skill in the art will understand that the structure shown in FIG. 6 is merely illustrative and does not limit the structure of the above electronic device. For example, the client 10 may include more or fewer components than shown in FIG. 6, for example other processing hardware such as a GPU (Graphics Processing Unit), or may have a different configuration from that shown.
The memory 104 can be used to store software programs and modules of application software, such as the program instructions/modules corresponding to the search method in the embodiments of this specification; by running the software programs and modules stored in the memory 104, the processor 102 executes various functional applications and data processing, that is, implements the processing method for displaying the content of the above navigation interaction interface. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located relative to the processor 102, and such remote memory may be connected to the client 10 over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission module 106 is configured to receive or send data via a network. Specific examples of the above network may include a wireless network provided by the communication provider of the computer terminal 10. In one example, the transmission module 106 includes a Network Interface Controller (NIC) that can be connected to other network devices through a base station so as to communicate with the Internet. In another example, the transmission module 106 may be a Radio Frequency (RF) module used to communicate with the Internet wirelessly.
Based on the image object localization method described above, this specification also provides a data processing apparatus for vehicle damage assessment. The apparatus may include systems (including distributed systems), software (applications), modules, components, servers, clients and the like that use the methods described in the embodiments of this specification, combined with the necessary hardware for implementation. Based on the same innovative concept, the processing apparatus in one embodiment provided by this specification is as described in the following embodiment. Since the way the apparatus solves the problem is similar to that of the method, the implementation of the specific processing apparatus in the embodiments of this specification may refer to the implementation of the foregoing method, and repeated parts will not be described again. Although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and conceivable. Specifically, as shown in FIG. 7, FIG. 7 is a schematic diagram of the module structure of an embodiment of the data processing apparatus for vehicle damage assessment provided by this specification, which may specifically include:
a first prompting module 201, which can be used to display shooting guidance information for photographing a first damaged area of a vehicle;
a damage identification result module 202, which can be used to determine a first damage area of a first damage if the first damage is identified in the current shooting window;
a salient display module 203, which can be used to, after rendering the first damage area in a salient manner, superimpose and display the rendered first damage area in the current shooting window using virtual reality;
a second prompting module 204, which can be used to display shooting guidance information for the first damage area.
It should be noted that, following the description of the related method embodiments, the apparatus described in the above embodiments may also include other implementations, such as a rendering processing module that performs the rendering or an AR display module that performs the AR processing. For specific implementations, refer to the description of the method embodiments, which will not be repeated here one by one.
The device model identification method provided by the embodiments of this specification may be implemented in a computer by a processor executing corresponding program instructions, for example implemented on the PC/server side in C++/Java on a Windows/Linux operating system, implemented with the necessary hardware using the application design languages corresponding to other systems such as Android or iOS, or implemented with processing logic based on a quantum computer, and so on. Specifically, in an embodiment in which a data processing device for vehicle damage assessment provided by this specification implements the above method, the processing device may include a processor and a memory for storing processor-executable instructions, and the processor, when executing the instructions, implements:
displaying shooting guidance information for photographing a first damaged area of a vehicle;
if a first damage is identified in the current shooting window, determining a first damage area of the first damage;
after rendering the first damage area in a salient manner, superimposing and displaying the rendered first damage area in the current shooting window using virtual reality;
displaying shooting guidance information for the first damage area.
Based on the description of the foregoing method embodiments, in another embodiment of the processing device, the processor further implements:
if it is determined that shooting of the first damage area is complete, displaying shooting guidance information for photographing a second damaged area of the vehicle, until shooting of all identified damage is complete.
Based on the description of the foregoing method embodiments, in another embodiment of the processing device, the salient rendering includes:
marking the first damage area with a preset characterization symbol, where the preset characterization symbol includes one of the following:
a dot, a guide line, a regular graphic frame, an irregular graphic frame, or a custom graphic.
Based on the description of the foregoing method embodiments, in another embodiment of the processing device, the salient rendering includes:
animating the preset characterization symbol with at least one of colour change, size change, rotation and jitter.
Based on the description of the foregoing method embodiments, in another embodiment of the processing device, the shooting guidance information includes at least one of the following:
adjust the shooting direction;
adjust the shooting angle;
adjust the shooting distance;
adjust the shooting lighting.
Based on the description of the foregoing method embodiments, in another embodiment of the processing device, the form in which the shooting guidance information is displayed in the current shooting window includes at least one of a symbol, text, voice, animation, video and vibration.
Based on the description of the foregoing method embodiments, in another embodiment of the processing device, the processor identifying that the first damage exists in the current shooting window includes:
sending the captured image obtained by shooting to a damage identification server;
receiving a damage identification result returned by the server, where the damage identification result includes a processing result obtained by the damage identification server performing damage identification on the captured image using a pre-trained deep neural network.
Based on the description of the foregoing method embodiments, in another embodiment of the processing device, when the processor identifies that damage exists in the current shooting window, it executes a shooting strategy that adjusts at least a parameter including the shooting frame rate to photograph the damaged area.
Based on the description of the foregoing method embodiments, in another embodiment of the processing device, the processor further implements:
transmitting captured images that meet the loss assessment image collection requirements to a loss assessment server.
It should be noted that, following the description of the related method embodiments, the processing device described in the above embodiments may also include other extended implementations. For specific implementations, refer to the description of the method embodiments, which will not be repeated here one by one.
The above instructions may be stored in a variety of computer-readable storage media. The computer-readable storage medium may include a physical device for storing information, typically by digitizing the information and then storing it in a medium that uses electrical, magnetic or optical means. The computer-readable storage medium described in this embodiment may include: devices that store information using electrical energy, such as various kinds of memory, e.g., RAM and ROM; devices that store information using magnetic energy, such as hard disks, floppy disks, magnetic tapes, magnetic core memories, bubble memories and USB flash drives; and devices that store information optically, such as CDs or DVDs. Of course, there are also other forms of readable storage media, such as quantum memories, graphene memories, and so on. The instructions in the apparatus, server, client or system described in the embodiments of this specification are as described above.
The above method or apparatus embodiments can be applied to a client on the user side, such as a smartphone. Therefore, this specification provides a client comprising a processor and a memory for storing processor-executable instructions, the processor implementing, when executing the instructions:
displaying shooting guidance information for photographing a first damaged area of a vehicle;
if a first damage is identified in the current shooting window, determining a first damage area of the first damage;
after rendering the first damage area in a salient manner, superimposing and displaying the rendered first damage area in the current shooting window using virtual reality;
displaying shooting guidance information for the first damage area.
Based on the foregoing, an embodiment of this specification further provides an electronic device comprising a display screen, a processor, and a memory storing processor-executable instructions.
FIG. 8 is a schematic structural diagram of an embodiment of an electronic device provided by this specification; when the processor executes the instructions, the method steps described in any one of the embodiments of this specification can be implemented.
The embodiments of the apparatus, client, electronic device and so on described in this specification are described in a progressive manner; for identical or similar parts between the embodiments, reference may be made to each other, and each embodiment focuses on its differences from the other embodiments. In particular, for the hardware-plus-program embodiments, since they are basically similar to the method embodiments, the description is relatively simple; for related parts, refer to the partial description of the method embodiments.
Specific embodiments of this specification have been described above. Other embodiments are within the scope of the appended claims. In some cases, the actions or steps recited in the claims may be performed in an order different from that in the embodiments and still achieve the desired results. In addition, the processes depicted in the drawings do not necessarily require the particular order shown, or a sequential order, to achieve the desired results. In some implementations, multitasking and parallel processing are also possible or may be advantageous.
Although this application provides the method operation steps as described in the embodiments or flowcharts, more or fewer operation steps may be included based on conventional practice or without creative effort. The order of steps listed in the embodiments is only one of many possible execution orders and does not represent the only execution order. When an actual apparatus or client product is executed, it may be executed sequentially or in parallel according to the methods shown in the embodiments or drawings (for example, in an environment of parallel processors or multi-threaded processing).
Although the embodiments of this specification mention operations and data descriptions such as AR technology, display of shooting guidance information, shooting guidance through user interaction, preliminary identification of damage locations using deep neural networks, and the related data acquisition, position arrangement, interaction, computation and judgment, the embodiments of this specification are not limited to situations that necessarily comply with industry communication standards, standard image data processing protocols, communication protocols, standard data models/templates, or the situations described in the embodiments of this specification. Implementations slightly modified on the basis of certain industry standards, or of implementations described in a custom manner or in the embodiments, can also achieve implementation effects that are the same as, equivalent to or similar to those of the above embodiments, or that are predictable after variation. Embodiments obtained by applying such modified or varied ways of acquiring, storing, judging and processing data may still fall within the scope of the optional implementations of this specification.
In the 1990s, an improvement to a technology could clearly be distinguished as an improvement in hardware (for example, an improvement to a circuit structure such as a diode, transistor, or switch) or an improvement in software (an improvement to a method flow). However, with the development of technology, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (PLD) (for example, a field programmable gate array (FPGA)) is such an integrated circuit whose logical functions are determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a single PLD, without needing to ask a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, nowadays, instead of manually fabricating integrated circuit chips, such programming is mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development, and the original code to be compiled must be written in a specific programming language called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); at present, VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used. A person skilled in the art will also understand that a hardware circuit implementing a logical method flow can easily be obtained simply by logically programming the method flow in one of the above hardware description languages and programming it into an integrated circuit.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (for example, software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicone Labs C8051F320. A memory controller may also be implemented as part of the control logic of a memory. A person skilled in the art also knows that, in addition to implementing a controller purely as computer-readable program code, it is entirely possible to logically program the method steps so that the controller achieves the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller can therefore be regarded as a hardware component, and the means included within it for implementing various functions can also be regarded as structures within the hardware component. Or even, the means for implementing various functions can be regarded both as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module, or unit illustrated in the above embodiments may specifically be implemented by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer. Specifically, the computer may be, for example, a personal computer, a laptop computer, an in-vehicle human-machine interaction device, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an e-mail device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
Although the embodiments of this specification provide method operation steps as described in the embodiments or flowcharts, more or fewer operation steps may be included based on conventional or non-inventive means. The order of steps listed in the embodiments is merely one of many possible execution orders and does not represent the only execution order. When an actual apparatus or terminal product is executed, the steps may be executed in the order shown in the embodiments or the drawings, or in parallel (for example, in an environment with parallel processors or multi-threaded processing, or even in a distributed data processing environment). The terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, product, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or also includes elements inherent to such a process, method, product, or device. Without further limitation, the presence of additional identical or equivalent elements in the process, method, product, or device that includes the stated elements is not excluded.
For convenience of description, the above apparatus is described by dividing its functions into various modules. Of course, when implementing the embodiments of this specification, the functions of the modules may be implemented in one or more pieces of software and/or hardware, or a module implementing a given function may be implemented by a combination of multiple sub-modules or sub-units, and so on. The apparatus embodiments described above are merely illustrative. For example, the division into units is merely a logical functional division, and other divisions are possible in an actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical, or in other forms.
A person skilled in the art also knows that, in addition to implementing a controller purely as computer-readable program code, it is entirely possible to logically program the method steps so that the controller achieves the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller can therefore be regarded as a hardware component, and the means included within it for implementing various functions can also be regarded as structures within the hardware component. Or even, the means for implementing various functions can be regarded both as software modules implementing the method and as structures within the hardware component.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture that includes instruction means which implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), an input/output interface, a network interface, and memory.
The memory may include non-permanent memory in a computer-readable medium, random access memory (RAM), and/or non-volatile memory such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
A person skilled in the art should understand that the embodiments of this specification may be provided as a method, a system, or a computer program product. Therefore, the embodiments of this specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of this specification may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
The embodiments of this specification may be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types. The embodiments of this specification may also be practiced in distributed computing environments, in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules may be located in local and remote computer storage media, including storage devices.
The embodiments in this specification are all described in a progressive manner; for identical or similar parts among the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. In particular, for the system embodiments, since they are substantially similar to the method embodiments, the description is relatively brief, and for related details reference may be made to the description of the method embodiments. In the description of this specification, descriptions referring to the terms "one embodiment", "some embodiments", "example", "specific example", "some examples", and the like mean that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the embodiments of this specification. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, where no contradiction arises, a person skilled in the art may combine different embodiments or examples described in this specification, as well as the features of different embodiments or examples.
The above are merely examples of the embodiments of this specification and are not intended to limit the embodiments of this specification. For a person skilled in the art, various modifications and variations of the embodiments of this specification are possible. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the embodiments of this specification shall be included within the scope of the claims of the embodiments of this specification.

Claims (21)

  1. A data processing method for vehicle loss assessment, the method comprising:
    displaying shooting guidance information for photographing a first damaged area of a vehicle;
    if a first damage is recognized in the current shooting window, determining a first damage region of the first damage;
    after rendering the first damage region in a salient manner, superimposing and displaying the rendered first damage region in the current shooting window using virtual reality;
    displaying shooting guidance information for the first damage region.
  2. The method according to claim 1, the method further comprising:
    if it is determined that photographing of the first damage region is complete, displaying shooting guidance information for photographing a second damaged area of the vehicle, until photographing of all recognized damage is complete.
  3. The method according to claim 1, wherein the salient rendering comprises:
    marking the first damage region with a preset representation symbol, the preset representation symbol comprising one of the following:
    a dot, a guide line, a regular graphic frame, an irregular graphic frame, or a custom graphic.
  4. The method according to claim 3, wherein the salient rendering comprises:
    animating the preset representation symbol with at least one of a colour change, a size change, rotation, or bouncing.
  5. The method according to claim 1, wherein the shooting guidance information comprises at least one of the following:
    adjusting the shooting direction;
    adjusting the shooting angle;
    adjusting the shooting distance;
    adjusting the shooting light.
  6. The method according to claim 1, wherein the form in which the shooting guidance information is presented in the current shooting window comprises at least one of a symbol, text, voice, animation, video, or vibration.
  7. The method according to claim 1, wherein, when damage is recognized in the current shooting window, the damage region is photographed using a shooting strategy that adjusts at least parameters including the shooting frame rate.
  8. The method according to claim 1, wherein recognizing that a first damage exists in the current shooting window comprises:
    sending a captured image obtained by shooting to a damage recognition server;
    receiving a damage recognition result returned by the server, the damage recognition result comprising a processing result obtained by the damage recognition server performing damage recognition on the captured image using a pre-trained deep neural network.
  9. The method according to claim 1, the method further comprising:
    transmitting the captured images that meet the loss-assessment image collection requirements to a loss-assessment server.
  10. A data processing apparatus for vehicle loss assessment, the apparatus comprising:
    a first prompt module, configured to display shooting guidance information for photographing a first damaged area of a vehicle;
    a damage recognition result module, configured to determine a first damage region of a first damage if the first damage is recognized in the current shooting window;
    a salient display module, configured to render the first damage region in a salient manner and then, using virtual reality, superimpose and display the rendered first damage region in the current shooting window;
    a second prompt module, configured to display shooting guidance information for the first damage region.
  11. A data processing device for vehicle loss assessment, comprising a processor and a memory for storing processor-executable instructions, wherein the processor, when executing the instructions, implements:
    displaying shooting guidance information for photographing a first damaged area of a vehicle;
    if a first damage is recognized in the current shooting window, determining a first damage region of the first damage;
    after rendering the first damage region in a salient manner, superimposing and displaying the rendered first damage region in the current shooting window using virtual reality;
    displaying shooting guidance information for the first damage region.
  12. The processing device according to claim 11, wherein the processor further executes:
    if it is determined that photographing of the first damage region is complete, displaying shooting guidance information for photographing a second damaged area of the vehicle, until photographing of all recognized damage is complete.
  13. The processing device according to claim 11, wherein the salient rendering comprises:
    marking the first damage region with a preset representation symbol, the preset representation symbol comprising one of the following:
    a dot, a guide line, a regular graphic frame, an irregular graphic frame, or a custom graphic.
  14. The processing device according to claim 13, wherein the salient rendering comprises:
    animating the preset representation symbol with at least one of a colour change, a size change, rotation, or bouncing.
  15. The processing device according to claim 11, wherein the shooting guidance information comprises at least one of the following:
    adjusting the shooting direction;
    adjusting the shooting angle;
    adjusting the shooting distance;
    adjusting the shooting light.
  16. The processing device according to claim 11, wherein the form in which the shooting guidance information is presented in the current shooting window comprises at least one of a symbol, text, voice, animation, video, or vibration.
  17. The processing device according to claim 11, wherein the processor recognizing that a first damage exists in the current shooting window comprises:
    sending a captured image obtained by shooting to a damage recognition server;
    receiving a damage recognition result returned by the server, the damage recognition result comprising a processing result obtained by the damage recognition server performing damage recognition on the captured image using a pre-trained deep neural network.
  18. The processing device according to claim 11, wherein, when damage is recognized in the current shooting window, the processor photographs the damage region using a shooting strategy that adjusts at least parameters including the shooting frame rate.
  19. The processing device according to claim 11, wherein the processor further executes:
    transmitting the captured images that meet the loss-assessment image collection requirements to a loss-assessment server.
  20. A client, comprising a processor and a memory for storing processor-executable instructions, wherein the processor, when executing the instructions, implements:
    displaying shooting guidance information for photographing a first damaged area of a vehicle;
    if a first damage is recognized in the current shooting window, determining a first damage region of the first damage;
    after rendering the first damage region in a salient manner, superimposing and displaying the rendered first damage region in the current shooting window using virtual reality;
    displaying shooting guidance information for the first damage region.
  21. An electronic device, comprising a display screen, a processor, and a memory storing processor-executable instructions, wherein when the processor executes the instructions, the method steps according to any one of claims 1 to 9 are implemented.
PCT/CN2019/076028 2018-05-08 2019-02-25 一种车辆定损的数据处理方法、装置、处理设备及客户端 WO2019214319A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810432696.3 2018-05-08
CN201810432696.3A CN108632530B (zh) 2018-05-08 2018-05-08 一种车辆定损的数据处理方法、装置、设备及客户端、电子设备

Publications (1)

Publication Number Publication Date
WO2019214319A1 true WO2019214319A1 (zh) 2019-11-14

Family

ID=63695894

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/076028 WO2019214319A1 (zh) 2018-05-08 2019-02-25 一种车辆定损的数据处理方法、装置、处理设备及客户端

Country Status (3)

Country Link
CN (2) CN113179368B (zh)
TW (1) TW201947452A (zh)
WO (1) WO2019214319A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3869404A3 (en) * 2020-12-25 2022-01-26 Beijing Baidu Netcom Science And Technology Co. Ltd. Vehicle loss assessment method executed by mobile terminal, device, mobile terminal and medium
CN115174885A (zh) * 2022-06-28 2022-10-11 深圳数位大数据科技有限公司 基于ar终端的线下场景信息采集方法、平台、***及介质
EP4070251A4 (en) * 2019-12-02 2023-08-30 Click-Ins, Ltd. SYSTEMS, METHODS AND PROGRAMS FOR GENERATING A DAMAGE IMPRINT IN A VEHICLE
CN117455466A (zh) * 2023-12-22 2024-01-26 南京三百云信息科技有限公司 一种汽车远程评估的方法及***

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3403146A4 (en) * 2016-01-15 2019-08-21 iRobot Corporation AUTONOMOUS MONITORING ROBOT SYSTEMS
CN113179368B (zh) * 2018-05-08 2023-10-27 创新先进技术有限公司 一种车辆定损的数据处理方法、装置、处理设备及客户端
CN109447171A (zh) * 2018-11-05 2019-03-08 电子科技大学 一种基于深度学习的车辆姿态分类方法
CN109740547A (zh) * 2019-01-04 2019-05-10 平安科技(深圳)有限公司 一种图像处理方法、设备及计算机可读存储介质
CN110245552B (zh) * 2019-04-29 2023-07-18 创新先进技术有限公司 车损图像拍摄的交互处理方法、装置、设备及客户端
CN110427810B (zh) * 2019-06-21 2023-05-30 北京百度网讯科技有限公司 视频定损方法、装置、拍摄端及机器可读存储介质
CN110659567B (zh) * 2019-08-15 2023-01-10 创新先进技术有限公司 车辆损伤部位的识别方法以及装置
CN110650292B (zh) * 2019-10-30 2021-03-02 支付宝(杭州)信息技术有限公司 辅助用户拍摄车辆视频的方法及装置
CN111489433B (zh) * 2020-02-13 2023-04-25 北京百度网讯科技有限公司 车辆损伤定位的方法、装置、电子设备以及可读存储介质
CN111368752B (zh) * 2020-03-06 2023-06-02 德联易控科技(北京)有限公司 车辆损伤的分析方法和装置
CN111475157B (zh) * 2020-03-16 2024-04-19 中保车服科技服务股份有限公司 一种图像采集模板管理方法、装置、存储介质及平台
CN111340974A (zh) * 2020-04-03 2020-06-26 北京首汽智行科技有限公司 一种记录共享汽车车辆损坏部位的方法
CN112492105B (zh) * 2020-11-26 2022-04-15 深源恒际科技有限公司 一种基于视频的车辆外观部件自助定损采集方法及***
CN113033372B (zh) * 2021-03-19 2023-08-18 北京百度网讯科技有限公司 车辆定损方法、装置、电子设备及计算机可读存储介质
CN113486725A (zh) * 2021-06-11 2021-10-08 爱保科技有限公司 智能车辆定损方法及装置、存储介质及电子设备
CN113256778B (zh) * 2021-07-05 2021-10-12 爱保科技有限公司 生成车辆外观部件识别样本的方法、装置、介质及服务器
KR102366017B1 (ko) * 2021-07-07 2022-02-23 쿠팡 주식회사 설치 서비스를 위한 정보 제공 방법 및 장치
CN113840085A (zh) * 2021-09-02 2021-12-24 北京城市网邻信息技术有限公司 车源信息的采集方法、装置、电子设备及可读介质
CN113866167A (zh) * 2021-09-13 2021-12-31 北京逸驰科技有限公司 一种轮胎检测结果的生成方法、计算机设备及存储介质
CN114245055B (zh) * 2021-12-08 2024-04-26 深圳位置网科技有限公司 一种用于紧急呼叫情况下视频通话的方法和***
CN114637438B (zh) * 2022-03-23 2024-05-07 支付宝(杭州)信息技术有限公司 基于ar的车辆事故处理方法及装置

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160050364A1 (en) * 2014-08-18 2016-02-18 Audatex North America, Inc. System for capturing an image of a damaged vehicle
US9723251B2 (en) * 2013-04-23 2017-08-01 Jaacob I. SLOTKY Technique for image acquisition and management
CN107194323A (zh) * 2017-04-28 2017-09-22 阿里巴巴集团控股有限公司 车辆定损图像获取方法、装置、服务器和终端设备
CN107358596A (zh) * 2017-04-11 2017-11-17 阿里巴巴集团控股有限公司 一种基于图像的车辆定损方法、装置、电子设备及***
CN107368776A (zh) * 2017-04-28 2017-11-21 阿里巴巴集团控股有限公司 车辆定损图像获取方法、装置、服务器和终端设备
US20180082378A1 (en) * 2016-09-21 2018-03-22 Allstate Insurance Company Enhanced Image Capture and Analysis of Damaged Tangible Objects
CN108632530A (zh) * 2018-05-08 2018-10-09 阿里巴巴集团控股有限公司 一种车辆定损的数据处理方法、装置、处理设备及客户端
CN108665373A (zh) * 2018-05-08 2018-10-16 阿里巴巴集团控股有限公司 一种车辆定损的交互处理方法、装置、处理设备及客户端

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10748216B2 (en) * 2013-10-15 2020-08-18 Audatex North America, Inc. Mobile system for generating a damaged vehicle insurance estimate
CN107360365A (zh) * 2017-06-30 2017-11-17 盯盯拍(深圳)技术股份有限公司 拍摄方法、拍摄装置、终端以及计算机可读存储介质

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9723251B2 (en) * 2013-04-23 2017-08-01 Jaacob I. SLOTKY Technique for image acquisition and management
US20160050364A1 (en) * 2014-08-18 2016-02-18 Audatex North America, Inc. System for capturing an image of a damaged vehicle
US20180082378A1 (en) * 2016-09-21 2018-03-22 Allstate Insurance Company Enhanced Image Capture and Analysis of Damaged Tangible Objects
CN107358596A (zh) * 2017-04-11 2017-11-17 阿里巴巴集团控股有限公司 一种基于图像的车辆定损方法、装置、电子设备及***
CN107194323A (zh) * 2017-04-28 2017-09-22 阿里巴巴集团控股有限公司 车辆定损图像获取方法、装置、服务器和终端设备
CN107368776A (zh) * 2017-04-28 2017-11-21 阿里巴巴集团控股有限公司 车辆定损图像获取方法、装置、服务器和终端设备
CN108632530A (zh) * 2018-05-08 2018-10-09 阿里巴巴集团控股有限公司 一种车辆定损的数据处理方法、装置、处理设备及客户端
CN108665373A (zh) * 2018-05-08 2018-10-16 阿里巴巴集团控股有限公司 一种车辆定损的交互处理方法、装置、处理设备及客户端

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4070251A4 (en) * 2019-12-02 2023-08-30 Click-Ins, Ltd. SYSTEMS, METHODS AND PROGRAMS FOR GENERATING A DAMAGE IMPRINT IN A VEHICLE
EP3869404A3 (en) * 2020-12-25 2022-01-26 Beijing Baidu Netcom Science And Technology Co. Ltd. Vehicle loss assessment method executed by mobile terminal, device, mobile terminal and medium
CN115174885A (zh) * 2022-06-28 2022-10-11 深圳数位大数据科技有限公司 基于ar终端的线下场景信息采集方法、平台、***及介质
CN117455466A (zh) * 2023-12-22 2024-01-26 南京三百云信息科技有限公司 一种汽车远程评估的方法及***
CN117455466B (zh) * 2023-12-22 2024-03-08 南京三百云信息科技有限公司 一种汽车远程评估的方法及***

Also Published As

Publication number Publication date
CN108632530B (zh) 2021-02-23
CN113179368A (zh) 2021-07-27
CN108632530A (zh) 2018-10-09
TW201947452A (zh) 2019-12-16
CN113179368B (zh) 2023-10-27

Similar Documents

Publication Publication Date Title
WO2019214319A1 (zh) 一种车辆定损的数据处理方法、装置、处理设备及客户端
WO2019214313A1 (zh) 一种车辆定损的交互处理方法、装置、处理设备及客户端
WO2019214320A1 (zh) 车辆损伤识别的处理方法、处理设备、客户端及服务器
WO2019109730A1 (zh) 识别对象损伤的处理方法、装置、服务器、客户端
TWI759647B (zh) 影像處理方法、電子設備,和電腦可讀儲存介質
WO2019214321A1 (zh) 车辆损伤识别的处理方法、处理设备、客户端及服务器
CN110245552B (zh) 车损图像拍摄的交互处理方法、装置、设备及客户端
CN110059623B (zh) 用于生成信息的方法和装置
CN110910628B (zh) 车损图像拍摄的交互处理方法、装置、电子设备
CN110349161B (zh) 图像分割方法、装置、电子设备、及存储介质
CN114267041B (zh) 场景中对象的识别方法及装置
WO2019062631A1 (zh) 一种局部动态影像生成方法及装置
CN111382647B (zh) 一种图片处理方法、装置、设备及存储介质
CN111310815A (zh) 图像识别方法、装置、电子设备及存储介质
CN111160312A (zh) 目标识别方法、装置和电子设备
JP2023526899A (ja) 画像修復モデルを生成するための方法、デバイス、媒体及びプログラム製品
CN111325107A (zh) 检测模型训练方法、装置、电子设备和可读存储介质
KR20220004606A (ko) 신호등 식별 방법, 장치, 기기, 저장 매체 및 컴퓨터 프로그램
EP4303815A1 (en) Image processing method, electronic device, storage medium, and program product
WO2023155350A1 (zh) 一种人群定位方法及装置、电子设备和存储介质
CN110177216A (zh) 图像处理方法、装置、移动终端以及存储介质
CN110807728B (zh) 对象的显示方法、装置、电子设备及计算机可读存储介质
CN110263721B (zh) 车灯设置方法及设备
CN110609877B (zh) 一种图片采集的方法、装置、设备和计算机存储介质
WO2021214540A1 (en) Robust camera localization based on a single color component image and multi-modal learning

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19800030

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19800030

Country of ref document: EP

Kind code of ref document: A1