CN109522825A - Performance test system and performance test method for a visual perception system - Google Patents

Performance test system and performance test method for a visual perception system

Info

Publication number
CN109522825A
Authority
CN
China
Prior art keywords
target
performance test
visual perception
video data
data source
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811285542.2A
Other languages
Chinese (zh)
Inventor
陈炯
蔡云跃
高祥龙
孙鹏
杨洋
梁高铭
章健勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NIO Holding Co Ltd
Original Assignee
NIO Nextev Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NIO Nextev Ltd filed Critical NIO Nextev Ltd
Priority to CN201811285542.2A priority Critical patent/CN109522825A/en
Publication of CN109522825A publication Critical patent/CN109522825A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present invention relates to a performance test system for a visual perception system and a performance test method thereof. The performance test system of the invention comprises: a memory module for providing a video data source carrying original annotation information; a video injection unit for injecting the video data source offline into the visual perception system to be tested; and a comparative analysis unit for comparing the perception results of the visual perception system with the original annotation information in the video data source to obtain a performance evaluation result for the visual perception system, wherein the perception results are the results output by the visual perception system after performing image perception processing on the video data source. The present invention makes it possible to test the performance of a visual perception system offline, provides good consistency across performance tests of different visual perception systems, and can output quantitative performance evaluation results.

Description

Performance test system and performance test method for a visual perception system
Technical field
The invention belongs to the field of video perception testing and relates to a performance test system for a visual perception system, and a corresponding performance test method, that use a video data source carrying original annotation information.
Background technique
Environment perception is one of the key technologies in fields such as autonomous driving, and vision-based environment perception, i.e. visual perception, is an important technical route for environment perception. For example, in Tesla's Autopilot system, the detection and recognition of targets such as preceding vehicles, pedestrians and lane lines are based mainly on visual perception technology; the EyeQ family of chips released by Mobileye likewise integrates visual perception algorithms into the chip, which is then supplied to OEMs and to developers of automated driving systems.
With the development of deep learning, the recognition rate of computer vision techniques has improved markedly, which makes it possible for vision technology, or visual perception systems, to be applied in practice to environment perception for autonomous driving. However, because vision algorithms are still evolving, a visual perception system needs to undergo performance testing or performance evaluation before practical application.
At present, performance testing of visual perception systems in industry relies, on the one hand, on mounting the system on a vehicle and performing actual road tests, so that video data are acquired online and the corresponding perception results are output and then evaluated online; on the other hand, performance evaluation of visual perception systems remains at the stage of qualitative assessment, and the consistency of performance tests across different visual perception systems is poor.
Summary of the invention
An object of the present invention is to enable a visual perception system to be performance-tested offline.
Another object of the present invention is to improve the consistency of performance tests across different visual perception systems.
To achieve the above or other objects, the present invention provides the following technical solutions.
According to one aspect of the disclosure, a performance test system for a visual perception system is provided, comprising:
a memory module for providing a video data source carrying original annotation information;
a video injection unit for injecting the video data source offline into the visual perception system to be tested; and
a comparative analysis unit for comparing the perception results of the visual perception system with the original annotation information in the video data source to obtain a performance evaluation result for the visual perception system, wherein the perception results are the results output by the visual perception system after performing image perception processing on the video data source.
According to the performance test system of one embodiment of the disclosure, the original annotation information is annotation information comprising targets and their target attributes.
According to the performance test system of another embodiment of the disclosure or any of the above embodiments, the targets include one or more of: vehicles, pedestrians, traffic lights, traffic signs, and lane lines;
and the target attributes include one or more of: whether a target exists, the type of the target, the target size, the target distance, the relative velocity of the target with respect to the vehicle, the polynomial fitting coefficients of a lane line, the specific information of a traffic light, and the specific information of a traffic sign.
According to the performance test system of another embodiment of the disclosure or any of the above embodiments, the comparative analysis unit includes:
a result conversion module for converting the perception results into a standardized form according to the form of the corresponding original annotation information, so as to obtain standardized perception results;
a comparison module for comparing the standardized perception results with the original annotation information to obtain corresponding comparison results; and
a statistical module for quantitatively aggregating the comparison results to obtain a quantitative performance evaluation result for the visual perception system.
According to the performance test system of another embodiment of the disclosure or any of the above embodiments, the statistics computed by the statistical module include one or more of the following:
target recognition output capability,
overall target recognition accuracy,
overall target recognition recall,
per-class target recognition accuracy,
per-class target attribute recognition accuracy.
According to the performance test system of another embodiment of the disclosure or any of the above embodiments, the video injection unit includes:
a video format conversion module for converting the video data source into a format that matches the input of the visual perception system to be tested.
According to the performance test system of another embodiment of the disclosure or any of the above embodiments, the output interface of the video injection unit is configured to output the video data source, by default, in a MIPI CSI-2-to-LVDS or parallel-DVP-to-LVDS manner.
According to the performance test system of another embodiment of the disclosure or any of the above embodiments, the system further includes:
a display unit for displaying the performance evaluation result.
According to another aspect of the disclosure, a performance test method for a visual perception system is provided, comprising the steps of:
providing a stored video data source carrying original annotation information;
injecting the video data source offline into the visual perception system to be tested; and
comparing the perception results of the visual perception system with the original annotation information in the video data source to obtain a performance evaluation result for the visual perception system, wherein the perception results are the results output by the visual perception system after performing image perception processing on the video data source.
According to the performance test method of one embodiment of the disclosure, the original annotation information is annotation information comprising targets and their target attributes.
According to the performance test method of another embodiment of the disclosure or any of the above embodiments, the targets include one or more of: vehicles, pedestrians, traffic lights, traffic signs, and lane lines;
and the target attributes include one or more of: whether a target exists, the type of the target, the target size, the target distance, the relative velocity of the target with respect to the vehicle, the polynomial fitting coefficients of a lane line, the specific information of a traffic light, and the specific information of a traffic sign.
According to the performance test method of another embodiment of the disclosure or any of the above embodiments, the comparative analysis step includes:
converting the perception results into a standardized form according to the form of the corresponding original annotation information, so as to obtain standardized perception results;
comparing the standardized perception results with the original annotation information; and
quantitatively aggregating the comparison results to obtain the quantitative performance evaluation result of the visual perception system.
According to the performance test method of another embodiment of the disclosure or any of the above embodiments, the statistics computed include one or more of the following:
target recognition output capability,
overall target recognition accuracy,
overall target recognition recall,
per-class target recognition accuracy,
per-class target attribute recognition accuracy.
According to the performance test method of another embodiment of the disclosure or any of the above embodiments, the injection step includes:
converting the video data source into a format that matches the input of the visual perception system to be tested.
According to the performance test method of another embodiment of the disclosure or any of the above embodiments, in the injection step, the video data source is output, by default, in a MIPI CSI-2-to-LVDS or parallel-DVP-to-LVDS manner.
The above features and operation of the present invention will become more apparent from the following description and the accompanying drawings.
Brief description of the drawings
The above and other objects and advantages of the present invention will become more fully apparent from the following detailed description taken in conjunction with the accompanying drawings, in which identical or similar elements are denoted by the same reference numerals.
Fig. 1 is a schematic structural diagram of a performance test system for a visual perception system according to an embodiment of the invention.
Fig. 2 is a flowchart of a performance test method for a visual perception system according to an embodiment of the invention.
Detailed description of the embodiments
For the sake of brevity and illustration, the principles of the present invention are described herein mainly with reference to example embodiments thereof. Those skilled in the art will readily recognize, however, that the same principles are equally applicable to performance test systems and/or performance test methods for all types of visual perception systems and may be implemented therein, any such variation remaining within the true spirit and scope of the present patent application. Moreover, in the following description reference is made to the accompanying drawings, which illustrate specific example embodiments. Electrical, logical and structural changes may be made to these embodiments without departing from the spirit and scope of the present invention. In addition, although a feature of the invention may be disclosed in conjunction with only one of several implementations/embodiments, such a feature may be combined with one or more other features of other implementations/embodiments whenever this is desirable or advantageous for any given or identifiable function. The following description is therefore not to be taken in a limiting sense, and the scope of the present invention is defined by the appended claims and their equivalents.
A visual perception system can sense the environment in which it is located by means of video. The following examples are described taking as an example a visual perception system applied to a vehicle equipped with an automated driving system, where it can perceive the road environment around the vehicle and thus help realize automated driving functions. It will be understood that the visual perception system is not limited to use in automated driving systems, and the environmental objects to be perceived are not limiting either.
Herein, " offline " refers to the environment that visual perception system disengaging to be tested is sensed in it, conversely, " Line " refers to that visual perception system to be tested works in the environment of sensing, such as shooting obtains video data etc..It is existing In technology, the performance test of visual perception system needs entrucking and carries out practical drive test, this is a kind of typical on-line testing side Formula;The problem of on-line testing mode is clearly present inefficiency, also, shot in on-line testing the video data of acquisition also it is each not It is identical, it is difficult to obtain the result of consistency.
Fig. 1 shows a schematic structural diagram of a performance test system for a visual perception system according to an embodiment of the invention.
As shown in Fig. 1, the visual perception system 90 to be tested may be applied in the automated driving system of a vehicle and is configured with a corresponding vision algorithm that perceives video data of the surrounding road environment. The specific type of visual perception system 90 is not limiting. It will be understood that the performance test system 10 of the embodiment of the present invention can be used to test different visual perception systems 90 so as to obtain corresponding performance evaluation results.
The performance test system 10 may be implemented, for example, by a computer device and has a memory module 110 in which a video data source 100 carrying original annotation information 101 can be stored, so that the video data source 100 can be provided. The memory module 110 may store multiple different video data sources 100, which may be used for performance tests of the same visual perception system 90 or of different visual perception systems 90. The memory module 110 may be implemented specifically by a storage device such as a hard disk and may be provided with a corresponding interface (e.g. a USB interface), so that the corresponding video data sources 100 can be loaded into or updated in the memory module 110 from the outside.
In one embodiment, the original annotation information 101 in the video data source 100 is annotation information comprising targets and their target attributes, and the annotation information may be implemented as target-level annotation information and/or pixel-level annotation information. Taking an automated driving application as an example, the targets may include one or more of: vehicles, pedestrians (including, for example, bicycles, electric bicycles and motorcycles), traffic lights, traffic signs and lane lines; the target attributes include one or more of: whether a target exists, the type of the target, the target size, the target distance, the relative velocity of the target with respect to the vehicle, the polynomial fitting coefficients of a lane line, the specific information of a traffic light, and the specific information of a traffic sign. It will be understood that when the visual perception system 90 is used in a different system, the targets and/or target attributes of the original annotation information 101 of the video data source 100 may change correspondingly.
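Purely as an illustration (not part of the patent), target-level annotation information of this kind could be organized as in the following minimal Python sketch; all names here (TargetType, TargetAnnotation, FrameAnnotation, GroundTruth) are hypothetical and merely mirror the targets and attributes listed above.

    from dataclasses import dataclass, field
    from enum import Enum
    from typing import Optional, List, Dict


    class TargetType(Enum):
        VEHICLE = "vehicle"
        PEDESTRIAN = "pedestrian"        # may also cover bicycles, e-bikes, motorcycles
        TRAFFIC_LIGHT = "traffic_light"
        TRAFFIC_SIGN = "traffic_sign"
        LANE_LINE = "lane_line"


    @dataclass
    class TargetAnnotation:
        """One annotated target and its attributes in a single image frame."""
        target_type: TargetType
        exists: bool = True
        size: Optional[float] = None             # e.g. bounding-box area or physical size
        distance_m: Optional[float] = None       # distance from the ego vehicle
        relative_speed_mps: Optional[float] = None
        lane_poly_coeffs: Optional[List[float]] = None  # polynomial fit, lane lines only
        detail: Optional[str] = None             # e.g. traffic-light state or sign meaning


    @dataclass
    class FrameAnnotation:
        """Original annotation information for one image frame of the video data source."""
        timestamp_s: float
        targets: List[TargetAnnotation] = field(default_factory=list)


    # A video data source's original annotation information, keyed by frame timestamp.
    GroundTruth = Dict[float, FrameAnnotation]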
It will be understood that the original annotation information 101 in the video data source 100 can be regarded as the correct or reference visual perception result; it is attached to the video data source 100 in advance by means of annotation or calibration and can be made known to the performance test system 10, for example supplied to the comparative analysis unit 130. Of course, the video data source 100 also contains information other than the original annotation information 101, such as image frame content and image frame properties (e.g. time).
In one embodiment, a video injection unit 120 is provided in the performance test system 10. The video injection unit 120 can inject the video data source 100 offline into the external visual perception system 90 to be tested, so that it is supplied to the visual perception system 90 for image perception processing, for example image perception processing performed by the vision algorithm configured in the visual perception system 90. Correspondingly, the visual perception system 90 can output corresponding perception results 91. It will be understood that, as required, the video data source 100 injected into the visual perception system 90 need not include the original annotation information it carries.
Specifically, a video format conversion module 121 may be provided in the video injection unit 120; the video format conversion module 121 can convert the video data source 100 into a format that matches the input of the visual perception system 90. Different visual perception systems 90 have different input characteristics and support only video data in the corresponding format; by means of the video format conversion module 121, the video data source 100 is converted into video data in different formats, so that performance tests of different visual perception systems 90 are all supported. The video format conversion module 121 can therefore provide, for different visual perception systems 90, video data in whatever format they require.
The video format conversion module 121 may include a corresponding video codec, so that, for example, a compressed video data source 100 can be decompressed before format conversion.
Specifically, the video format conversion module 121 may be implemented in hardware such as an FPGA, and the memory module 110 may specifically be connected via PCI-e to the FPGA in the video format conversion module 121. The specific implementation of the video format conversion module 121 is not limiting.
Since the visual perception system 90 is usually configured with cameras using a CSI-2 interface (CSI being short for Camera Serial Interface) or a parallel DVP (Digital Video Port) interface, and correspondingly the input of the visual perception system 90 is generally compatible with an LVDS (Low Voltage Differential Signaling) interface, in one embodiment the video format conversion module 121 by default outputs the data source in MIPI CSI-2 (MIPI being short for Mobile Industry Processor Interface) mode or parallel DVP mode, and the output interface of the video injection unit 120 is configured to output the video data source, by default, in a MIPI CSI-2-to-LVDS or parallel-DVP-to-LVDS manner. Illustratively, the output end of the video format conversion module 121 is provided with a plurality of serializers 122, for example three serializers 122a, 122b and 122c arranged in parallel, which realize the output interface of the video injection unit 120 so that the output interface outputs in LVDS by default.
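As a loose illustration of the default output configuration described above, and nothing more, the following sketch models the choice between the two output modes; OutputMode, InjectionConfig and the three-serializer default are assumptions drawn from this paragraph, not an interface defined by the patent.

    from dataclasses import dataclass
    from enum import Enum


    class OutputMode(Enum):
        MIPI_CSI2_TO_LVDS = "mipi_csi2_to_lvds"
        PARALLEL_DVP_TO_LVDS = "parallel_dvp_to_lvds"


    @dataclass
    class InjectionConfig:
        """Output configuration of the video injection unit (illustrative only)."""
        mode: OutputMode = OutputMode.MIPI_CSI2_TO_LVDS  # default mode in this embodiment
        num_serializers: int = 3    # e.g. serializers 122a, 122b, 122c arranged in parallel
        lvds_output: bool = True    # the final output to the perception system is LVDS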
Continuing with Fig. 1, a comparative analysis unit 130 is also provided in the performance test system 10. The comparative analysis unit 130 may be coupled to the output of the visual perception system 90 and to the memory module 110, respectively. The comparative analysis unit 130 can compare the perception results 91 output by the visual perception system 90 with the original annotation information 101 in the video data source 100 so as to obtain the performance evaluation result of the visual perception system 90, where the perception results 91 are those output by the visual perception system 90 after performing image perception processing on the video data source 100 and may carry timestamps so that, for example, the perception results corresponding to image frames at different points in time can be distinguished.
In one embodiment, the comparative analysis unit 130 may include a result conversion module 131, a comparison module 132 and a statistical module 133.
The result conversion module 131 can convert the perception results 91 into a standardized form according to the form of the corresponding original annotation information 101, so as to obtain standardized perception results. Since the perception results 91 output by the visual perception system 90 may be diverse in form, it may be difficult to obtain a quantitative evaluation result when they are compared with the relatively standard original annotation information 101. By having the result conversion module 131 output standardized perception results, the subsequent comparison process becomes simple.
Taking the target form and target-attribute form of the original annotation information 101 as an example, the perception results 91 can likewise be converted by target category and according to the corresponding target-attribute form, so that perception results 91 of diverse forms become standardized in form. Illustratively, for the specific information of a traffic light, if the perception results 91 also contain this information, it is converted into the corresponding form of expression used in the original annotation information 101.
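A minimal sketch of what such standardization might look like, reusing the hypothetical TargetType and TargetAnnotation types from the earlier sketch, is given below; the raw field names ("class", "dist", "rel_speed" and so on) are assumptions about one particular perception system's native output, not a format defined by the patent.

    def standardize_detection(raw: dict) -> TargetAnnotation:
        """Convert one raw perception output item into the annotation-aligned form.

        `raw` is a hypothetical example of a perception system's native output,
        e.g. {"class": "pedestrian", "dist": 12.3, "rel_speed": -1.4}.
        """
        return TargetAnnotation(
            target_type=TargetType(raw["class"]),
            exists=True,
            size=raw.get("size"),
            distance_m=raw.get("dist"),
            relative_speed_mps=raw.get("rel_speed"),
            lane_poly_coeffs=raw.get("lane_coeffs"),
            detail=raw.get("detail"),  # e.g. traffic-light state, in the annotation's wording
        )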
The comparison module 132 can compare the standardized perception results with the original annotation information 101 to obtain corresponding comparison results. For example, a perceived target is compared with a target in the original annotation information 101 to judge whether it is accurate, and a perceived target attribute is compared with the corresponding target attribute in the original annotation information 101 to judge whether it is accurate. In the comparison process, the standardized perception results corresponding to, for example, a certain timestamp can be compared with the original annotation information 101 corresponding to the image frame at the corresponding point in time in the video data source 100, so that comparison results can be obtained for each subdivided time period, for example.
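The timestamp-aligned, frame-by-frame comparison could then be sketched as follows, again using the hypothetical types above; counting a perceived target as correct when an annotated target of the same type is still unmatched is a deliberately simplified stand-in for whatever matching criterion a real implementation would use.

    from typing import List, Tuple


    def compare_frame(perceived: List[TargetAnnotation],
                      annotated: FrameAnnotation) -> Tuple[int, int, int]:
        """Compare standardized perception results against one frame's annotations.

        Returns (correct, num_perceived, num_annotated). A perceived target counts
        as correct if an annotated target of the same type is still unmatched -
        a simplified matching rule used only for illustration.
        """
        unmatched = list(annotated.targets)
        correct = 0
        for det in perceived:
            match = next((t for t in unmatched if t.target_type == det.target_type), None)
            if match is not None:
                unmatched.remove(match)
                correct += 1
        return correct, len(perceived), len(annotated.targets)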
The statistical module 133 can quantitatively aggregate the comparison results to obtain the quantitative evaluation result of the visual perception system 90. The statistics computed by the statistical module 133 may include, but are not limited to, one or more of the following: target recognition output capability, overall target recognition accuracy, overall target recognition recall, per-class target recognition accuracy, per-class target attribute recognition accuracy, and so on.
Here, target recognition output capability reflects whether the system is capable of outputting targets and/or target attributes at all, regardless of whether the output targets and/or target attributes are accurate. Overall target recognition accuracy can be calculated by dividing the number of accurately perceived targets by the number of perceived targets. Overall target recognition recall can be calculated by dividing the number of correctly identified targets by the number of all targets in the original annotation information 101. Per-class target recognition accuracy can be calculated by dividing the number of accurately perceived targets of a certain class (e.g. pedestrians) by the number of targets of that class in the original annotation information 101; examples are the recognition accuracy of traffic lights and the recognition accuracy of traffic signs. Per-class target attribute recognition accuracy can be calculated by dividing the number of accurately perceived attribute values of a certain class of target attribute (e.g. relative velocity) by the number of such attribute values in the original annotation information 101, and includes, for example, the accuracy of relative velocity and the accuracy of lane line polynomials.
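The overall ratios described above can be restated directly as code; the sketch below simply aggregates the per-frame counts from the previous sketch into overall accuracy and recall (accurately perceived / perceived, correctly identified / annotated) and is an illustration of the arithmetic, not the statistical module itself.

    def aggregate_metrics(per_frame: List[Tuple[int, int, int]]) -> dict:
        """Aggregate (correct, perceived, annotated) counts into overall statistics."""
        correct = sum(c for c, _, _ in per_frame)
        perceived = sum(p for _, p, _ in per_frame)
        annotated = sum(a for _, _, a in per_frame)
        return {
            # accurately perceived targets / perceived targets
            "overall_accuracy": correct / perceived if perceived else 0.0,
            # correctly identified targets / all annotated targets
            "overall_recall": correct / annotated if annotated else 0.0,
        }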
It will be understood that, as required, the statistics may also be added to or changed correspondingly when the targets and target attributes are added to or changed.
In this way, the comparative analysis unit 130 can conveniently obtain a quantitative performance evaluation result rather than merely a qualitative one, making the performance evaluation of the visual perception system 90 more accurate, and also facilitating comparisons between the performance evaluation results of different visual perception systems 90, or between the performance evaluation results of the same visual perception system 90 before and after a change or adjustment (e.g. an adjustment of the vision algorithm). This helps, for example, OEMs and designers/developers of complete automated driving solutions to evaluate the performance of the visual perception system 90 at the system level, to analyze and benchmark different visual perception solutions, and to consider multi-sensor fusion solutions; for developers designing visual perception solutions, it helps to reveal the performance of the visual perception system 90 in extreme cases or boundary conditions and to make targeted improvements.
Continuing with Fig. 1, the performance test system 10 further includes a display unit 140, which can be used to display the performance evaluation result, for example the quantitative performance evaluation result. Of course, the display unit 140 may also be coupled to the memory module 110, and the display unit 140 may also be used to display the video data source 100 or the original annotation information 101 of the video data source 100.
The performance test system 10 may also specifically be provided with a corresponding output interface through which the performance evaluation result can be output.
It should be noted that the memory module 110, the video injection unit 120 and the comparative analysis unit 130 of the performance test system 10 are not limited to being implemented in the same device; they may, for example, each be implemented in different computer devices. For instance, the comparative analysis unit 130 may be implemented by one or more servers, which can simultaneously perform comparative analysis on perception results 91 from a plurality of visual perception systems 90.
Because the performance test system 10 of the above embodiments uses a video data source 100 carrying original annotation information 101 for performance testing and evaluation, and uses video injection, the visual perception system 90 no longer needs to be mounted on a vehicle for actual road tests; offline testing becomes possible, different visual perception systems can be tested offline, and the problem of offline testing and evaluation of different visual perception systems is solved. Moreover, performance tests of different visual perception systems 90, or repeated performance tests of the same visual perception system 90, can be carried out on the basis of the same video data source 100, so that the performance evaluation results obtained are highly consistent and can easily be compared and assessed under a unified standard, especially once quantitative performance evaluation results are available.
Fig. 2 shows a flowchart of a performance test method for a visual perception system according to an embodiment of the invention. The performance test method of this embodiment of the invention is described below with reference to Fig. 1 and Fig. 2.
In step S210, a stored video data source 100 carrying original annotation information 101 is provided. For example, the video data source 100 can be obtained from the memory module 110.
In one embodiment, the original annotation information 101 in the video data source 100 is target-level annotation information comprising targets and their target attributes. Taking an automated driving application as an example, the targets may include one or more of: vehicles, pedestrians (including, for example, bicycles, electric bicycles and motorcycles), traffic lights, traffic signs and lane lines; the target attributes include one or more of: whether a target exists, the type of the target, the target size, the target distance, the relative velocity of the target with respect to the vehicle, the polynomial fitting coefficients of a lane line, the specific information of a traffic light, and the specific information of a traffic sign. It will be understood that when the visual perception system 90 is used in a different system, the targets and/or target attributes of the original annotation information 101 of the video data source 100 may change correspondingly.
In step S220, the video data source 100 is injected offline into the visual perception system 90 to be tested. In one embodiment, step S220 includes converting the video data source 100 into a format that matches the input of the visual perception system 90 to be tested, which may specifically be performed by the video format conversion module 121. Preferably, output is by default in a MIPI CSI-2-to-LVDS manner.
In step S230, the visual perception system 90 performs image perception processing on the video data source 100, and the visual perception system 90 thus outputs perception results 91 for the video data source 100.
In step S240, the perception results 91 of the visual perception system 90 are compared with the original annotation information 101 in the video data source 100 to obtain the performance evaluation result of the visual perception system 90. Step S240 may be carried out by the comparative analysis unit 130 shown in Fig. 1.
In one embodiment, step S240 includes the following sub-steps:
converting the perception results 91 into a standardized form according to the form of the corresponding original annotation information 101, so as to obtain standardized perception results;
comparing the standardized perception results with the original annotation information 101 to obtain corresponding comparison results; and
quantitatively aggregating the comparison results to obtain the quantitative performance evaluation result of the visual perception system 90.
These sub-steps may be performed, respectively, by the result conversion module 131, the comparison module 132 and the statistical module 133 in the comparative analysis unit 130.
At this point, the performance evaluation result of the visual perception system 90, for example the quantitative performance evaluation result, can be output or displayed, and the performance testing of the visual perception system 90 is essentially complete.
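Tying the pieces together, steps S210 to S240 could be exercised offline roughly as in the sketch below, which reuses the hypothetical GroundTruth, compare_frame and aggregate_metrics names from the earlier sketches; the perception outputs are assumed to have already been standardized and keyed by timestamp, and nothing here is prescribed by the patent itself.

    def run_offline_performance_test(ground_truth: GroundTruth,
                                     perception_outputs: Dict[float, List[TargetAnnotation]]) -> dict:
        """Steps S210-S240 in miniature: compare timestamped, standardized perception
        results against the original annotation information and aggregate the counts."""
        per_frame = []
        for ts, frame_gt in ground_truth.items():             # S210: annotated video data source
            perceived = perception_outputs.get(ts, [])        # S220/S230: results for the injected video
            per_frame.append(compare_frame(perceived, frame_gt))   # S240: per-frame comparison
        return aggregate_metrics(per_frame)                   # S240: quantitative statistics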
It should be noted that, in some alternative implementations, the functions/operations shown in the blocks of Fig. 2 may occur out of the order shown in the flowchart. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending on the functions/operations involved.
It should be noted that some of the blocks shown in the drawings are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The above examples mainly illustrate the performance test system for a visual perception system and the performance test method thereof according to the disclosure. Although only some of the embodiments of the present invention have been described, those of ordinary skill in the art will appreciate that the present invention may be embodied in many other forms without departing from its spirit and scope. The examples and embodiments shown are therefore to be regarded as illustrative rather than restrictive, and the present invention may cover various modifications and substitutions without departing from the spirit and scope of the present invention as defined by the appended claims.

Claims (15)

1. A performance test system for a visual perception system, characterized by comprising:
a memory module for providing a video data source carrying original annotation information;
a video injection unit for injecting the video data source offline into the visual perception system to be tested; and
a comparative analysis unit for comparing the perception results of the visual perception system with the original annotation information in the video data source to obtain a performance evaluation result for the visual perception system, wherein the perception results are the results output by the visual perception system after performing image perception processing on the video data source.
2. The performance test system as claimed in claim 1, characterized in that the original annotation information is annotation information comprising targets and their target attributes.
3. The performance test system as claimed in claim 2, characterized in that the targets include one or more of: vehicles, pedestrians, traffic lights, traffic signs, and lane lines;
and the target attributes include one or more of: whether a target exists, the type of the target, the target size, the target distance, the relative velocity of the target with respect to the vehicle, the polynomial fitting coefficients of a lane line, the specific information of a traffic light, and the specific information of a traffic sign.
4. The performance test system as claimed in claim 1 or 2, characterized in that the comparative analysis unit comprises:
a result conversion module for converting the perception results into a standardized form according to the form of the corresponding original annotation information, so as to obtain standardized perception results;
a comparison module for comparing the standardized perception results with the original annotation information to obtain corresponding comparison results; and
a statistical module for quantitatively aggregating the comparison results to obtain a quantitative performance evaluation result for the visual perception system.
5. The performance test system as claimed in claim 4, characterized in that the statistics computed by the statistical module include one or more of the following:
target recognition output capability,
overall target recognition accuracy,
overall target recognition recall,
per-class target recognition accuracy,
per-class target attribute recognition accuracy.
6. The performance test system as claimed in claim 1, characterized in that the video injection unit comprises:
a video format conversion module for converting the video data source into a format that matches the input of the visual perception system to be tested.
7. The performance test system as claimed in claim 6, characterized in that the output interface of the video injection unit is configured to output the video data source, by default, in a MIPI CSI-2-to-LVDS or parallel-DVP-to-LVDS manner.
8. The performance test system as claimed in claim 1, characterized by further comprising:
a display unit for displaying the performance evaluation result.
9. A performance test method for a visual perception system, characterized by comprising the steps of:
providing a stored video data source carrying original annotation information;
injecting the video data source offline into the visual perception system to be tested; and
comparing the perception results of the visual perception system with the original annotation information in the video data source to obtain a performance evaluation result for the visual perception system, wherein the perception results are the results output by the visual perception system after performing image perception processing on the video data source.
10. The performance test method as claimed in claim 9, characterized in that the original annotation information is annotation information comprising targets and their target attributes.
11. The performance test method as claimed in claim 10, characterized in that the targets include one or more of: vehicles, pedestrians, traffic lights, traffic signs, and lane lines;
and the target attributes include one or more of: whether a target exists, the type of the target, the target size, the target distance, the relative velocity of the target with respect to the vehicle, the polynomial fitting coefficients of a lane line, the specific information of a traffic light, and the specific information of a traffic sign.
12. The performance test method as claimed in claim 9, characterized in that the comparative analysis step comprises:
converting the perception results into a standardized form according to the form of the corresponding original annotation information, so as to obtain standardized perception results;
comparing the standardized perception results with the original annotation information; and
quantitatively aggregating the comparison results to obtain a quantitative performance evaluation result for the visual perception system.
13. The performance test method as claimed in claim 12, characterized in that the statistics computed include one or more of the following:
target recognition output capability,
overall target recognition accuracy,
overall target recognition recall,
per-class target recognition accuracy,
per-class target attribute recognition accuracy.
14. The performance test method as claimed in claim 9, characterized in that the injection step comprises:
converting the video data source into a format that matches the input of the visual perception system to be tested.
15. The performance test method as claimed in claim 14, characterized in that, in the injection step, the video data source is output, by default, in a MIPI CSI-2-to-LVDS or parallel-DVP-to-LVDS manner.
CN201811285542.2A 2018-10-31 2018-10-31 Performance test system and performance test method for a visual perception system Pending CN109522825A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811285542.2A CN109522825A (en) Performance test system and performance test method for a visual perception system


Publications (1)

Publication Number Publication Date
CN109522825A true CN109522825A (en) 2019-03-26

Family

ID=65772964

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811285542.2A Pending CN109522825A (en) Performance test system and performance test method for a visual perception system

Country Status (1)

Country Link
CN (1) CN109522825A (en)



Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008165740A (en) * 2006-11-29 2008-07-17 Mitsubishi Electric Research Laboratories Inc Computer implemented method for measuring performance of surveillance system
CN103810854A (en) * 2014-03-03 2014-05-21 北京工业大学 Intelligent traffic algorithm evaluation method based on manual calibration
CN106127114A (en) * 2016-06-16 2016-11-16 北京数智源科技股份有限公司 Intelligent video analysis method
CN106488225A (en) * 2016-10-26 2017-03-08 昆山软龙格自动化技术有限公司 Many frame buffers are double to take the photograph with survey test card
CN106926800A (en) * 2017-03-28 2017-07-07 重庆大学 The vehicle-mounted visually-perceptible system of multi-cam adaptation
CN107451526A (en) * 2017-06-09 2017-12-08 蔚来汽车有限公司 The structure of map and its application
CN107978165A (en) * 2017-12-12 2018-05-01 南京理工大学 Intersection identifier marking and signal lamp Intellisense method based on computer vision
CN108593310A (en) * 2018-06-14 2018-09-28 驭势科技(北京)有限公司 Off-line test system and method

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110287832A (en) * 2019-06-13 2019-09-27 北京百度网讯科技有限公司 High-Speed Automatic Driving Scene barrier perception evaluating method and device
CN113128315A (en) * 2020-01-15 2021-07-16 宝马股份公司 Sensor model performance evaluation method, device, equipment and storage medium
CN111597993A (en) * 2020-05-15 2020-08-28 北京百度网讯科技有限公司 Data processing method and device
CN111597993B (en) * 2020-05-15 2023-09-05 北京百度网讯科技有限公司 Data processing method and device
CN111931812A (en) * 2020-07-01 2020-11-13 广州视源电子科技股份有限公司 Visual algorithm testing method and device, storage medium and electronic equipment
CN112230228A (en) * 2020-09-30 2021-01-15 中汽院智能网联科技有限公司 Intelligent automobile vision sensor testing method based on field testing technology
CN112230228B (en) * 2020-09-30 2024-05-07 中汽院智能网联科技有限公司 Intelligent automobile vision sensor testing method based on field testing technology
CN112816954A (en) * 2021-02-09 2021-05-18 中国信息通信研究院 Road side perception system evaluation method and system based on truth value
CN112816954B (en) * 2021-02-09 2024-03-26 中国信息通信研究院 Road side perception system evaluation method and system based on true value
CN113205070A (en) * 2021-05-27 2021-08-03 三一专用汽车有限责任公司 Visual perception algorithm optimization method and system
CN113205070B (en) * 2021-05-27 2024-02-20 三一专用汽车有限责任公司 Visual perception algorithm optimization method and system

Similar Documents

Publication Publication Date Title
CN109522825A (en) Performance test system and performance test method for a visual perception system
CN110910665B (en) Signal lamp control method and device and computer equipment
CN108091141B (en) License plate recognition system
Baumann et al. A review and comparison of measures for automatic video surveillance systems
CN105590099B (en) A kind of more people's Activity recognition methods based on improvement convolutional neural networks
CN110225367A (en) It has been shown that, recognition methods and the device of object information in a kind of video
CN110633610A (en) Student state detection algorithm based on YOLO
CN107123122A (en) Non-reference picture quality appraisement method and device
KR101834838B1 (en) System and method for providing traffic information using image processing
CN109740609A (en) A kind of gauge detection method and device
CN112885130B (en) Method and device for presenting road information
CN110390362A (en) It is a kind of for detecting the method and unmanned vehicle of unmanned vehicle failure
CN108830184A (en) Black eye recognition methods and device
CN106911591A (en) The sorting technique and system of network traffics
CN109616106A (en) Vehicle-mounted control screen voice recognition process testing method, electronic equipment and system
CN109740654A (en) A kind of tongue body automatic testing method based on deep learning
CN105684062B (en) For the method and apparatus for the event message for providing the event on proximate vehicle
CN112654999B (en) Method and device for determining labeling information
CN110823596B (en) Test method and device, electronic equipment and computer readable storage medium
CN112580457A (en) Vehicle video processing method and device, computer equipment and storage medium
CN112418264A (en) Training method and device for detection model, target detection method and device and medium
CN105357516B (en) Inter-vehicle information system testboard bay image comparison method and system
CN111783635A (en) Image annotation method, device, equipment and storage medium
CN116935134A (en) Point cloud data labeling method, point cloud data labeling system, terminal and storage medium
CN111540010A (en) Road monitoring method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200728

Address after: Susong Road West and Shenzhen Road North, Hefei Economic and Technological Development Zone, Anhui Province

Applicant after: Weilai (Anhui) Holding Co.,Ltd.

Address before: 30 Floor of Yihe Building, No. 1 Kangle Plaza, Central, Hong Kong, China

Applicant before: NIO NEXTEV Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20190326