CN113850929A - Display method, device, equipment and medium for processing marked data stream - Google Patents


Info

Publication number
CN113850929A
CN113850929A (application CN202111113433.4A; granted as CN113850929B)
Authority
CN
China
Prior art keywords
data
prediction
labeled
target
data stream
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111113433.4A
Other languages
Chinese (zh)
Other versions
CN113850929B (en)
Inventor
聂鑫
杨逸飞
陈飞
霍达
韩旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Yuji Technology Co ltd
Original Assignee
Guangzhou Weride Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Weride Technology Co Ltd filed Critical Guangzhou Weride Technology Co Ltd
Priority to CN202111113433.4A priority Critical patent/CN113850929B/en
Publication of CN113850929A publication Critical patent/CN113850929A/en
Application granted granted Critical
Publication of CN113850929B publication Critical patent/CN113850929B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 5/00 Registering or indicating the working of vehicles
    • G07C 5/08 Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • G07C 5/0841 Registering performance data
    • G07C 5/085 Registering performance data using electronic data carriers
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01D MEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
    • G01D 21/00 Measuring or testing not otherwise provided for
    • G01D 21/02 Measuring two or more variables by means not covered by a single other subclass
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 5/00 Registering or indicating the working of vehicles
    • G07C 5/08 Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • G07C 5/0841 Registering performance data
    • G07C 5/085 Registering performance data using electronic data carriers
    • G07C 5/0866 Registering performance data using electronic data carriers, the electronic data carrier being a digital video recorder in combination with a video camera
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q SELECTING
    • H04Q 9/00 Arrangements in telecontrol or telemetry systems for selectively calling a substation from a main station, in which substation desired apparatus is selected for applying a control signal thereto or for obtaining measured values therefrom

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a display method, apparatus, device and medium for annotated data stream processing, relating to unmanned vehicles. The method comprises the following steps: acquiring a data stream to be annotated that is collected by an unmanned vehicle; inputting the data stream to be annotated into a preset annotation prediction model, and sequentially generating labeled prediction data corresponding to each frame of data to be annotated in the stream; when the labeled prediction data meet a preset construction condition, constructing an initial input table from the generated labeled prediction data and creating a slicing window; performing an aggregation operation and identifier adjustment on each row of labeled prediction data in the slicing window to generate a dynamic result table; and performing a join operation on the initial input table and the dynamic result table to generate and display a target output table. Past, present and future data thus coexist in the same slicing window, so that, while the real-time performance of data stream processing is preserved, jumps in the labeled prediction data before and after the current moment can be analyzed more efficiently.

Description

Display method, device, equipment and medium for processing marked data stream
Technical Field
The present invention relates to the field of data stream processing technologies, and in particular, to a display method, device, apparatus, and medium for processing a labeled data stream.
Background
With the rapid development of information processing, internet and communication technologies, autonomous driving is becoming a major trend in future life. However, a vehicle generates massive traffic data in real time while driving autonomously, and analyzing these data in time is an urgent problem for guaranteeing driving reliability and safety.
In the prior art, the Flink computing framework has been proposed, which combines batch processing and stream processing. Its core is a streaming data engine that provides data distribution and parallel computation, and it analyzes the data within a given period by providing slicing windows over the data stream.
However, the existing data stream processing only supports creating a slicing window over the current data row and a number of past data rows. Since multiple recognition models run during actual autonomous driving, analyzing the recognition results both before and after the current moment forces the prior art to fall back to batch processing, which cannot guarantee the real-time performance of the data stream processing process.
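The windowing limitation described here can be seen in a minimal plain-Python sketch (not Flink code; the function name and window size are illustrative): a trailing window can aggregate the current row together with past rows, but has no access to rows that arrive later.

```python
from collections import deque

def trailing_average(stream, size=3):
    """Aggregate over the current row and up to `size - 1` PAST rows only;
    future rows are unavailable at the moment each result is emitted."""
    window = deque(maxlen=size)   # evicts the oldest row automatically
    results = []
    for value in stream:
        window.append(value)
        results.append(sum(window) / len(window))
    return results
```

Analyzing whether a value jumps relative to rows both before and after it therefore requires either waiting for a whole batch, or a mechanism like the identifier adjustment the invention describes.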
Disclosure of Invention
The invention provides a display method, apparatus, device and medium for annotated data stream processing, which solve the technical problem that, in the prior art, data before and after the current moment can only be analyzed in batch mode, so that the real-time performance of the data stream processing process cannot be guaranteed.
A first aspect of the invention provides a display method for annotated data stream processing, comprising the following steps:
acquiring a data stream to be marked;
inputting the data stream to be labeled into a preset labeling prediction model, and sequentially generating labeling prediction data corresponding to each frame of data to be labeled in the data stream to be labeled;
when the labeled prediction data meet a preset construction condition, constructing an initial input table from the generated labeled prediction data and creating a slicing window;
performing an aggregation operation and identifier adjustment on each row of labeled prediction data in the slicing window to generate a dynamic result table;
and performing a join operation on the initial input table and the dynamic result table to generate and display a target output table.
Optionally, the method is applied to a processor of an unmanned vehicle, the processor being communicatively connected to various sensors arranged in the unmanned vehicle, and the step of acquiring the data stream to be annotated includes:
acquiring environmental data of the environment where the unmanned vehicle is located in real time through the various sensors;
and receiving the environmental data according to the acquisition time sequence of the environmental data and sequencing to obtain the data stream to be marked.
Optionally, the step of inputting the data stream to be annotated into a preset annotation prediction model, and sequentially generating annotation prediction data corresponding to each frame of data to be annotated in the data stream to be annotated includes:
inputting the data stream to be labeled into a preset labeling prediction model;
sequentially carrying out object identification on each frame of data to be marked in the data stream to be marked through the marking prediction model, and determining predicted objects corresponding to each frame of data to be marked respectively;
generating feature labeling information corresponding to each predicted object according to the object type of each predicted object through the labeling prediction model;
and sequencing the characteristic marking information, and constructing marking prediction data corresponding to each frame of the data to be marked respectively.
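As a hedged illustration of these sub-steps, the sketch below packs per-frame recognition results into rows of labeled prediction data. `StubModel` and all field names are assumptions made for the example, not the patent's actual model interface.

```python
class StubModel:
    """Stand-in for the annotation prediction model (assumption)."""
    def recognize(self, frame):
        # A real model would detect objects; here we return one fake object.
        return [{"type": "car", "box": frame}]

def annotate_stream(frames, model):
    """For each frame: recognize predicted objects, generate feature labels
    by object type, order them, and pack them into one row per frame."""
    rows = []
    for frame_id, frame in enumerate(frames):
        objects = model.recognize(frame)
        labels = [{"type": o["type"], "box": o["box"]} for o in objects]
        labels.sort(key=lambda l: l["type"])   # ordered feature label info
        rows.append({"frame": frame_id, "labels": labels})
    return rows
```

Each returned row corresponds to one line of labeled prediction data in the initial input table.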
Optionally, when the labeled prediction data meets a preset construction condition, the step of constructing an initial input table and creating a fragment window by using the generated labeled prediction data includes:
when the number of generated rows of labeled prediction data reaches a preset number threshold, constructing an initial input table from the generated labeled prediction data;
or, when the generation time of the labeled prediction data reaches a preset time threshold, constructing an initial input table from the generated labeled prediction data;
and creating a slicing window by adopting a plurality of rows of labeled prediction data in the initial input table.
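The two alternative construction conditions (row count reached, or time elapsed) can be sketched as a small buffer that flushes into an initial input table. The class name, thresholds and injectable clock are illustrative choices, not from the patent.

```python
import time

class InputTableBuilder:
    """Buffer labeled prediction rows; flush them as an initial input table
    when either the row-count or the elapsed-time threshold is reached."""
    def __init__(self, max_rows=5, max_seconds=1.0, clock=time.monotonic):
        self.max_rows, self.max_seconds, self.clock = max_rows, max_seconds, clock
        self.rows, self.started = [], None

    def add(self, row):
        if self.started is None:
            self.started = self.clock()      # start timing at the first row
        self.rows.append(row)
        if (len(self.rows) >= self.max_rows
                or self.clock() - self.started >= self.max_seconds):
            table, self.rows, self.started = self.rows, [], None
            return table    # initial input table -> create a slicing window
        return None         # keep buffering
```

A returned (non-`None`) table is then used to create the slicing window over its rows.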
Optionally, the labeled prediction data is provided with an initial data identifier; the step of performing aggregation operation and identification adjustment on the prediction data labeled to each row in the slicing window to generate a dynamic result table includes:
adopting each line of labeled prediction data in the slicing window to execute aggregation operation and generate an operation result corresponding to each line of labeled prediction data;
adjusting each initial data identifier in the slicing window respectively to obtain a target data identifier corresponding to each row of the labeled prediction data;
and establishing association by adopting the target data identifier and the operation result to generate a dynamic result table.
Optionally, the step of adjusting each initial data identifier in the slicing window to obtain a target data identifier corresponding to each row of the labeled prediction data includes:
selecting a target identifier from the initial data identifiers in the slicing window;
updating the last initial data identifier in the fragment window by adopting the target identifier;
increasing the target identifier by a preset value;
selecting an initial data identifier to be updated from the non-updated initial data identifiers;
updating the initial data identification to be updated by adopting the increased target identification;
skipping back to the step of increasing the target identifier by the preset value until all the initial data identifiers have been updated;
and determining all initial data identifications at the current moment as target data identifications corresponding to the labeled prediction data of each row.
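One literal reading of this identifier-adjustment loop, sketched in plain Python. The choice of the first identifier as the target and the back-to-front update order are assumptions made for illustration; the patent leaves both selections open.

```python
def adjust_identifiers(initial_ids, step=1):
    """Interpretation of the claimed loop (an assumption, not the patent's
    code): pick a target id, write it into the LAST row of the window, then
    keep incrementing it by `step` and writing it into the remaining
    not-yet-updated rows, back to front."""
    ids = list(initial_ids)
    target = ids[0]          # assumed: target chosen as the first identifier
    ids[-1] = target         # update the last initial data identifier first
    for pos in range(len(ids) - 2, -1, -1):   # remaining rows, back to front
        target += step
        ids[pos] = target
    return ids               # target data identifiers for each row
```

Under this reading, the newest rows end up carrying the earliest identifiers, so a later join on identifiers aligns "future" rows with the current row inside one slicing window.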
Optionally, the step of performing a correlation operation by using the initial input table and the dynamic result table, and generating and displaying a target output table includes:
traversing the initial input table and the dynamic result table;
sequentially updating the initial data identification by adopting the target data identification to obtain intermediate labeling prediction data;
sequentially associating the intermediate annotation prediction data with the operation result to obtain target annotation prediction data;
and constructing and displaying a target output table by adopting all the target labeling prediction data.
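A minimal sketch of the join step, assuming dictionary rows keyed by hypothetical `initial_id` / `target_id` fields: an inner join on [initial data identifier == target data identifier] produces the target output table.

```python
def join_tables(initial_rows, result_rows):
    """Inner-join the initial input table and the dynamic result table on
    matching identifiers, yielding rows of the target output table."""
    by_target = {r["target_id"]: r["result"] for r in result_rows}
    out = []
    for row in initial_rows:
        if row["initial_id"] in by_target:          # join condition holds
            out.append({**row, "result": by_target[row["initial_id"]]})
    return out
```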
Optionally, the method further comprises:
comparing target marking prediction data of each row in the target output table, and determining the variation amplitude between the target marking prediction data of each row;
if the variation amplitude is larger than a preset variation threshold value, judging that the labeling prediction model is in an unstable state;
and if the change amplitude is smaller than or equal to the change threshold, judging that the labeling prediction model is in a stable state.
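A hedged sketch of this stability judgment, assuming the compared rows reduce to single numeric values and that "variation amplitude" means the absolute row-to-row difference.

```python
def model_is_stable(rows, threshold):
    """The model is judged stable when no consecutive pair of target labeled
    prediction rows differs by more than the preset change threshold."""
    deltas = [abs(b - a) for a, b in zip(rows, rows[1:])]
    return all(d <= threshold for d in deltas)
```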
Optionally, the method further comprises:
if the label prediction model is judged to be in an unstable state, dividing all the target label prediction data into a training set and a test set according to a preset division ratio;
training the label prediction model by adopting the training set to obtain an updated label prediction model;
sequentially inputting the target labeled prediction data in the test set into the updated annotation prediction model to obtain a plurality of updated output results;
comparing the updated output results to determine the update variation amplitude between them;
if the update variation amplitude is greater than the change threshold, determining the updated annotation prediction model as the new annotation prediction model, and skipping back to the step of training the annotation prediction model with the training set to obtain an updated annotation prediction model;
and if the update variation amplitude is less than or equal to the change threshold, judging that training of the updated annotation prediction model is complete, and determining it as the new annotation prediction model.
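The retraining loop can be sketched as follows. `predict` and `train` are hypothetical callables standing in for the annotation prediction model's interfaces, and the split ratio, threshold and round cap are placeholder values.

```python
def retrain_until_stable(model, rows, predict, train,
                         split_ratio=0.8, threshold=0.5, max_rounds=10):
    """Split target rows into train/test sets, retrain, and repeat while the
    test outputs still jump by more than the change threshold."""
    cut = max(1, int(len(rows) * split_ratio))
    train_set, test_set = rows[:cut], rows[cut:]
    for _ in range(max_rounds):
        model = train(model, train_set)             # updated model
        outputs = [predict(model, x) for x in test_set]
        deltas = [abs(b - a) for a, b in zip(outputs, outputs[1:])]
        if all(d <= threshold for d in deltas):
            break        # variation amplitude within threshold: training done
    return model
```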
The second aspect of the present invention provides a display device for processing an annotation data stream, comprising:
the data flow acquisition module is used for acquiring the data flow to be marked;
the annotation prediction module is used for inputting the data stream to be annotated into a preset annotation prediction model and sequentially generating annotation prediction data corresponding to each frame of data to be annotated in the data stream to be annotated;
the initial input table building module is used for building an initial input table and creating a fragment window by adopting the generated label prediction data when the label prediction data meet a preset building condition;
the window data processing module is used for performing aggregation operation and identification adjustment on the labeled prediction data of each row in the slicing window to generate a dynamic result table;
and the association display module is used for executing association operation by adopting the initial input table and the dynamic result table, generating a target output table and displaying the target output table.
A third aspect of the present invention provides an electronic device, comprising a memory and a processor, wherein the memory stores a computer program, and the computer program, when executed by the processor, causes the processor to perform the steps of the presentation method for annotating data stream processing according to the first aspect of the present invention.
A fourth aspect of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed, implements the presentation method of annotation data stream processing according to the first aspect of the present invention.
According to the technical scheme, the invention has the following advantages:
the method comprises the steps that a display device for processing a marked data stream is used for obtaining a data stream to be marked, which is acquired by an unmanned vehicle in the actual running process, the data stream to be marked is input into a preset marking prediction model according to frames, and marking prediction data corresponding to each frame of data to be marked are sequentially generated; when the generated labeled prediction data meet preset construction conditions, constructing an initial input table and creating a fragmentation window by using the generated labeled prediction data, performing required aggregation operation on labeled prediction data of each row in the fragmentation window to obtain an operation result, performing identification adjustment on initial data identifications originally carried by the labeled prediction data of each row to obtain target data identifications, and sequencing the operation result and the target data identifications according to the original sequence of the labeled prediction data to generate a dynamic result table; and then, executing association operation by adopting the initial input table and the dynamic result table according to a preset association condition, thereby generating and displaying a target output table. 
The method and apparatus solve the technical problem that the prior art can only analyze data before and after the current moment in batch mode and therefore cannot guarantee the real-time performance of stream processing. By adjusting the initial data identifiers of the labeled prediction data, the position of the current moment is shifted so that past, present and future data exist in the same slicing window at the same time; on the basis of guaranteeing the real-time performance of data stream processing, jumps in the labeled prediction data before and after the current moment can be analyzed more efficiently.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 is a flowchart illustrating a method for displaying annotated data stream processing according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating a method for displaying annotated data stream processing according to a second embodiment of the present invention;
fig. 3 is a block diagram of a display apparatus for processing an annotation data stream according to a third embodiment of the present invention.
Detailed Description
The embodiments of the invention provide a display method, apparatus, device and medium for annotated data stream processing, to solve the technical problem that the prior art can only analyze data before and after the current moment in batch mode and thus cannot guarantee the real-time performance of the data stream processing process.
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the embodiments described below are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart illustrating a method for displaying annotated data stream processing according to an embodiment of the present invention, where the method for displaying annotated data stream processing in this embodiment may be implemented by a display device for annotated data stream processing, and the display device for annotated data stream processing may be implemented by software and/or hardware, and a specific implementation process of the method is further described below.
The invention provides a display method for processing a marked data stream, which comprises the following steps:
step 101, acquiring a data stream to be marked;
the data stream to be annotated in the embodiment of the present invention refers to a data stream acquired by a camera device or a sensing device according to a time sequence of an environment where the device is located, where the data stream includes multiple frames of data to be annotated, and each frame of data to be annotated includes at least one object, so as to provide a data basis for subsequent object identification and object annotation of the environment where the device is located.
It should be noted that the data stream to be annotated may be a video stream, an image stream, a speed stream, or other forms of streaming data, and the data to be annotated may be data such as an image.
In the embodiment of the invention, the data stream to be marked in the current environment is acquired in real time through the camera equipment or the sensing equipment and the like, and the data stream to be marked is uploaded to the display device for processing the marked data stream in real time through the wired network or the wireless network, so that the data acquisition of the data stream to be marked is realized.
Step 102, inputting a data stream to be annotated into a preset annotation prediction model, and sequentially generating annotation prediction data corresponding to each frame of data to be annotated in the data stream to be annotated;
the labeling prediction model in the embodiment of the invention is a model for identifying each target object in each frame of data to be labeled in the data stream to be labeled so as to determine the prediction information of each target object. The prediction information includes, but is not limited to, object location, object class, detectable (unobstructed) polygon range, predicted actual polygon range, object tracking ID, object movement speed, direction, and the like. The model can be a regional convolution neural network R-CNN, a lightweight neural network MobileNet or other object detection models and the like.
After a data stream to be annotated uploaded by the unmanned vehicle is acquired, the data stream to be annotated is input to an annotation prediction model arranged at the local or cloud end, object detection is carried out on each frame of data to be annotated through the annotation prediction model according to the input time sequence of the data stream to be annotated, corresponding annotation prediction is carried out, and annotation prediction data corresponding to each frame of data to be annotated are sequentially generated.
It should be noted that the annotation prediction data may be represented in the form of row data, and each row of annotation prediction data indicates annotation information corresponding to all objects in each frame of data to be annotated.
103, when the label prediction data meet the preset construction conditions, constructing an initial input table by using the generated label prediction data and creating a fragment window;
the preset construction condition refers to a quantity threshold or a time threshold which needs to be met after the generation quantity or the generation time of the labeled prediction data is counted in the generation process of the labeled prediction data.
In the embodiment of the invention, because the generation process of the labeled prediction data is in a streaming processing mode, after part of labeled prediction data is generated, if the generation quantity or the generation time of the labeled prediction data meets the preset construction condition, the generated labeled prediction data is adopted to construct the initial input table so as to provide a data basis for subsequent windowing processing.
104, performing aggregation operation and identification adjustment on each row of labeled prediction data in the slicing window to generate a dynamic result table;
a sharding window refers to a technical means for cutting an ever-growing infinite data set into finite data blocks in a data processing engine that processes the infinite data set, so that the processing engine can perform aggregation operations on the finite data blocks. The aggregation operation may include, but is not limited to, a minimum value, an average value, a sum/difference of squares, a percentile value, and any operation that inputs and outputs a single numerical value based on a mathematical formula, etc. The sharded window includes, but is not limited to, the following types: fixed time windows, sliding time windows, conversation time windows, counting time windows, and the like.
After the initial input table is obtained, a slicing window is created from it, and the aggregation operation is performed on each row of labeled prediction data in the window to determine the operation result within the window. Meanwhile, the original identifier of each row of labeled prediction data can be adjusted so that future data and past data fall in the same slicing window; once the operation results and target data identifiers are obtained, they are stored as the dynamic result table corresponding to the initial input table.
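A minimal sketch of producing a dynamic result table from one slicing window, assuming each row carries a numeric `value` and an already-adjusted identifier; the aggregation (here `min`) and all field names are illustrative assumptions.

```python
def build_dynamic_result_table(window_rows, aggregate=min):
    """Run the aggregation over the whole slicing window and pair the
    operation result with each row's (adjusted) target data identifier."""
    values = [r["value"] for r in window_rows]
    agg = aggregate(values)                      # one result for the window
    return [{"target_id": r["id"], "result": agg} for r in window_rows]
```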
And 105, executing association operation by adopting the initial input table and the dynamic result table, and generating and displaying a target output table.
After the dynamic result table is obtained, a join operation is performed on the initial input table and the dynamic result table to generate and display the target output table.
It should be noted that the association operation is a join operation, i.e., an operation that joins the records of two (or more) tables on a shared attribute. It may include, but is not limited to, the following categories: cross join, natural join, inner join, outer join and self join, where outer joins include left, right and full joins. The dynamic result table stores the target data identifiers, while each row of the initial input table carries an initial data identifier alongside the labeled prediction data; the two identifiers belong to the same attribute. The initial input table and the dynamic result table are therefore associated by setting the join condition to [initial data identifier == target data identifier].
In the embodiment of the invention, a display apparatus for annotated data stream processing acquires the data stream to be annotated that an unmanned vehicle collects during actual driving, inputs it frame by frame into a preset annotation prediction model, and sequentially generates labeled prediction data corresponding to each frame of data to be annotated. When the generated labeled prediction data meet a preset construction condition, an initial input table is constructed from them and a slicing window is created; the required aggregation operation is performed on each row of labeled prediction data in the window to obtain an operation result, the initial data identifiers originally carried by each row are adjusted to obtain target data identifiers, and the operation results and target data identifiers are ordered according to the original order of the labeled prediction data to generate a dynamic result table. A join operation is then performed on the initial input table and the dynamic result table according to a preset join condition, thereby generating and displaying a target output table.
This solves the technical problem that the prior art can only analyze data before and after the current moment in batch mode and therefore cannot guarantee real-time stream processing. By adjusting the initial data identifiers of the labeled prediction data, the position of the current moment is shifted so that past, present and future data exist in the same slicing window at the same time; on the basis of guaranteeing the real-time performance of data stream processing, jumps in the labeled prediction data before and after the current moment can be analyzed more efficiently.
Referring to fig. 2, fig. 2 is a flowchart illustrating a method for processing a markup data stream according to a second embodiment of the present invention.
The invention provides a display method for processing a marked data stream, which comprises the following steps:
step 201, acquiring a data stream to be marked;
the data stream to be annotated in the embodiment of the present invention refers to a data stream acquired by a camera device or a sensing device according to a time sequence of an environment where the device is located, where the data stream includes multiple frames of data to be annotated, and each frame of data to be annotated includes at least one object, so as to provide a data basis for subsequent object identification and object annotation of the environment where the device is located.
Optionally, the method is applied to a processor of an unmanned vehicle, the processor being communicatively connected to various sensors arranged on the unmanned vehicle, and step 201 may include the following sub-steps:
acquiring environmental data of the environment where the unmanned vehicle is located in real time through the various sensors;
and receiving the environmental data according to the acquisition time sequence of the environmental data and sequencing to obtain the data stream to be marked.
An unmanned vehicle is an autonomous vehicle (self-driving automobile), also called a driverless vehicle, computer-driven vehicle, or wheeled mobile robot: an intelligent vehicle that realizes unmanned driving through a computer system. Through the cooperation of artificial intelligence, visual computing, radar, monitoring devices and GPS, the computer can control the motor vehicle without any active human operation.
In the embodiment of the invention, the environmental data of the environment where the unmanned vehicle is currently located are acquired in real time through various sensors arranged on the unmanned vehicle, such as a camera device, an environment sensor, a temperature sensor and a speed sensor. The environmental data are uploaded in real time, over a wired or wireless network and in the order of their acquisition time, to the display device for processing the labeled data stream in the processor of the unmanned vehicle; the device then sorts the environmental data by acquisition time, thereby completing the acquisition of the data stream to be annotated.
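As a minimal sketch (the field names and data structure are assumptions of this illustration, not prescribed by the patent), the ordering of environmental data by acquisition time described above might look like:

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    """One piece of environmental data uploaded by a sensor on the unmanned vehicle."""
    acquired_at: float  # acquisition timestamp in seconds (hypothetical field)
    payload: dict       # raw sensor content

def build_stream_to_annotate(readings):
    """Order readings by acquisition time to form the data stream to be annotated."""
    return sorted(readings, key=lambda r: r.acquired_at)

# Readings may arrive out of order over a wired or wireless network.
incoming = [SensorReading(3.0, {}), SensorReading(1.0, {}), SensorReading(2.0, {})]
stream = build_stream_to_annotate(incoming)
```

A production stream processor would order data incrementally (for example with watermarks) rather than buffering and sorting a finished list; the sketch only shows the ordering criterion.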
Step 202, inputting a data stream to be annotated into a preset annotation prediction model, and sequentially generating annotation prediction data corresponding to each frame of data to be annotated in the data stream to be annotated;
optionally, step 202 may include the following sub-steps:
inputting a data stream to be labeled into a preset labeling prediction model;
sequentially carrying out object identification on each frame of data to be marked in a data stream to be marked through a marking prediction model, and determining prediction objects corresponding to each frame of data to be marked respectively;
generating characteristic marking information corresponding to each predicted object according to the object type of each predicted object through a marking prediction model;
sequencing all the characteristic marking information, and constructing marking prediction data respectively corresponding to each frame of data to be marked.
The annotation prediction model in the embodiment of the invention is a model that identifies each target object in each frame of data to be annotated in the data stream, so as to determine the prediction information of each target object. The prediction information includes, but is not limited to, object position, object class, detectable (unoccluded) polygon range, predicted actual polygon range, object tracking ID, object moving speed, direction, and the like. The model may be a region-based convolutional neural network (R-CNN), a lightweight neural network such as MobileNet, or another object detection model.
In the embodiment of the invention, after the data stream to be labeled is obtained, the data stream to be labeled is input into a preset labeling prediction model according to frames, object identification is carried out on each frame of data to be labeled through the labeling prediction model so as to determine a predicted object in each frame of data to be labeled, and when the predicted object is determined, the feature labeling information corresponding to the predicted object can be determined according to the object type of the predicted object through the labeling prediction model.
For example, predicted objects such as cars, pedestrians, bicycles and trucks are identified in the data to be annotated; different feature labeling information can then be extracted for each predicted object according to its object type. Since cars and trucks are motor vehicles, feature labeling information such as position, moving speed, direction and size can be determined for each of them; a bicycle is a non-motor vehicle, so its detectable range, predicted direction, predicted speed and other feature labeling information can be predicted; a pedestrian is an object of special attention, so feature labeling information such as position and travel intention can be predicted.
After the feature labeling information is obtained, it can be recorded as row data: all the feature labeling information is sorted with the predicted object as the unit, and each row records the feature labeling information corresponding to all the predicted objects in one frame of data to be annotated, thereby constructing the annotation prediction data corresponding to each frame of data to be annotated.
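A sketch, under assumed class names and fields (none of which come from the patent), of how per-class feature labeling information could be collected into one row per frame:

```python
def feature_info(obj):
    """Select per-class feature labeling fields (classes and fields are illustrative)."""
    if obj["class"] in ("car", "truck"):       # motor vehicles
        keys = ("position", "speed", "direction", "size")
    elif obj["class"] == "bicycle":            # non-motor vehicles
        keys = ("detectable_range", "pred_direction", "pred_speed")
    else:                                      # e.g. pedestrians: objects of special attention
        keys = ("position", "intent")
    return {k: obj.get(k) for k in keys}

def build_row(frame_objects):
    """One row of annotation prediction data: feature info for every object in a frame."""
    return [feature_info(o) for o in frame_objects]

row = build_row([
    {"class": "car", "position": (1, 2), "speed": 5.0, "direction": 90, "size": "mid"},
    {"class": "pedestrian", "position": (3, 4), "intent": "cross"},
])
```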
Step 203, when the annotation prediction data meet preset construction conditions, constructing an initial input table and creating a slicing window by using the generated annotation prediction data, where the annotation prediction data are provided with initial data identifiers;
optionally, step 203 may comprise the sub-steps of:
when the number of generated rows of the annotation prediction data reaches a preset number threshold, constructing an initial input table by using the generated annotation prediction data;
or when the generation time of the annotation prediction data reaches a preset time threshold, constructing an initial input table by using the generated annotation prediction data;
and creating a slicing window by using a plurality of rows of marked prediction data in the initial input table.
A slicing window is a technical means, in a data processing engine that handles unbounded data sets, of cutting an ever-growing unbounded data set into finite data blocks so that the engine can perform aggregation operations on those finite blocks. The aggregation operation may include, but is not limited to, a minimum value, an average value, a sum of squares, a square difference, a percentile value, and any other operation that, based on a mathematical formula, takes inputs and outputs a single numerical value. Slicing windows include, but are not limited to, the following types: fixed time windows, sliding time windows, session time windows, count windows, and the like.
In an example of the present invention, when the number of rows of annotation prediction data generated by the annotation prediction model reaches a preset number threshold, the generated annotation prediction data may be sorted by frame number to construct an initial input table; or, when the generation time of the annotation prediction data reaches a preset time threshold, the generated annotation prediction data may likewise be sorted by frame number to construct the initial input table.
After the initial input table is constructed, some or all rows of annotation prediction data may be taken from it to construct the slicing window.
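The two construction conditions (row-count threshold or elapsed-time threshold) and the flush into an initial input table can be sketched as follows; the thresholds, timestamps and row shapes are illustrative assumptions, not taken from the patent:

```python
class InputTableBuilder:
    """Buffer annotation prediction rows; flush them as an initial input table
    when a row-count or an elapsed-time threshold is reached (both hypothetical)."""

    def __init__(self, max_rows=5, max_seconds=1.0):
        self.max_rows, self.max_seconds = max_rows, max_seconds
        self.rows, self.start = [], None

    def add(self, ts, row):
        if self.start is None:
            self.start = ts
        self.rows.append(row)
        # Either construction condition triggers the flush.
        if len(self.rows) >= self.max_rows or ts - self.start >= self.max_seconds:
            table, self.rows, self.start = self.rows, [], None
            return table  # the initial input table is ready
        return None

builder = InputTableBuilder(max_rows=3, max_seconds=10.0)
tables = []
for ts, row in enumerate([["a"], ["b"], ["c"], ["d"]]):
    table = builder.add(float(ts), row)
    if table is not None:
        tables.append(table)
```

A slicing window would then be cut from some or all rows of each flushed table.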
It should be noted that the initial data identifier may be a frame number of the data stream to be annotated, for example, the annotation prediction data is extracted from the 18 th frame of data to be annotated of the data stream to be annotated, where the initial data identifier may be 18.
Step 204, performing an aggregation operation with each row of annotation prediction data in the slicing window, and generating an operation result corresponding to each row of annotation prediction data;
in the embodiment of the present invention, after multiple rows or all of the annotation prediction data in the initial input table are selected to construct the slicing window, the category of aggregation operation (for example mean, maximum, minimum, sum of squares, square difference, or percentile value) may be selected according to the user's needs. The aggregation operation is performed on each row of annotation prediction data in the slicing window, calculating over the same category of data in the same row or in other rows, so as to generate the operation result corresponding to each row of annotation prediction data.
In a specific implementation, taking a fixed time window from t_{x-10} to t_x as the window type and the average of the object speed as the aggregation operation, the object speed in each row of annotation prediction data with initial data identifiers t_{x-10} to t_x may be averaged; the average over t_{x-10} to t_x is then recorded as the operation result of each of those rows and written to the output for each row.
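For that fixed-time-window case — averaging the object speed over the rows of a window and recording the average as every row's operation result — a hypothetical sketch (field names assumed):

```python
def mean_speed_per_window(window_rows):
    """Average the object speed over all rows in the slicing window and
    record that average as the operation result of every row."""
    speeds = [row["speed"] for row in window_rows]
    avg = sum(speeds) / len(speeds)
    return [{**row, "result": avg} for row in window_rows]

# Four rows with speeds 1.0 .. 4.0; the window average is 2.5.
window = [{"frame": i, "speed": float(i)} for i in range(1, 5)]
out = mean_speed_per_window(window)
```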
Step 205, respectively adjusting each initial data identifier in the slicing window to obtain a target data identifier corresponding to each row of labeled prediction data;
optionally, step 205 may comprise the following sub-steps:
selecting a target identifier from the initial data identifiers in the slicing window;
updating the last initial data identifier in the slicing window by adopting the target identifier;
increasing the target identification according to a preset numerical value;
selecting an initial data identifier to be updated from the non-updated initial data identifiers;
updating the initial data identification to be updated by adopting the increased target identification;
skipping to execute the step of increasing the target identifier according to a preset numerical value until all the initial data identifiers are updated;
and determining all initial data identifications at the current moment as target data identifications corresponding to the labeled prediction data of each row.
In the embodiment of the invention, any identifier may be selected from the initial data identifiers in the slicing window as the target identifier. The target identifier then replaces the last initial data identifier in the slicing window; after that replacement, the target identifier is increased by a preset value, the last of the not-yet-updated initial data identifiers is selected as the identifier to be updated, and it is replaced with the increased target identifier. The target identifier is increased by the preset value again, and so on, until all the initial data identifiers have been updated. At this point, all the initial data identifiers at the current moment may be determined as the target data identifiers respectively corresponding to each row of annotation prediction data.
For example, the slicing window contains annotation prediction data with initial data identifiers t_{x-10} to t_x, and the selected target identifier is t_{x-5}. The last initial data identifier in the slicing window is t_{x-10}, so t_{x-10} is replaced with t_{x-5}. The target identifier is then increased by the preset value 1 to t_{x-4}. The not-yet-updated initial data identifiers are now t_{x-9} to t_x; the last of these, t_{x-9}, is selected as the identifier to be updated and replaced with the increased target identifier t_{x-4}. These steps are repeated until all initial data identifiers are updated, yielding annotation prediction data with target data identifiers t_{x-5} to t_{x+5}.
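The identifier adjustment in this worked example amounts to shifting every identifier in the window by the same offset. A sketch following the described loop, with a preset increase value of 1 and x taken as 100 purely for concreteness:

```python
def recenter_identifiers(ids, target):
    """Replace the earliest identifier with the target, then each following
    identifier with target+1, target+2, ... (preset increase value of 1),
    so that past, present and future rows coexist in one slicing window."""
    ids = list(ids)
    for pos in range(len(ids)):  # earliest identifier first, as in the example
        ids[pos] = target
        target += 1
    return ids

x = 100  # concrete stand-in for the current moment t_x
# t_{x-10} .. t_x  becomes  t_{x-5} .. t_{x+5}
new_ids = recenter_identifiers(range(x - 10, x + 1), x - 5)
```

Equivalently, each identifier i becomes i + (target - ids[0]); the loop form mirrors the step-by-step update in the text.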
Step 206, establishing association between the target data identifiers and the operation results to generate a dynamic result table;
in the embodiment of the invention, after the target data identifier and the operation result are obtained, the target data identifier and the operation result can be used for establishing association to generate the dynamic result table.
And step 207, executing the association operation by adopting the initial input table and the dynamic result table, and generating and displaying a target output table.
Further, step 207 may comprise the following sub-steps:
traversing the initial input table and the dynamic result table;
sequentially updating the initial data identification by adopting the target data identification to obtain intermediate labeling prediction data;
sequentially associating the intermediate annotation prediction data and the operation result to obtain target annotation prediction data;
and constructing and displaying a target output table by adopting all target labeling prediction data.
In a specific implementation, the stream processing system may support multiple dynamic tables at the same time. After the dynamic result table is obtained, the initial input table and the dynamic result table are traversed, and each initial data identifier in the initial input table is sequentially updated with the corresponding target data identifier to obtain the intermediate annotation prediction data. The intermediate annotation prediction data are then sequentially associated with the operation results to obtain the target annotation prediction data. Specifically, after the operation result corresponding to the slicing window is obtained, the operation result may be recorded for each row of intermediate annotation prediction data; for example, if the operation result is an average value A, the operation result of each row of intermediate annotation prediction data may be recorded as A.
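A simplified association of the initial input table with the dynamic result table — identifier update followed by attaching the window's operation result. The table shapes and key names are assumptions of this sketch:

```python
def join_tables(input_table, id_map, results):
    """Update each row's identifier via the target-identifier map, then attach
    the operation result keyed by that identifier (a sketch, not the patent's schema)."""
    out = []
    for row in input_table:
        new_id = id_map[row["id"]]                       # intermediate annotation prediction data
        out.append({**row, "id": new_id, "result": results[new_id]})  # target annotation prediction data
    return out

input_table = [{"id": 10, "speed": 1.0}, {"id": 11, "speed": 4.0}]
id_map = {10: 15, 11: 16}        # initial -> target data identifier
results = {15: 2.5, 16: 2.5}     # operation result per target identifier
joined = join_tables(input_table, id_map, results)
```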
After the target annotation prediction data are obtained, a target output table is constructed from all the target annotation prediction data and displayed, providing engineers with a basis for further analysis through a UI (user interface) display, so as to achieve the effect of obtaining, in a stream processing system, window feature information of each object over the related past and future range.
Optionally, the method further comprises:
comparing the target labeling prediction data of each row in the target output table, and determining the variation amplitude between the target labeling prediction data of each row;
if the variation amplitude is larger than a preset variation threshold value, judging that the labeling prediction model is in an unstable state;
and if the change amplitude is smaller than or equal to the change threshold, judging that the labeling prediction model is in a stable state.
In the embodiment of the present invention, after the target output table is obtained, it must be determined whether unexpected changes occur in the target annotation prediction data of each row within the same time window; the target annotation prediction data of each row in the target output table respectively represent the object annotation prediction results within several frames before and after the current moment. The target annotation prediction data of the rows can therefore be compared, and the change amplitude between them determined by comparing the object annotation prediction results at the same position in each row. If the change amplitude is larger than the preset change threshold, it indicates that the current annotation prediction model produces recognition errors in the several frames before and after the current moment; the annotation prediction model is then judged to be in an unstable state, awaiting further optimization and improvement. If the change amplitude is smaller than or equal to the change threshold, the prediction results of the annotation prediction model contain no impermissible errors in the several frames before and after the current moment, and the annotation prediction model is judged to be in a stable state.
It should be noted that the change threshold may be of various types, including but not limited to the maximum allowable number of class transitions for the same object, the allowable difference in size determination for the same object, the maximum number of tracking-ID changes for the same object, and other measures of change amplitude.
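One of the listed threshold types — the maximum allowable number of class transitions for the same object — could be checked as follows; the row structure and threshold value are purely illustrative:

```python
def model_is_stable(rows, max_class_jumps=1):
    """Count how often the predicted class of the object at the same position
    changes across consecutive rows of the target output table; the model is
    judged stable if the count stays within the (hypothetical) threshold."""
    jumps = 0
    for prev, cur in zip(rows, rows[1:]):
        jumps += sum(p != c for p, c in zip(prev, cur))
    return jumps <= max_class_jumps

# Each row lists predicted classes for the same two tracked objects.
stable = model_is_stable([["car", "ped"], ["car", "ped"], ["car", "ped"]])
unstable = model_is_stable([["car", "ped"], ["truck", "ped"], ["car", "bike"]])
```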
Further, the method further comprises:
if the label prediction model is judged to be in an unstable state, dividing all target label prediction data into a training set and a test set according to a preset division ratio;
training the label prediction model by adopting a training set to obtain an updated label prediction model;
sequentially inputting target labeling prediction data in the test set to the updating labeling prediction model to obtain a plurality of updating output results;
comparing the plurality of updating output results, and determining the updating change amplitude among the updating output results;
if the updating change amplitude is larger than the change threshold value, determining the updating annotation prediction model as a new annotation prediction model, and skipping to execute the step of training the annotation prediction model by adopting a training set to obtain the updating annotation prediction model;
and if the updating change amplitude is smaller than or equal to the change threshold, judging that the updating annotation prediction model is trained completely, and determining the updating annotation prediction model as a new annotation prediction model.
In another example of the present invention, if the annotation prediction model is judged to be in an unstable state, the target annotation prediction data at that point contain a large amount of informative data. To further optimize the model, all the target annotation prediction data may be divided into a training set and a test set according to a preset division ratio. The preset division ratio may be, for example, training set : test set = 9:1 or 8:2; the embodiment of the present invention is not limited in this respect.
After the training set is obtained, the annotation prediction model is trained with it: the target annotation prediction data in the training set are input into the model one by one, and after an annotation prediction result is generated, the model parameters of the annotation prediction model are adjusted; the target annotation prediction data are then processed again, until the accuracy of the model's annotation prediction results reaches a preset threshold, at which point the updated annotation prediction model is obtained. The model parameters may be adjusted by, for example, gradient descent.
After the test set and the updated annotation prediction model are obtained, the test set can be adopted to further test the performance of the updated annotation prediction model, and target annotation prediction data in the test set are sequentially input to the updated annotation prediction model to obtain a plurality of updated output results; comparing the plurality of updating output results, and determining the updating change amplitude among the updating output results; if the updating change amplitude is larger than the change threshold value, further carrying out repeated training on the updating labeling prediction model by adopting a training set; and if the updating change amplitude is smaller than or equal to the change threshold, judging that the updating annotation prediction model is trained completely, and determining the updating annotation prediction model as a new annotation prediction model.
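The division into training and test sets by a preset ratio might be sketched as follows (the 9:1 ratio, seed and integer rows are assumptions of the sketch):

```python
import random

def split_dataset(rows, train_ratio=0.9, seed=0):
    """Shuffle the target annotation prediction data and split them into
    training and test sets by a preset ratio (9:1 here; 8:2 is equally valid)."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)  # fixed seed for reproducibility
    cut = int(len(rows) * train_ratio)
    return rows[:cut], rows[cut:]

train, test = split_dataset(range(100))
```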
In the embodiment of the invention, a display device for processing the labeled data stream acquires the data stream to be annotated, which is collected by an unmanned vehicle during actual driving, inputs the data stream to be annotated frame by frame into a preset annotation prediction model, and sequentially generates annotation prediction data corresponding to each frame of data to be annotated. When the generated annotation prediction data meet a preset construction condition, an initial input table is constructed and a slicing window is created from the generated annotation prediction data; the required aggregation operation is performed on each row of annotation prediction data in the slicing window to obtain operation results; the initial data identifiers originally carried by each row of annotation prediction data are adjusted to obtain target data identifiers; and the operation results and the target data identifiers are sorted according to the original order of the annotation prediction data to generate a dynamic result table. An association operation is then performed on the initial input table and the dynamic result table according to a preset association condition, thereby generating and displaying a target output table.
The method and the device solve the technical problem in the prior art that data before and after the current moment can only be analyzed by batch processing, so that the real-time performance of data stream processing cannot be guaranteed. By adjusting the initial data identifiers of the annotation prediction data, the position of the current moment is shifted so that past, present and future data coexist in the same slicing window; on the basis of guaranteeing the real-time performance of data stream processing, it thus becomes possible to analyze more efficiently whether the annotation prediction data before and after the current moment jump, among other conditions.
Referring to fig. 3, fig. 3 is a block diagram illustrating a display apparatus for processing a labeled data stream according to a third embodiment of the present invention.
The embodiment of the invention provides a display device for processing a marked data stream, which comprises:
a data stream obtaining module 301, configured to obtain a data stream to be labeled;
the annotation prediction module 302 is configured to input a data stream to be annotated into a preset annotation prediction model, and sequentially generate annotation prediction data corresponding to each frame of data to be annotated in the data stream to be annotated;
an initial input table building module 303, configured to build an initial input table and create a slicing window by using the generated annotation prediction data when the annotation prediction data meet a preset building condition;
a window data processing module 304, configured to perform aggregation operation and identifier adjustment on each row of annotation prediction data in the slicing window, and generate a dynamic result table;
and the association display module 305 is configured to execute association operations by using the initial input table and the dynamic result table, generate a target output table, and display the target output table.
Optionally, the apparatus is applied to a processor of an unmanned vehicle, the processor is in communication connection with various sensors disposed on the unmanned vehicle, and the data stream obtaining module 301 includes:
the environment data acquisition submodule is used for acquiring environment data of the environment where the unmanned vehicle is located in real time through the various sensors;
and the data stream generation submodule is used for receiving the environmental data according to the acquisition time sequence of the environmental data and sequencing the environmental data to obtain the data stream to be marked.
Optionally, the annotation prediction module 302 includes:
the data stream input submodule is used for inputting the data stream to be labeled into a preset labeling prediction model;
the object identification submodule is used for sequentially carrying out object identification on each frame of data to be marked in the data stream to be marked through the marking prediction model and determining a prediction object corresponding to each frame of data to be marked;
the characteristic labeling information generation submodule is used for generating characteristic labeling information corresponding to each predicted object according to the object type of each predicted object through the labeling prediction model;
and the information sequencing submodule is used for sequencing the characteristic marking information and constructing marking prediction data corresponding to each frame of data to be marked.
Optionally, the initial input table building module 303 includes:
the initial input table building submodule is used for building an initial input table by adopting the generated annotation prediction data when the number of generated rows of the annotation prediction data reaches a preset number threshold; or when the generation time of the annotation prediction data reaches a preset time threshold, constructing an initial input table by using the generated annotation prediction data;
and the fragmentation window creating submodule is used for creating a fragmentation window by adopting a plurality of rows of labeled prediction data in the initial input table.
Optionally, the labeled prediction data is provided with an initial data identifier; the window data processing module 304 includes:
the aggregation operation sub-module is used for executing aggregation operation by adopting the labeled prediction data of each line in the slicing window to generate an operation result corresponding to the labeled prediction data of each line;
the mark adjusting submodule is used for respectively adjusting each initial data mark in the slicing window to obtain a target data mark corresponding to each row of the marked prediction data;
and the result table association submodule is used for establishing association between the target data identifier and the operation result to generate a dynamic result table.
Optionally, the identifier adjusting submodule is specifically configured to:
selecting a target identifier from the initial data identifiers in the slicing window;
updating the last initial data identifier in the fragment window by adopting the target identifier;
increasing the target mark according to a preset numerical value;
selecting an initial data identifier to be updated from the non-updated initial data identifiers;
updating the initial data identification to be updated by adopting the increased target identification;
skipping to execute the step of increasing the target identifier according to a preset numerical value until all the initial data identifiers are updated;
and determining all initial data identifications at the current moment as target data identifications corresponding to the labeled prediction data of each row.
Optionally, the association display module 305 includes:
the traversal submodule is used for traversing the initial input table and the dynamic result table;
the identification updating submodule is used for sequentially updating the initial data identification by adopting the target data identification to obtain intermediate labeling prediction data;
the data and result correlation submodule is used for sequentially correlating the intermediate annotation prediction data with the operation result to obtain target annotation prediction data;
and the target output table constructing and displaying submodule is used for constructing and displaying a target output table by adopting all the target labeling prediction data.
Optionally, the apparatus further comprises:
the variation amplitude determining module is used for comparing target marking prediction data of each row in the target output table and determining the variation amplitude between the target marking prediction data of each row;
the unstable state judgment module is used for judging that the labeling prediction model is in an unstable state if the variation amplitude is larger than a preset variation threshold;
and the steady state judgment module is used for judging that the labeling prediction model is in a steady state if the change amplitude is smaller than or equal to the change threshold.
Optionally, the apparatus further comprises:
the data set dividing module is used for dividing all the target labeling prediction data into a training set and a test set according to a preset dividing proportion if the labeling prediction model is judged to be in an unstable state;
the model training module is used for training the label prediction model by adopting the training set to obtain an updated label prediction model;
the model testing module is used for sequentially inputting the target labeling prediction data in the test set to the updating labeling prediction model to obtain a plurality of updating output results;
the updating change amplitude calculation module is used for comparing the plurality of updating output results and determining the updating change amplitude among the updating output results;
a training loop module, configured to determine the updated annotation prediction model as a new annotation prediction model if the updated change amplitude is greater than the change threshold, and skip to execute the step of training the annotation prediction model by using the training set to obtain an updated annotation prediction model;
and the training completion judging module is used for judging that the training of the updated labeling prediction model is completed and determining the updated labeling prediction model as a new labeling prediction model if the updating change amplitude is smaller than or equal to the change threshold.
An embodiment of the present invention further provides an electronic device, which includes a memory and a processor, where the memory stores a computer program, and when the computer program is executed by the processor, the processor executes the steps of the display method for processing the annotated data stream according to any embodiment of the present invention.
An embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed, implements a presentation method for processing an annotated data stream according to any embodiment of the present invention.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and other media capable of storing program code.
The above-mentioned embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them; although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (12)

1. A presentation method for annotation data stream processing, characterized by comprising the following steps:
acquiring a data stream to be labeled;
inputting the data stream to be labeled into a preset labeling prediction model, and sequentially generating labeled prediction data corresponding to each frame of data to be labeled in the data stream to be labeled;
when the labeled prediction data meet a preset construction condition, constructing an initial input table from the generated labeled prediction data and creating a slicing window;
performing an aggregation operation and identifier adjustment on each row of labeled prediction data in the slicing window to generate a dynamic result table;
and performing an association operation on the initial input table and the dynamic result table to generate and display a target output table.
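By way of illustration only — the claim prescribes no implementation, and every name, threshold, and data value below is hypothetical — the five steps of claim 1 compose roughly as follows in Python:

```python
def predict(frame):
    # Stand-in for the preset labeling prediction model: one prediction row per frame.
    return {"id": frame["seq"], "label": frame["value"] * 2}

def run_pipeline(stream, build_threshold=3):
    rows = []
    for frame in stream:                          # labeled prediction data, row by row
        rows.append(predict(frame))
        if len(rows) >= build_threshold:          # preset construction condition
            break
    initial_input = list(rows)                    # initial input table
    window = list(initial_input)                  # slicing window over its rows
    agg = sum(r["label"] for r in window)         # aggregation operation
    dynamic_result = [{"id": r["id"] + 1, "agg": agg}   # identifier adjustment
                      for r in window]
    # association: join the adjusted identifiers and results back onto the input rows
    return [{**r, "id": d["id"], "agg": d["agg"]}
            for r, d in zip(initial_input, dynamic_result)]

table = run_pipeline({"seq": i, "value": i} for i in range(5))
```

The sketch collapses each claimed step into one line so the data flow between the input table, the window, and the result table is visible; the dependent claims below refine each step individually.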
2. The method of claim 1, wherein the method is applied to a processor of an unmanned vehicle, the processor being in communication connection with various sensors arranged on the unmanned vehicle, and the step of acquiring the data stream to be labeled comprises:
acquiring, in real time through the various sensors, environmental data of the environment in which the unmanned vehicle is located;
and receiving the environmental data in the order of their acquisition times and sorting them to obtain the data stream to be labeled.
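The acquisition-and-sort step of claim 2 reduces to ordering sensor readings by timestamp; a minimal sketch with hypothetical sensor records:

```python
# Hypothetical multi-sensor readings arriving out of acquisition order.
readings = [
    {"sensor": "lidar",  "t": 2.0, "data": "scan-2"},
    {"sensor": "camera", "t": 1.0, "data": "img-1"},
    {"sensor": "radar",  "t": 3.0, "data": "echo-3"},
]
# Sort by acquisition time to form the data stream to be labeled.
stream_to_label = sorted(readings, key=lambda r: r["t"])
```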
3. The method according to claim 1 or 2, wherein the step of inputting the data stream to be labeled into a preset labeling prediction model and sequentially generating labeled prediction data corresponding to each frame of data to be labeled in the data stream to be labeled comprises:
inputting the data stream to be labeled into a preset labeling prediction model;
sequentially performing object recognition on each frame of data to be labeled in the data stream through the labeling prediction model, and determining the predicted objects respectively corresponding to each frame of data to be labeled;
generating, through the labeling prediction model, feature labeling information corresponding to each predicted object according to the object type of each predicted object;
and sorting the feature labeling information to construct the labeled prediction data respectively corresponding to each frame of data to be labeled.
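With a stub in place of the actual recognition model (the recognizer, the object types, and the feature strings below are all invented for illustration), the per-frame flow of claim 3 looks like:

```python
def recognize(frame):
    # Stand-in for the labeling prediction model's object recognition.
    return frame["objects"]

def feature_info(obj_type):
    # Feature labeling information keyed by object type (hypothetical values).
    return {"car": "bbox+velocity", "person": "bbox+pose"}.get(obj_type, "bbox")

def label_frame(frame):
    objs = recognize(frame)
    # Sort the feature labeling information before assembling the record.
    info = sorted((o, feature_info(o)) for o in objs)
    return {"frame": frame["id"], "labels": info}

frames = [{"id": 0, "objects": ["person", "car"]}]
labeled = [label_frame(f) for f in frames]
```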
4. The method according to claim 1 or 2, wherein the step of constructing an initial input table and creating a slicing window from the generated labeled prediction data when the labeled prediction data meet a preset construction condition comprises:
when the number of generated rows of the labeled prediction data reaches a preset number threshold, constructing an initial input table from the generated labeled prediction data;
or, when the generation time of the labeled prediction data reaches a preset time threshold, constructing an initial input table from the generated labeled prediction data;
and creating a slicing window from a plurality of rows of labeled prediction data in the initial input table.
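The two alternative construction conditions of claim 4 (row count or elapsed time) can be checked by one predicate; thresholds and the simulated clock below are illustrative:

```python
def should_build(rows, elapsed, max_rows=4, max_seconds=0.5):
    # Either alternative of the preset construction condition triggers the build.
    return len(rows) >= max_rows or elapsed >= max_seconds

rows = []
for i in range(10):
    rows.append({"id": i})
    if should_build(rows, elapsed=0.1 * i):      # simulated clock
        break
initial_input = list(rows)                       # initial input table
window = list(initial_input)                     # slicing window over its rows
```

Here the row-count branch fires first (4 rows arrive before 0.5 s of simulated time); with a slower stream the time branch would fire instead.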
5. The method according to claim 1 or 2, wherein the labeled prediction data are provided with initial data identifiers; the step of performing an aggregation operation and identifier adjustment on each row of labeled prediction data in the slicing window to generate a dynamic result table comprises:
performing an aggregation operation on each row of labeled prediction data in the slicing window to generate an operation result corresponding to each row of labeled prediction data;
adjusting each initial data identifier in the slicing window to obtain a target data identifier corresponding to each row of labeled prediction data;
and associating the target data identifiers with the operation results to generate a dynamic result table.
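A minimal sketch of claim 5's three sub-steps, with hypothetical rows and a sum as the aggregation operation (the claim does not fix which aggregate is used):

```python
# Each window row carries an initial data identifier and a value.
window = [{"init_id": 10, "v": 1}, {"init_id": 11, "v": 2}, {"init_id": 12, "v": 3}]
window_sum = sum(r["v"] for r in window)          # aggregation over the window
dynamic_result = {
    # identifier adjustment (+1 here, purely illustrative), associated with
    # the operation result for that row: target id -> operation result.
    r["init_id"] + 1: {"row_v": r["v"], "window_sum": window_sum}
    for r in window
}
```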
6. The method according to claim 5, wherein the step of adjusting each initial data identifier in the slicing window to obtain a target data identifier corresponding to each row of labeled prediction data comprises:
selecting a target identifier from the initial data identifiers in the slicing window;
updating the last initial data identifier in the slicing window with the target identifier;
incrementing the target identifier by a preset value;
selecting an initial data identifier to be updated from the initial data identifiers not yet updated;
updating the selected initial data identifier with the incremented target identifier;
jumping back to the step of incrementing the target identifier by the preset value until all the initial data identifiers have been updated;
and determining all the initial data identifiers at the current moment as the target data identifiers corresponding to each row of labeled prediction data.
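Read procedurally, claim 6 walks the window from the last identifier backwards, assigning the target identifier and then the target plus successive increments. A sketch under the assumption that "not yet updated" means the next identifier toward the front and the preset step is 1:

```python
def adjust_ids(initial_ids, target, step=1):
    ids = list(initial_ids)
    ids[-1] = target                     # update the last identifier with the target
    for i in range(len(ids) - 2, -1, -1):
        target += step                   # increment the target by the preset value
        ids[i] = target                  # update the next not-yet-updated identifier
    return ids                           # all identifiers are now target identifiers

new_ids = adjust_ids([101, 102, 103], target=7)
```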
7. The method according to claim 5, wherein the step of performing the association operation on the initial input table and the dynamic result table to generate and display the target output table comprises:
traversing the initial input table and the dynamic result table;
sequentially updating the initial data identifiers with the target data identifiers to obtain intermediate labeled prediction data;
sequentially associating the intermediate labeled prediction data with the operation results to obtain target labeled prediction data;
and constructing and displaying a target output table from all the target labeled prediction data.
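The association step of claim 7 is essentially a keyed join: replace each row's initial identifier with its target identifier, then attach the matching operation result. A sketch with hypothetical tables:

```python
initial_input = [{"init_id": 1, "v": 5}, {"init_id": 2, "v": 7}]
id_map = {1: 100, 2: 200}                           # initial -> target identifier
dynamic_result = {100: {"sum": 12}, 200: {"sum": 12}}

target_output = []
for row in initial_input:                           # traverse both tables
    tid = id_map[row["init_id"]]                    # intermediate labeled prediction data
    # associate with the operation result to form target labeled prediction data
    target_output.append({"id": tid, "v": row["v"], **dynamic_result[tid]})
```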
8. The method of claim 7, further comprising:
comparing the rows of target labeled prediction data in the target output table to determine the variation amplitude between rows;
if the variation amplitude is larger than a preset variation threshold, determining that the labeling prediction model is in an unstable state;
and if the variation amplitude is smaller than or equal to the variation threshold, determining that the labeling prediction model is in a stable state.
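The stability test of claim 8 can be sketched with one assumed amplitude metric (the claim does not define it; the maximum row-to-row difference is used here purely as an example):

```python
def model_state(values, change_threshold):
    # Variation amplitude: largest difference between adjacent rows of the
    # target output table (illustrative choice of metric).
    amplitude = max(abs(b - a) for a, b in zip(values, values[1:]))
    return "unstable" if amplitude > change_threshold else "stable"

state = model_state([0.50, 0.52, 0.90], change_threshold=0.1)
```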
9. The method of claim 8, further comprising:
if the labeling prediction model is determined to be in an unstable state, dividing all the target labeled prediction data into a training set and a test set according to a preset division ratio;
training the labeling prediction model with the training set to obtain an updated labeling prediction model;
sequentially inputting the target labeled prediction data in the test set into the updated labeling prediction model to obtain a plurality of updated output results;
comparing the plurality of updated output results to determine the update variation amplitude among the updated output results;
if the update variation amplitude is larger than the variation threshold, determining the updated labeling prediction model as the new labeling prediction model, and jumping back to the step of training the labeling prediction model with the training set to obtain an updated labeling prediction model;
and if the update variation amplitude is smaller than or equal to the variation threshold, determining that the updated labeling prediction model has been trained, and determining the updated labeling prediction model as the new labeling prediction model.
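The split-train-measure loop of claim 9 can be sketched with stubs: a scalar `scale` stands in for the model, each "training" round halves it, and the loop exits once the update variation amplitude on the test set falls within threshold. All values are illustrative:

```python
def retrain_until_stable(scale, test_set, threshold):
    # `scale` is a stub for the labeling prediction model; halving it each
    # round shrinks the row-to-row variation of its outputs on the test set.
    rounds = 0
    while True:
        outputs = [scale * x for x in test_set]                 # inference on test set
        amplitude = max(abs(b - a) for a, b in zip(outputs, outputs[1:]))
        if amplitude <= threshold:
            return scale, rounds                                # training complete
        scale /= 2                                              # next updated model
        rounds += 1

data = list(range(10))
k = int(len(data) * 0.8)                         # preset division ratio
train_set, test_set = data[:k], data[k:]         # train_set unused by this stub
scale, rounds = retrain_until_stable(1.0, test_set, threshold=0.3)
```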
10. A presentation apparatus for annotated data stream processing, comprising:
the data stream acquisition module is used for acquiring the data stream to be labeled;
the labeling prediction module is used for inputting the data stream to be labeled into a preset labeling prediction model and sequentially generating labeled prediction data corresponding to each frame of data to be labeled in the data stream to be labeled;
the initial input table construction module is used for constructing an initial input table and creating a slicing window from the generated labeled prediction data when the labeled prediction data meet a preset construction condition;
the window data processing module is used for performing an aggregation operation and identifier adjustment on each row of labeled prediction data in the slicing window to generate a dynamic result table;
and the association display module is used for performing an association operation on the initial input table and the dynamic result table to generate and display a target output table.
11. An electronic device, comprising a memory and a processor, wherein the memory stores a computer program, and the computer program, when executed by the processor, causes the processor to perform the steps of the presentation method for annotation data stream processing according to any one of claims 1 to 9.
12. A computer-readable storage medium on which a computer program is stored, the computer program, when executed, implementing a presentation method of annotation data stream processing according to any one of claims 1 to 9.
CN202111113433.4A 2021-09-18 2021-09-18 Display method, device, equipment and medium for processing annotation data stream Active CN113850929B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111113433.4A CN113850929B (en) 2021-09-18 2021-09-18 Display method, device, equipment and medium for processing annotation data stream


Publications (2)

Publication Number Publication Date
CN113850929A true CN113850929A (en) 2021-12-28
CN113850929B CN113850929B (en) 2023-05-26

Family

ID=78979336

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111113433.4A Active CN113850929B (en) 2021-09-18 2021-09-18 Display method, device, equipment and medium for processing annotation data stream

Country Status (1)

Country Link
CN (1) CN113850929B (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102456069A (en) * 2011-08-03 2012-05-16 中国人民解放军国防科学技术大学 Incremental aggregate counting and query methods and query system for data stream
US20160004751A1 (en) * 2013-02-15 2016-01-07 Telefonaktiebolaget Lm Ericsson (Publ) Optimized query execution in a distributed data stream processing environment
CN105872432A (en) * 2016-04-21 2016-08-17 天津大学 Rapid self-adaptive frame rate conversion device and method
CN107230349A (en) * 2017-05-23 2017-10-03 长安大学 A kind of online real-time short time traffic flow forecasting method
WO2017185576A1 (en) * 2016-04-25 2017-11-02 百度在线网络技术(北京)有限公司 Multi-streaming data processing method, system, storage medium, and device
CN108462605A (en) * 2018-02-06 2018-08-28 国家电网公司 A kind of prediction technique and device of data
CN109635793A (en) * 2019-01-31 2019-04-16 南京邮电大学 A kind of unmanned pedestrian track prediction technique based on convolutional neural networks
CN110569704A (en) * 2019-05-11 2019-12-13 北京工业大学 Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
CN110807123A (en) * 2019-10-29 2020-02-18 中国科学院上海微***与信息技术研究所 Vehicle length calculation method, device and system, computer equipment and storage medium
CN111090688A (en) * 2019-12-23 2020-05-01 北京奇艺世纪科技有限公司 Smoothing processing method and device for time sequence data
CN111970584A (en) * 2020-07-08 2020-11-20 国网宁夏电力有限公司电力科学研究院 Method, device and equipment for processing data and storage medium
US20210191915A1 (en) * 2019-08-02 2021-06-24 Timescale, Inc. Type-specific compression in database systems
CN113138960A (en) * 2021-05-17 2021-07-20 毕晓柏 Data storage method and system based on cloud storage space adjustment
CN113191905A (en) * 2021-04-23 2021-07-30 北京金堤征信服务有限公司 Shareholder data processing method and device, electronic equipment and readable storage medium
CN113408671A (en) * 2021-08-18 2021-09-17 成都时识科技有限公司 Object identification method and device, chip and electronic equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU Zhenjun; ZHANG Lei: "Research on the application of big data in railway engineering management ***", Enterprise Technology Development *

Also Published As

Publication number Publication date
CN113850929B (en) 2023-05-26

Similar Documents

Publication Publication Date Title
CN113642633B (en) Method, device, equipment and medium for classifying driving scene data
CN110364008B (en) Road condition determining method and device, computer equipment and storage medium
CN109754594B (en) Road condition information acquisition method and equipment, storage medium and terminal thereof
WO2020248386A1 (en) Video analysis method and apparatus, computer device and storage medium
CN108256431B (en) Hand position identification method and device
CN114415628A (en) Automatic driving test method and device, electronic equipment and storage medium
CN111554105B (en) Intelligent traffic identification and statistics method for complex traffic intersection
CN104819726A (en) Navigation data processing method, navigation data processing device and navigation terminal
CN113343461A (en) Simulation method and device for automatic driving vehicle, electronic equipment and storage medium
CN110021161B (en) Traffic flow direction prediction method and system
CN116046008A (en) Situation awareness-based route planning method, system and efficiency evaluation device
WO2021146906A1 (en) Test scenario simulation method and apparatus, computer device, and storage medium
CN115830399A (en) Classification model training method, apparatus, device, storage medium, and program product
CN112447060A (en) Method and device for recognizing lane and computing equipment
CN112164223B (en) Intelligent traffic information processing method and device based on cloud platform
CN117456482A (en) Abnormal event identification method and system for traffic monitoring scene
CN115984634B (en) Image detection method, apparatus, device, storage medium, and program product
CN115114786B (en) Assessment method, system and storage medium for traffic flow simulation model
CN115019508B (en) Road monitoring traffic flow simulation method, device, equipment and medium based on machine learning
CN113850929B (en) Display method, device, equipment and medium for processing annotation data stream
CN115564800A (en) Action track prediction method and device
Huang et al. A bus crowdedness sensing system using deep-learning based object detection
CN113724493A (en) Analysis method and device of flow channel, storage medium and terminal
Li et al. Personalized trajectory prediction for driving behavior modeling in ramp-merging scenarios
CN106097751A (en) Vehicle travel control method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231031

Address after: Room 908, Building A2, 23 Spectral Middle Road, Huangpu District, Guangzhou City, Guangdong Province, 510000

Patentee after: Guangzhou Yuji Technology Co.,Ltd.

Address before: Room 687, No. 333, jiufo Jianshe Road, Zhongxin Guangzhou Knowledge City, Guangzhou, Guangdong 510555

Patentee before: Guangzhou WeRide Technology Limited Company
