CN114330543A - Model training method, device, apparatus, medium, and program product - Google Patents

Model training method, device, apparatus, medium, and program product

Info

Publication number
CN114330543A
Authority
CN
China
Prior art keywords
image
fuzzy
original image
labeling
original
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202111627016.1A
Other languages
Chinese (zh)
Inventor
Zhu Shuanghe (朱双贺)
Cao Liang (曹亮)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Intelligent Connectivity Beijing Technology Co Ltd
Original Assignee
Apollo Intelligent Connectivity Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apollo Intelligent Connectivity Beijing Technology Co Ltd
Priority to CN202111627016.1A
Publication of CN114330543A
Withdrawn

Landscapes

  • Image Processing (AREA)

Abstract

The present disclosure provides a model training method, apparatus, device, medium, and program product, relating to the field of computer technologies and, in particular, to the fields of Internet of Vehicles and intelligent cockpit technologies. The specific implementation scheme is as follows: acquire the annotation result of a blurred image, where the blurred image is obtained by blurring an original image; acquire the original image; determine the association between the annotation result and the original image; and train a target network model based on the annotation result, the original image, and the association. According to the technical solution of the embodiments of the present disclosure, leakage of private data during image annotation can be avoided.

Description

Model training method, device, apparatus, medium, and program product
Technical Field
The present disclosure relates to the field of computer technologies, and in particular to a model training method, apparatus, device, medium, and program product in the fields of Internet of Vehicles and intelligent cockpit technologies.
Background
With the rapid development of artificial intelligence, computer vision is used in more and more fields, for example autonomous driving and mobile robotics. To enable an autonomous vehicle or robot to travel on its own, a model usually has to be trained on a large number of annotated image samples.
Image annotation generally involves human annotators, who can thereby see sensitive information contained in the images, so private information is easily leaked. How to avoid such leakage during image annotation has therefore drawn developers' attention.
Disclosure of Invention
The present disclosure provides a model training method, apparatus, device, medium, and program product.
According to an aspect of the present disclosure, there is provided a model training method, including:
acquiring an annotation result of a blurred image, where the blurred image is obtained by blurring an original image;
acquiring the original image;
determining an association between the annotation result and the original image;
and training a target network model based on the annotation result, the original image, and the association.
According to another aspect of the present disclosure, there is provided a model training method, including:
blurring an original image to obtain a blurred image;
sending the blurred image to an annotation device, instructing the annotation device to annotate the blurred image to obtain an annotation result of the blurred image;
and sending the original image to a model training device, instructing the model training device to: determine an association between the annotation result and the original image; and train a target network model based on the annotation result, the original image, and the association.
According to another aspect of the present disclosure, there is provided a model training apparatus, including:
an annotation result acquisition module, configured to acquire an annotation result of a blurred image, where the blurred image is obtained by blurring an original image;
an original image acquisition module, configured to acquire the original image;
an association determining module, configured to determine an association between the annotation result and the original image;
and a model training module, configured to train a target network model based on the annotation result, the original image, and the association.
According to another aspect of the present disclosure, there is provided a model training apparatus, including:
a blurred image acquisition module, configured to blur an original image to obtain a blurred image;
a blurred image sending module, configured to send the blurred image to an annotation device and instruct the annotation device to annotate the blurred image to obtain an annotation result of the blurred image;
and an original image sending module, configured to send the original image to a model training device and instruct the model training device to: determine an association between the annotation result and the original image; and train a target network model based on the annotation result, the original image, and the association.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the model training method of any of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the model training method of any of the embodiments of the present disclosure.
According to another aspect of the present disclosure, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the model training method of any of the embodiments of the present disclosure.
Embodiments of the present disclosure can avoid the leakage of private data during image annotation.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. In the drawings:
FIG. 1a is a schematic diagram of a model training method provided in accordance with an embodiment of the present disclosure;
FIG. 1b is a schematic diagram of a blurred annotation box provided according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a model training method provided in accordance with an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a model training method provided in accordance with an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a model training method provided in accordance with an embodiment of the present disclosure;
FIG. 5 is a scene diagram of model training provided according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a model training apparatus provided in accordance with an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a model training apparatus provided in accordance with an embodiment of the present disclosure;
FIG. 8 is a block diagram of an electronic device for implementing a model training method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1a is a flowchart of a model training method provided according to an embodiment of the present disclosure, applicable to scenarios where image annotation is performed on blurred images. The model training method of this embodiment may be executed by a model training apparatus, which may be implemented in software and/or hardware and is configured in an electronic device with sufficient computing capability; the electronic device may be a client device or a server device, where the client device is, for example, a mobile phone, a tablet computer, a vehicle-mounted terminal, or a desktop computer.
S110: acquire the annotation result of the blurred image, where the blurred image is obtained by blurring the original image.
The blurred image is obtained by blurring the original image. It is provided to the annotation device so that annotation is performed on the blurred image, preventing private information contained in the original image from being exposed during annotation. Illustratively, the blurred image is produced by the image acquisition device using a Gaussian blur algorithm. As another example, the blurred image is produced by an image processing device, which acquires the original image from the image acquisition device, blurs it, and provides the blurred image to the model training device. Private information in the original image may include, for example, specific road names on road signs, pedestrians' faces, or vehicle license plate numbers.
The annotation result is obtained by the annotation device annotating the blurred image. It contains the position of each blurred annotation box in the blurred image and the label of each box. In a specific example, as shown in Fig. 1b, the annotation result of the current blurred image contains the position of blurred annotation box A and its label (for example, "traffic light"). The position of blurred annotation box A is (x1min, y1min, x1max, y1max), where (x1min, y1min) are the coordinates of the box's upper-left corner and (x1max, y1max) those of its lower-right corner, so a rectangular box can be located in the blurred image from this position information. Of course, one blurred image may contain several blurred annotation boxes, each with its own label. A minimal data structure for such a result is sketched below.
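The patent does not prescribe a concrete data format for the annotation result; the following Python sketch is one plausible shape, and the field names (image_id, box, label) are illustrative assumptions rather than anything from the source:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class BoxAnnotation:
    # (xmin, ymin, xmax, ymax) in pixels: upper-left and lower-right corners
    box: Tuple[int, int, int, int]
    label: str  # e.g. "traffic light", "pedestrian", "road sign"

@dataclass
class AnnotationResult:
    image_id: str               # blurred-image identifier, e.g. "a1_m"
    boxes: List[BoxAnnotation]  # one entry per blurred annotation box
```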
The annotation result obtained by the model training device may carry a blurred-image identifier, which is matched against original-image identifiers to find the original image corresponding to the annotation result. Illustratively, the annotation result of the blurred image carries the identifier a1_m, used to determine the matching original image. The last character of an identifier indicates the image type: m for a blurred image, y for an original image. If the first two characters of an original-image identifier and a blurred-image identifier are the same, the blurred image currently being compared is associated with that original image. That is, when the original image's identifier is a1_y, that original image can be determined to match the above annotation result.
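A minimal sketch of the identifier convention just described, assuming the prefix before the underscore is the shared key (the patent speaks of comparing the first two characters) and the suffix m/y marks blurred versus original; the function names are hypothetical:

```python
def parse_image_id(image_id: str) -> tuple[str, str]:
    # Split "a1_m" into the shared prefix "a1" and the type suffix
    # ("m" = blurred, "y" = original).
    prefix, kind = image_id.rsplit("_", 1)
    return prefix, kind

def ids_match(blurred_id: str, original_id: str) -> bool:
    # Identifiers match when they share a prefix and carry the
    # expected blurred/original suffixes.
    b_prefix, b_kind = parse_image_id(blurred_id)
    o_prefix, o_kind = parse_image_id(original_id)
    return b_prefix == o_prefix and b_kind == "m" and o_kind == "y"

assert ids_match("a1_m", "a1_y")
assert not ids_match("a1_m", "a2_y")
```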
In this embodiment of the present disclosure, to perform model training, the model training device may first receive the annotation result of the blurred image sent by the annotation device. Because the result is produced by annotating the blurred image, the annotation device never obtains the original image, so private information in the original image is not exposed. Moreover, since the blurred image is obtained by blurring the original image, the subjects it contains and their positions are the same as in the original image; both the positions of the blurred annotation boxes and their labels therefore stand for the annotation of the original image, protecting the private data in the original image without affecting the annotation itself.
Illustratively, the original image contains a road sign, a traffic light, and a pedestrian. The specific road name on the sign and the pedestrian's face are private information. To prevent the annotation device from obtaining them, the image acquisition device may blur the original image right after acquisition and send the resulting blurred image to the annotation device. During annotation only the type of each subject (e.g. road sign, traffic light, pedestrian) has to be labeled; the subject's specific details (e.g. the road name on the sign or the pedestrian's face) are not needed. Although the annotation device cannot read the road name or see a clear face in the blurred image, annotating the sign and the pedestrian is unaffected, so image annotation is achieved while leakage of private information is avoided.
S120: acquire the original image.
Training directly on the blurred image would degrade the training result. Therefore, after obtaining the annotation result of the blurred image, the model training device must further acquire the original image, so that training can be based on the original image together with the blurred image's annotation result. The acquired original image carries an original-image identifier used to match it with the annotation result. Specifically, the model training device receives the original image sent by the image acquisition device for model training.
S130: determine the association between the annotation result and the original image.
After obtaining the annotation result and the original image, the model training device must determine the association between them. Specifically, the blurred-image identifier can be extracted from the annotation result, the original-image identifier from the original image, and the association determined by comparing the two identifiers.
Illustratively, the identifiers of two original images captured by the image acquisition device are a1_y and a2_y, and the identifiers of the blurred images obtained from them are a1_m and a2_m. When determining the association between an annotation result and an original image, the first two characters of the original-image and blurred-image identifiers are compared; if they are the same, the annotation result carried by the current blurred-image identifier matches the original image carried by the current original-image identifier.
S140: train the target network model based on the annotation result, the original image, and the association.
In this embodiment, the model training device trains the target network model based on the annotation result of the blurred image, the original image, and the association. Specifically, the obtained annotation results and original images are grouped into sample groups according to the association; in each group, the blurred image behind the annotation result was obtained by blurring that group's original image. Original annotation boxes can then be drawn in the associated original image at the positions of the blurred annotation boxes (blurred and original boxes correspond one to one), and the annotated original image serves as a training sample. Each blurred box's label in the annotation result is used as the label of the corresponding original box. Finally, the target network model is trained on the training samples and labels.
In a specific example, the original image a1_y and the annotation result of the blurred image a1_m are first grouped into a sample group according to their identifiers. The annotation result contains the positions and labels of a first and a second blurred annotation box. A first and a second original annotation box are then drawn in the original image a1_y at the positions of the first and second blurred boxes respectively (the first blurred box and the first original box cover the same subject, as do the second blurred box and the second original box). The label of the first blurred box becomes the label of the first original box, and likewise for the second. Finally, an image detection model is trained on the original image, the two original boxes, and their labels, where the image detection model is used to detect at least one subject in an image. A sketch of this sample construction follows.
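A minimal sketch of this sample construction, reusing the hypothetical AnnotationResult structure from the earlier sketch and PIL for the image type; since blurring leaves the image size unchanged, the box coordinates transfer verbatim:

```python
from PIL import Image

def build_detection_sample(original: Image.Image, result: "AnnotationResult"):
    # Blurring does not change the image size, so each blurred box's
    # coordinates can be reused directly as an original annotation box.
    boxes = [ann.box for ann in result.boxes]
    labels = [ann.label for ann in result.boxes]
    return original, boxes, labels  # one training sample for the detector
```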
In another specific example, the original image a1_y and the annotation result of the blurred image a1_m are again grouped into a sample group according to their identifiers, the annotation result containing the positions and labels of a first and a second blurred annotation box. A first and a second partial image are then cropped from the original image a1_y at the positions of the first and second blurred boxes respectively (each blurred box and its partial image cover the same subject). The label of the first blurred box becomes the label of the first partial image, and that of the second blurred box the label of the second partial image. Finally, an image classification model is trained on the two partial images and their labels; the classification model determines the category an image belongs to, for example traffic light, road sign, or pedestrian. A sketch of the cropping step follows.
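A sketch of the cropping step under the same assumptions, using PIL's Image.crop, which takes exactly the (xmin, ymin, xmax, ymax) box described above:

```python
from PIL import Image

def build_classification_samples(original: Image.Image, result: "AnnotationResult"):
    # One crop per blurred annotation box; each (crop, label) pair is a
    # classification training sample.
    samples = []
    for ann in result.boxes:
        crop = original.crop(ann.box)  # PIL crops with (xmin, ymin, xmax, ymax)
        samples.append((crop, ann.label))
    return samples
```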
According to the technical solution of this embodiment, the annotation result of the blurred image and the original image are acquired, the association between them is determined, and the target network model is trained on the annotation result, the original image, and the association. Annotation is performed on the blurred image while training is ultimately performed on the original image, so privacy leakage during annotation is avoided without affecting model training.
Fig. 2 is a schematic diagram of a model training method in an embodiment of the present disclosure. It refines the above embodiment, giving concrete steps for determining the association between the annotation result and the original image and for training the target network model on the annotation result, the original image, and the association. The method comprises the following steps:
S210: acquire the annotation result of the blurred image, where the blurred image is obtained by blurring the original image.
S220: acquire the original image.
S230: extract the blurred-image identifier from the annotation result.
The blurred-image identifier uniquely identifies a blurred image and can be used to determine the association between the blurred image and the original image. Illustratively, the blurred-image identifier is a1_m and the corresponding original-image identifier is a1_y; their first two characters are the same, indicating that the blurred image a1_m was obtained by blurring the original image a1_y. In these identifiers the last character encodes the image type: m marks a blurred image and y an original image.
In this embodiment, when the annotation device sends the annotation result of the blurred image, the result carries the blurred-image identifier. To determine the association between the original image and the annotation result, the model training device first extracts that identifier from the result, for example a1_m.
S240: extract the original-image identifier from the original image.
The original-image identifier uniquely identifies an original image and is likewise used to determine the association between the blurred image and the original image. Illustratively, the original-image identifier is a1_y and the blurred-image identifier a1_m; their first two characters are the same, so the two identifiers match, i.e. the blurred image a1_m was obtained by blurring the original image a1_y.
S250: determine the association between the annotation result and the original image according to the blurred-image identifier and the original-image identifier.
After the blurred-image and original-image identifiers are extracted, they are compared pairwise to determine the association between each annotation result and its original image. Specifically, the blurred-image identifiers carried by the annotation results may be taken as the current identifier in acquisition order, and the current identifier compared with each original-image identifier in turn. Based on the resulting association, the annotation result can be mapped from the blurred image onto the original image, so the annotation of the original image is obtained without the annotation device ever seeing the original image, improving the security of the private information it contains.
In a specific example, the blurred-image identifiers extracted from three sets of annotation results are a1_m, a2_m, and a3_m, and the identifiers extracted from three acquired original images are a3_y, a1_y, and a2_y. The identifier a1_m is compared with each original-image identifier in acquisition order until the identifier a1_y with the same first two characters is matched. Likewise a2_m and a3_m are compared with the original-image identifiers in turn, yielding the pairwise matches, from which the association between each annotation result and its original image is determined. A sketch of this pairing follows.
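A sketch of this pairing, reusing the hypothetical parse_image_id helper from the earlier sketch; a dictionary keyed by identifier prefix avoids re-scanning the original list for every annotation result:

```python
def pair_results_with_originals(results: dict, originals: dict) -> list:
    # `results`  : blurred id  -> AnnotationResult, e.g. {"a1_m": ...}
    # `originals`: original id -> image,            e.g. {"a3_y": ...}
    by_prefix = {parse_image_id(oid)[0]: img for oid, img in originals.items()}
    pairs = []
    for bid, result in results.items():
        prefix, _ = parse_image_id(bid)
        if prefix in by_prefix:  # shared prefix => associated pair
            pairs.append((result, by_prefix[prefix]))
    return pairs
```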
S260: draw original annotation boxes in the associated original image according to the positions of the blurred annotation boxes in the annotation result.
After the association between the annotation result and the original image has been determined, original annotation boxes can be drawn in the associated original image at the positions of the blurred annotation boxes in the annotation result, and the image carrying the original boxes serves as a model training sample. The original boxes in the original image correspond one to one to the blurred boxes in the annotation result.
In a specific example, the annotation result contains blurred annotation boxes A and B. Box A sits at (x1min, y1min, x1max, y1max) in the blurred image and is labeled "pedestrian"; box B sits at (x2min, y2min, x2max, y2max) and is labeled "road sign". Original box A can be drawn in the original image from box A's position information, at exactly (x1min, y1min, x1max, y1max); likewise original box B is drawn at (x2min, y2min, x2max, y2max). Notably, since the blurred image is obtained by blurring the original image and blurring does not change the image size, each original box can be drawn directly at the position its blurred box occupies in the blurred image.
S270: use the label of each blurred annotation box in the annotation result as the label of the associated original annotation box.
Model training needs labels for the samples as well as the samples themselves. Therefore, after the original annotation boxes are drawn in the original image, the label of each blurred box in the annotation result is used as the label of the associated original box. Determining the positions and labels of the original boxes in this way maps the blurred image's annotation onto its original image, preserving the training result while avoiding privacy leakage from the original image during annotation.
Illustratively, the annotation result contains blurred annotation boxes A and B, labeled "pedestrian" and "road sign" respectively. Original box A is drawn in the original image from box A's position, and its label is likewise set to "pedestrian"; original box B is drawn from box B's position, and its label is set to "road sign".
S280: train the target network model based on the original image, the original annotation boxes, and their labels, where the target network model is an image detection model.
In this embodiment, the model training device finally trains the target network model on the original image, the original annotation boxes it contains, and each box's label. Throughout training the annotation device never needs the original image, so leakage of private information during annotation is avoided while training proceeds normally. Here the target network model is an image detection model; once trained it can detect one or more target subjects in an image, such as pedestrians, street lamps, vehicles, or road signs.
Optionally, the original image and the blurred image are produced by the acquisition device, the annotation result of the blurred image is produced by the annotation device, and the target network model is trained by the model training device.
In this optional embodiment, the acquisition device captures the original image and blurs it to obtain the blurred image; the annotation device produces the annotation result of the blurred image; and the model training device trains the target network model. Under this scheme only the acquisition device and the model training device ever hold the original image; the annotation device annotates the blurred image, so the original image's private information is not exposed during annotation.
According to the technical solution of this embodiment of the present disclosure, the annotation result of the blurred image and the original image are acquired; the blurred-image identifier and the original-image identifier are then extracted, and the association between the annotation result and the original image is determined from the two identifiers; original annotation boxes are drawn in the associated original image at the positions of the blurred annotation boxes, with each blurred box's label used as the associated original box's label; finally the target network model is trained on the original image, the original boxes, and their labels. This method not only avoids leakage of private information during image annotation but also meets model training's need for annotations on the original image.
Fig. 3 is a schematic diagram of a model training method in an embodiment of the present disclosure. It refines the above embodiment, giving concrete steps for training the target network model on the annotation result, the original image, and the association. The method comprises the following steps:
S310: acquire the annotation result of the blurred image, where the blurred image is obtained by blurring the original image.
S320: acquire the original image.
S330: determine the association between the annotation result and the original image.
S340: select, from the associated original image, the partial image matching each blurred annotation box according to the box's position in the annotation result.
This embodiment provides another model training method: after the association between the annotation result and the original image has been determined, the partial image matching each blurred annotation box can be taken from the associated original image according to the box's position in the annotation result. Specifically, the partial image corresponding to each blurred box is cropped from the original image at the box's position and used as a training sample.
In a specific example, the annotation result contains blurred annotation boxes A and B. Box A sits at (x1min, y1min, x1max, y1max) in the blurred image and is labeled "pedestrian"; box B sits at (x2min, y2min, x2max, y2max) and is labeled "road sign". Partial image A can be cropped from the original image at box A's position (x1min, y1min, x1max, y1max); likewise partial image B is cropped at (x2min, y2min, x2max, y2max). Partial images A and B then serve as training samples.
S350: use the label of each blurred annotation box in the annotation result as the label of the corresponding partial image.
Model training needs labels as well as samples. Therefore, after the partial images are cropped from the original image, the label of each blurred box in the annotation result is used as the label of the associated partial image. Determining the partial images and their labels in this way maps the blurred image's annotation onto its original image, preserving the training result while avoiding leakage of the original image's private content.
Illustratively, the annotation result contains blurred annotation boxes A and B, labeled "pedestrian" and "road sign" respectively. Partial image A is cropped from the original image at box A's position and labeled "pedestrian"; partial image B is cropped at box B's position and labeled "road sign".
S360: train the target network model based on the partial images and their labels, where the target network model is an image classification model.
In this embodiment, the model training device finally uses the partial images cropped from the original image as training samples and trains the target network model with their labels. During training the blurred image's annotation is mapped onto the original image, indirectly yielding the original image's annotation, reducing how often the original image circulates and preventing its private information from leaking during annotation. Here the target network model is an image classification model; once trained it determines the category an image belongs to.
According to the technical solution of this embodiment, the annotation result of the blurred image and the original image are acquired and their association determined; the partial image matching each blurred annotation box is then selected from the associated original image according to the box's position, with the box's label used as the partial image's label; finally the target network model is trained on the partial images and their labels. The annotators at the annotation device thus work on blurred images, so leakage of private information is avoided while the training result is preserved.
Fig. 4 is a flowchart of a model training method disclosed in an embodiment of the present disclosure, applicable to scenarios where image annotation is performed on blurred images. The method of this embodiment may be executed by a model training apparatus, which may be implemented in software and/or hardware and is configured in an electronic device with sufficient computing capability; the electronic device may be a client device or a server device, where the client device may be a mobile phone, a tablet computer, a vehicle-mounted terminal, a desktop computer, or the like.
S410: blur the original image to obtain the blurred image.
In this embodiment, the image acquisition device can capture the original image according to an acquisition instruction. To avoid leaking the private information in the original image, the original image is first blurred to obtain a blurred image, which is then sent to the annotation device for image annotation.
Blurring the original image includes at least one of the following (a sketch of all three appears after this list):
blurring the original image with a Gaussian blur algorithm;
blurring the original image with a box blur algorithm;
blurring the original image with a dual blur algorithm.
In this embodiment, the original image may be blurred with a Gaussian blur, box blur, or dual blur algorithm to obtain the blurred image. Performing the blurring inside the image acquisition device avoids privacy leakage at the annotation device during annotation. Moreover, because the blurred image is annotated, the annotation result can ultimately be mapped directly onto the original image, which serves as the training sample, preserving the training result.
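A sketch of the three blurring options using Pillow; GaussianBlur and BoxBlur are real Pillow filters, while the dual-blur variant below (blur at reduced resolution, upsample, blur again) is only an assumed interpretation, since the patent names the algorithm without detailing it. All three keep the image size unchanged, which the annotation-mapping step relies on:

```python
from PIL import Image, ImageFilter

def gaussian_blur(img: Image.Image, radius: float = 8.0) -> Image.Image:
    return img.filter(ImageFilter.GaussianBlur(radius))

def box_blur(img: Image.Image, radius: int = 8) -> Image.Image:
    return img.filter(ImageFilter.BoxBlur(radius))

def dual_blur(img: Image.Image, radius: float = 4.0, scale: int = 2) -> Image.Image:
    # Assumed variant: blur at reduced resolution, upsample, blur again.
    w, h = img.size
    small = img.resize((w // scale, h // scale)).filter(
        ImageFilter.GaussianBlur(radius))
    return small.resize((w, h)).filter(ImageFilter.GaussianBlur(radius))

# e.g. blur an original image before handing it to the annotation device
blurred = gaussian_blur(Image.open("original.jpg"))  # "original.jpg" is illustrative
blurred.save("blurred.jpg")
```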
S420: send the blurred image to the annotation device, instructing the annotation device to annotate the blurred image and obtain the annotation result of the blurred image.
In this embodiment, the blurred image is sent to the annotation device to instruct it to annotate the blurred image and produce the annotation result. Because the blurred image is obtained by blurring the original image and the two have the same size, the blurred image's annotation result can be mapped onto the original image to yield the original image's annotation. Annotating this way never sends the original image to the annotation device, so private information is not leaked during annotation, while the model training device is still guaranteed the original image's annotation and the training result is preserved.
S430: send the original image to the model training device, instructing the model training device to: determine the association between the annotation result and the original image; and train the target network model based on the annotation result, the original image, and the association.
In this embodiment, to guarantee the training result, the original image is sent to the model training device to instruct it to determine the association between the annotation result and the original image and then train the target network model on the annotation result, the original image, and the association. In this way the annotation device cannot expose the original image's private information while the training result is guaranteed.
According to the technical solution of this embodiment, the image acquisition device first blurs the captured original image to obtain the blurred image, then sends the blurred image to the annotation device, instructing it to annotate the blurred image and produce the annotation result, and finally sends the original image to the model training device, instructing it to train the model. On one hand the annotation device annotates the blurred image, avoiding leakage of the original image's private information; on the other hand the original image is sent to the model training device, which guarantees the training result.
Fig. 5 is a diagram of a model training scenario disclosed in an embodiment of the present disclosure.
S510: the image acquisition device captures the original image and blurs it to obtain the blurred image.
S520: the image acquisition device sends the blurred image to the annotation device and the original image to the model training device.
S530: the annotation device annotates the blurred image, obtains the annotation result, and sends it to the model training device.
S540: the model training device receives the annotation result of the blurred image from the annotation device and the original image from the image acquisition device.
S550: the model training device determines the association between the annotation result and the original image.
S560: the model training device trains the target network model based on the annotation result, the original image, and the association.
Sending the blurred image to the annotation device, which annotates on its basis, avoids leakage of private information during annotation; sending the annotation result and the original image to the model training device guarantees the training result.
According to an embodiment of the present disclosure, Fig. 6 is a structural diagram of a model training apparatus, applicable to scenarios where image annotation is performed on blurred images. The apparatus is implemented in software and/or hardware and is configured in an electronic device with sufficient computing capability.
The model training apparatus 600 shown in Fig. 6 comprises an annotation result acquisition module 610, an original image acquisition module 620, an association determining module 630, and a model training module 640, wherein:
the annotation result acquisition module 610 is configured to acquire the annotation result of the blurred image, where the blurred image is obtained by blurring an original image;
the original image acquisition module 620 is configured to acquire the original image;
the association determining module 630 is configured to determine the association between the annotation result and the original image;
and the model training module 640 is configured to train the target network model based on the annotation result, the original image, and the association.
According to the technical solution of this embodiment, the annotation result of the blurred image and the original image are acquired, the association between them is determined, and the target network model is trained on the annotation result, the original image, and the association. Annotation is performed on the blurred image while training is ultimately performed on the original image, so privacy leakage during annotation is avoided without affecting model training.
Further, the association determining module 630 includes:
a blurred-image identifier acquisition unit, configured to extract the blurred-image identifier from the annotation result;
an original-image identifier acquisition unit, configured to extract the original-image identifier from the original image;
and an association determining unit, configured to determine the association between the annotation result and the original image according to the blurred-image identifier and the original-image identifier.
Further, the model training module 640 includes:
an original annotation box marking unit, configured to draw original annotation boxes in the associated original image according to the positions of the blurred annotation boxes in the annotation result;
an original label determining unit, configured to use the label of each blurred annotation box in the annotation result as the label of the associated original annotation box;
and a detection model training unit, configured to train the target network model based on the original image, the original annotation boxes, and their labels, where the target network model is an image detection model.
Alternatively, the model training module 640 includes:
a partial image acquisition unit, configured to select, from the associated original image, the partial image matching each blurred annotation box according to the box's position in the annotation result;
a partial label determining unit, configured to use the label of each blurred annotation box in the annotation result as the label of the corresponding partial image;
and a classification model training unit, configured to train the target network model based on the partial images and their labels, where the target network model is an image classification model.
Further, the original image and the blurred image are produced by the acquisition device; the annotation result of the blurred image is produced by the annotation device; and the target network model is trained by the model training device.
The model training apparatus provided by this embodiment of the disclosure can execute the model training method provided by any embodiment of the disclosure, with functional modules and beneficial effects corresponding to that method.
According to an embodiment of the present disclosure, Fig. 7 is a structural diagram of a model training apparatus, applicable to scenarios where image annotation is performed on blurred images. The apparatus is implemented in software and/or hardware and is configured in an electronic device with sufficient computing capability.
The model training apparatus 700 shown in Fig. 7 comprises a blurred image acquisition module 710, a blurred image sending module 720, and an original image sending module 730, wherein:
the blurred image acquisition module 710 is configured to blur the original image to obtain the blurred image;
the blurred image sending module 720 is configured to send the blurred image to the annotation device and instruct it to annotate the blurred image and obtain the annotation result of the blurred image;
and the original image sending module 730 is configured to send the original image to the model training device and instruct it to: determine the association between the annotation result and the original image; and train the target network model based on the annotation result, the original image, and the association.
According to the technical solution of this embodiment, the image acquisition device first blurs the captured original image to obtain the blurred image, then sends the blurred image to the annotation device, instructing it to annotate the blurred image and produce the annotation result, and finally sends the original image to the model training device, instructing it to train the model. On one hand the annotation device annotates the blurred image, avoiding leakage of the original image's private information; on the other hand the original image is sent to the model training device, which guarantees the training result.
Further, the blurred image acquisition module 710 is configured to perform at least one of:
blurring the original image with a Gaussian blur algorithm;
blurring the original image with a box blur algorithm;
blurring the original image with a dual blur algorithm.
The model training apparatus provided by this embodiment of the disclosure can execute the model training method provided by any embodiment of the disclosure, with functional modules and beneficial effects corresponding to that method.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of users' personal information all comply with the relevant laws and regulations and do not violate public order and good morals.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium, and a computer program product.
FIG. 8 illustrates a schematic block diagram of an example electronic device 800 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in Fig. 8, the device 800 includes a computing unit 801, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 802 or loaded from a storage unit 808 into a random access memory (RAM) 803. The RAM 803 can also store the various programs and data required for the operation of the device 800. The computing unit 801, the ROM 802, and the RAM 803 are connected to one another by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
A number of components in the device 800 are connected to the I/O interface 805, including: an input unit 806, such as a keyboard, a mouse, or the like; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, or the like; and a communication unit 809 such as a network card, modem, wireless communication transceiver, etc. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 801 performs the methods and processes described above, such as the model training method. For example, in some embodiments, the model training method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 800 via the ROM 802 and/or the communication unit 809. When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the model training method described above can be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the model training method in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special- or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that steps may be reordered, added, or deleted in the various forms of flows shown above. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved; the present disclosure is not limited in this respect.
The above detailed description should not be construed as limiting the scope of the present disclosure. Those skilled in the art should understand that various modifications, combinations, sub-combinations, and substitutions may be made according to design requirements and other factors. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present disclosure shall be included in the scope of protection of the present disclosure.

Claims (17)

1. A model training method, comprising:
acquiring a labeling result of a blurred image, wherein the blurred image is obtained by blurring an original image;
acquiring the original image;
determining an association relationship between the labeling result and the original image; and
training a target network model based on the labeling result, the original image, and the association relationship.
2. The method of claim 1, wherein determining the association relationship between the labeling result and the original image comprises:
extracting a blurred image identifier from the labeling result;
extracting an original image identifier from the original image; and
determining the association relationship between the labeling result and the original image according to the blurred image identifier and the original image identifier.
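By way of illustration, the identifier matching of claim 2 could look like the following minimal Python sketch. It assumes the blurred image inherits the original's filename stem as its identifier; the data layout and helper names are hypothetical, not part of the claimed method.

```python
from pathlib import Path

def associate(labeling_results, original_paths):
    """Pair each labeling result with its original image via shared identifiers."""
    # Index originals by identifier -- here, simply the filename stem.
    originals = {Path(p).stem: p for p in original_paths}

    pairs = []
    for result in labeling_results:
        blur_id = result["image_id"]        # blurred image identifier from the labeling result
        original = originals.get(blur_id)   # original image identifier from the original image
        if original is not None:
            pairs.append((result, original))
    return pairs
```

For example, `associate([{"image_id": "frame_0001", "boxes": []}], ["data/frame_0001.jpg"])` would yield one (labeling result, original image) pair.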
3. The method of claim 1, wherein training the target network model based on the labeling result, the original image, and the association relationship comprises:
marking an original labeling box in the associated original image according to the position of a blurred labeling box in the labeling result;
taking the label of the blurred labeling box in the labeling result as the label of the associated original labeling box; and
training the target network model based on the original image, the original labeling box, and the label of the original labeling box, wherein the target network model is an image detection model.
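Since blurring leaves pixel geometry unchanged, a labeling box drawn on the blurred image lands at exactly the same coordinates on the original. A minimal sketch of the transfer in claim 3, assuming blurred and original images share the same resolution (all names are illustrative):

```python
def detection_sample(labeling_result, original_image):
    """Copy blurred-image boxes and their labels onto the associated original image."""
    return {
        "image": original_image,                    # sharp pixels for training
        "boxes": list(labeling_result["boxes"]),    # coordinates transfer one-to-one
        "labels": list(labeling_result["labels"]),  # labels reused from the blurred boxes
    }
```

Samples assembled this way can then be fed to any standard detector training loop.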
4. The method of claim 1, wherein training the target network model based on the labeling result, the original image, and the association relationship comprises:
selecting, from the associated original image, a local image matching a blurred labeling box according to the position of the blurred labeling box in the labeling result;
taking the label of the blurred labeling box in the labeling result as the label of the local image; and
training the target network model based on the local image and the label of the local image, wherein the target network model is an image classification model.
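The classification variant of claim 4 instead crops one local image per blurred labeling box. A sketch assuming NumPy-style H×W×C image arrays and integer (x1, y1, x2, y2) boxes, which is an assumed convention:

```python
def classification_samples(labeling_result, original_image):
    """Yield one (local image, label) pair per blurred labeling box."""
    for (x1, y1, x2, y2), label in zip(labeling_result["boxes"],
                                       labeling_result["labels"]):
        yield original_image[y1:y2, x1:x2], label  # crop from the sharp original
```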
5. The method of claim 1, wherein the original image and the blurred image are determined by an acquisition device, the labeling result of the blurred image is determined by a labeling device, and the target network model is trained by a model training device.
6. A model training method, comprising:
blurring an original image to obtain a blurred image;
sending the blurred image to a labeling device, so as to instruct the labeling device to label the blurred image and obtain a labeling result of the blurred image; and
sending the original image to a model training device, so as to instruct the model training device to perform the following: determining an association relationship between the labeling result and the original image; and training a target network model based on the labeling result, the original image, and the association relationship.
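A sketch of the acquisition-device flow of claim 6. The transport callables stand in for whatever channel (upload, message queue, RPC) a deployment uses; they, like the parameter names, are assumptions for illustration:

```python
def dispatch(original_image, image_id, blur_fn,
             send_to_labeling_device, send_to_training_device):
    """Annotators only ever receive the blurred copy; the trainer gets the original."""
    blurred = blur_fn(original_image)  # e.g. one of the blur variants of claim 7
    send_to_labeling_device({"image_id": image_id, "image": blurred})
    send_to_training_device({"image_id": image_id, "image": original_image})
```

The shared `image_id` is what later lets the model training device reconstruct the association described in claim 2.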
7. The method of claim 6, wherein blurring the original image comprises at least one of:
blurring the original image using a Gaussian blur algorithm;
blurring the original image using a box blur algorithm; and
blurring the original image using a dual blur algorithm.
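For reference, the three blur variants can be realized with OpenCV roughly as below. "Dual blur" is read here as two blur passes around a downscale/upscale step, which is one common interpretation and an assumption on our part; the kernel sizes are arbitrary illustrative defaults.

```python
import cv2

def gaussian_blur(img, ksize=25):
    # Gaussian blur; the kernel size must be odd, and sigma is derived from it when 0.
    return cv2.GaussianBlur(img, (ksize, ksize), 0)

def box_blur(img, ksize=25):
    # Box (mean) blur: each pixel becomes the mean of its ksize x ksize window.
    return cv2.blur(img, (ksize, ksize))

def dual_blur(img, ksize=15, scale=0.5):
    # Dual blur: blur at reduced resolution, upsample, then blur again.
    h, w = img.shape[:2]
    small = cv2.resize(img, (int(w * scale), int(h * scale)))
    small = cv2.GaussianBlur(small, (ksize, ksize), 0)
    return cv2.GaussianBlur(cv2.resize(small, (w, h)), (ksize, ksize), 0)
```

Whichever variant is chosen, the blur strength must be high enough that faces, license plates, and similar private details become unrecognizable while object outlines remain clear enough to label.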
8. A model training apparatus, comprising:
a labeling result acquisition module, configured to acquire a labeling result of a blurred image, wherein the blurred image is obtained by blurring an original image;
an original image acquisition module, configured to acquire the original image;
an association determination module, configured to determine an association relationship between the labeling result and the original image; and
a model training module, configured to train a target network model based on the labeling result, the original image, and the association relationship.
9. The apparatus of claim 8, wherein the association determination module comprises:
a blurred image identifier acquisition unit, configured to extract a blurred image identifier from the labeling result;
an original image identifier acquisition unit, configured to extract an original image identifier from the original image; and
an association determination unit, configured to determine the association relationship between the labeling result and the original image according to the blurred image identifier and the original image identifier.
10. The apparatus of claim 8, wherein the model training module comprises:
an original labeling box marking unit, configured to mark an original labeling box in the associated original image according to the position of a blurred labeling box in the labeling result;
an original label determination unit, configured to take the label of the blurred labeling box in the labeling result as the label of the associated original labeling box; and
a detection model training unit, configured to train the target network model based on the original image, the original labeling box, and the label of the original labeling box, wherein the target network model is an image detection model.
11. The apparatus of claim 8, wherein the model training module comprises:
a local image acquisition unit, configured to select, from the associated original image, a local image matching a blurred labeling box according to the position of the blurred labeling box in the labeling result;
a local label determination unit, configured to take the label of the blurred labeling box in the labeling result as the label of the local image; and
a classification model training unit, configured to train the target network model based on the local image and the label of the local image, wherein the target network model is an image classification model.
12. The apparatus of claim 8, wherein the original image and the blurred image are determined by an acquisition device, the labeling result of the blurred image is determined by a labeling device, and the target network model is trained by a model training device.
13. A model training apparatus, comprising:
a blurred image acquisition module, configured to blur an original image to obtain a blurred image;
a blurred image sending module, configured to send the blurred image to a labeling device, so as to instruct the labeling device to label the blurred image and obtain a labeling result of the blurred image; and
an original image sending module, configured to send the original image to a model training device, so as to instruct the model training device to perform the following: determining an association relationship between the labeling result and the original image; and training a target network model based on the labeling result, the original image, and the association relationship.
14. The apparatus of claim 13, wherein the blurred image acquisition module is configured to perform at least one of:
blurring the original image using a Gaussian blur algorithm;
blurring the original image using a box blur algorithm; and
blurring the original image using a dual blur algorithm.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor, wherein
the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, enable the at least one processor to perform the model training method according to any one of claims 1-7.
16. A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are configured to cause a computer to perform the model training method according to any one of claims 1-7.
17. A computer program product comprising a computer program/instructions which, when executed by a processor, implement the model training method according to any one of claims 1-7.
CN202111627016.1A 2021-12-28 2021-12-28 Model training method, device, apparatus, medium, and program product Withdrawn CN114330543A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111627016.1A CN114330543A (en) 2021-12-28 2021-12-28 Model training method, device, apparatus, medium, and program product

Publications (1)

Publication Number Publication Date
CN114330543A true CN114330543A (en) 2022-04-12

Family

ID=81015517

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111627016.1A Withdrawn CN114330543A (en) 2021-12-28 2021-12-28 Model training method, device, apparatus, medium, and program product

Country Status (1)

Country Link
CN (1) CN114330543A (en)

Similar Documents

Publication Publication Date Title
CN113378833B (en) Image recognition model training method, image recognition device and electronic equipment
CN113191256B (en) Training method and device of lane line detection model, electronic equipment and storage medium
CN112966599B (en) Training method of key point recognition model, key point recognition method and device
CN113159091A (en) Data processing method and device, electronic equipment and storage medium
CN113591580B (en) Image annotation method and device, electronic equipment and storage medium
CN115359471A (en) Image processing and joint detection model training method, device, equipment and storage medium
CN113963186A (en) Training method of target detection model, target detection method and related device
CN114238790A (en) Method, apparatus, device and storage medium for determining maximum perception range
CN114419493A (en) Image annotation method and device, electronic equipment and storage medium
CN114724113B (en) Road sign recognition method, automatic driving method, device and equipment
CN113344121B (en) Method for training a sign classification model and sign classification
US20220390249A1 (en) Method and apparatus for generating direction identifying model, device, medium, and program product
CN113869317A (en) License plate recognition method and device, electronic equipment and storage medium
CN114429631B (en) Three-dimensional object detection method, device, equipment and storage medium
CN114330543A (en) Model training method, device, apparatus, medium, and program product
CN115761698A (en) Target detection method, device, equipment and storage medium
CN113627526A (en) Vehicle identification recognition method and device, electronic equipment and medium
CN113033431A (en) Optical character recognition model training and recognition method, device, equipment and medium
CN113807236B (en) Method, device, equipment, storage medium and program product for lane line detection
CN113361524B (en) Image processing method and device
CN114299522B (en) Image recognition method device, apparatus and storage medium
CN114092874B (en) Training method of target detection model, target detection method and related equipment thereof
CN114529768B (en) Method, device, electronic equipment and storage medium for determining object category
CN113963322B (en) Detection model training method and device and electronic equipment
CN117615363B (en) Method, device and equipment for analyzing personnel in target vehicle based on signaling data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication
Application publication date: 20220412