CN113010721A - Picture auditing method and device, electronic equipment and storage medium - Google Patents

Picture auditing method and device, electronic equipment and storage medium

Info

Publication number
CN113010721A
Authority
CN
China
Prior art keywords
model
risk
picture
sub
auditing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110287468.3A
Other languages
Chinese (zh)
Other versions
CN113010721B (en)
Inventor
邓远达
张言
胡旭
刘星
梁晓旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110287468.3A priority Critical patent/CN113010721B/en
Publication of CN113010721A publication Critical patent/CN113010721A/en
Application granted granted Critical
Publication of CN113010721B publication Critical patent/CN113010721B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/5846Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using extracted text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0635Risk analysis of enterprise or organisation activities

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Library & Information Science (AREA)
  • Human Resources & Organizations (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Economics (AREA)
  • General Engineering & Computer Science (AREA)
  • Game Theory and Decision Science (AREA)
  • Educational Administration (AREA)
  • Development Economics (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The disclosure provides a picture auditing method and apparatus, an electronic device, and a storage medium, relating to the field of artificial intelligence and in particular to computer vision and deep learning. The specific implementation scheme is as follows: inputting a picture to be audited into each general sub-model in a general audit model to obtain the audit result output by each general sub-model; determining, according to those audit results, at least one risk sub-model in a risk audit model that corresponds to the picture to be audited; inputting the picture to be audited into each of the at least one risk sub-model to obtain the audit result output by each risk sub-model; and determining the final audit result of the picture to be audited based on the audit results output by the risk sub-models. Embodiments of the application can effectively improve the efficiency of picture auditing while ensuring its accuracy.

Description

Picture auditing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, further relates to computer vision and deep learning, and in particular to a picture auditing method and apparatus, an electronic device, and a storage medium.
Background
With the development of science and technology, the mainstream approach in risk-control work such as picture content auditing has become auditing pictures with Artificial Intelligence (AI) computer vision models. However, the capability of a single computer vision model is often very limited, so in actual work the task is split across multiple models according to factors such as risk type, with each model identifying only part of the risks.
With the picture auditing methods of the prior art, auditing efficiency and accuracy cannot be achieved simultaneously.
Disclosure of Invention
The application provides a picture auditing method and device, electronic equipment and a storage medium, which can effectively improve the picture auditing efficiency and ensure the picture auditing accuracy.
According to a first aspect of the present application, there is provided a picture auditing method, the method including:
inputting the picture to be audited into each general submodel in the general audit model to obtain the audit result output by each general submodel; wherein the universal audit model comprises at least one universal submodel;
determining at least one risk sub-model corresponding to the picture to be audited in the risk audit model according to the audit result output by each general sub-model; wherein the risk auditing model comprises at least one risk sub-model;
inputting the picture to be audited into each risk submodel of the at least one risk submodel to obtain an audit result output by each risk submodel;
and determining the final audit result of the picture to be audited based on the audit result output by each risk sub-model.
According to a second aspect of the present application, there is provided a picture auditing apparatus, the apparatus including: a general auditing module, a determining module, a risk auditing module, and a judging module; wherein,
the universal auditing module is used for inputting the picture to be audited into each universal submodel in the universal auditing model to obtain the auditing result output by each universal submodel; wherein the universal audit model comprises at least one universal submodel;
the determining module is used for determining at least one risk sub-model corresponding to the picture to be audited in the risk audit model according to the audit result output by each general sub-model; wherein the risk auditing model comprises at least one risk sub-model;
the risk auditing module is used for inputting the picture to be audited into each risk submodel of the at least one risk submodel to obtain auditing results output by each risk submodel;
and the judging module is used for determining the final audit result of the picture to be audited based on the audit results output by each risk sub-model.
According to a third aspect of the present application, there is provided an electronic device comprising:
one or more processors;
a memory for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors implement the picture auditing method according to any embodiment of the present application.
According to a fourth aspect of the present application, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements a picture auditing method according to any embodiment of the present application.
According to a fifth aspect of the present application, there is provided a computer program product, which when executed by a computer device implements the picture auditing method according to any embodiment of the present application.
According to the technical scheme provided by the application, the picture auditing efficiency can be effectively improved, and meanwhile, the picture auditing accuracy can be ensured.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1 is a first flowchart of a picture auditing method according to an embodiment of the present application;
fig. 2 is a second flow chart of the picture auditing method provided in the embodiment of the present application;
fig. 3 is a third flow chart of a picture auditing method provided in the embodiment of the present application;
FIG. 4 is a schematic structural diagram of a picture auditing framework provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of a picture auditing apparatus according to an embodiment of the present application;
fig. 6 is a block diagram of an electronic device for implementing a picture auditing method according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Example one
Fig. 1 is a first flowchart of a picture auditing method provided in an embodiment of the present application, where the method may be executed by a picture auditing apparatus or an electronic device, where the apparatus or the electronic device may be implemented by software and/or hardware, and the apparatus or the electronic device may be integrated in any intelligent device with a network communication function. As shown in fig. 1, the picture auditing method may include the following steps:
s101, inputting a picture to be audited into each general sub-model in the general audit model to obtain audit results output by each general sub-model; wherein the universal audit model comprises at least one universal submodel.
In this step, the electronic device may input the picture to be audited into each general sub-model in the general audit model to obtain the audit result output by each general sub-model; the general audit model comprises at least one general sub-model. Specifically, suppose the general audit model includes N general sub-models: general sub-model 1, general sub-model 2, …, general sub-model N, where N is a natural number greater than or equal to 1. The electronic device may input the picture to be audited into general sub-model 1 through general sub-model N respectively, obtaining the audit result output by each of them.
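The fan-out over general sub-models described above can be sketched as follows. This is a hypothetical illustration only: the sub-model names are assumptions, and each model is stubbed to key off marker bytes rather than run real computer vision.

```python
from typing import Callable, Dict

# Each general sub-model maps a picture to a set of bottom-layer labels.
# These stub models detect marker bytes purely for illustration.
GENERAL_SUBMODELS: Dict[str, Callable[[bytes], set]] = {
    "cartoon_vs_real": lambda pic: {"cartoon"} if b"toon" in pic else {"real"},
    "person_detect":   lambda pic: {"person"} if b"face" in pic else set(),
    "text_detect":     lambda pic: {"text"} if b"txt" in pic else set(),
}

def run_general_models(picture: bytes) -> Dict[str, set]:
    """Run every general sub-model on the picture; return {model: labels}."""
    return {name: model(picture) for name, model in GENERAL_SUBMODELS.items()}
```

In a real system each entry would wrap a trained detection or classification network; the dictionary shape makes adding or removing a general sub-model a one-line change.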
S102, determining at least one risk sub-model corresponding to the picture to be audited in the risk audit model according to the audit result output by each general sub-model; wherein the risk auditing model comprises at least one risk sub-model.
In this step, the electronic device may determine, according to the audit result output by each general sub-model, at least one risk sub-model corresponding to the picture to be audited in the risk audit model; the risk audit model comprises at least one risk sub-model. Specifically, the general audit model in the embodiment of the present application includes, but is not limited to, at least one of the following general sub-models: a cartoon/real-picture sub-model, a person detection sub-model, a human-body attribute sub-model, and a text recognition sub-model.
S103, inputting the picture to be audited into each risk sub-model of the at least one risk sub-model, and obtaining the auditing result output by each risk sub-model.
In this step, the electronic device may input the picture to be audited into each risk sub-model of the at least one risk sub-model to obtain the audit result output by each risk sub-model. Specifically, suppose M risk sub-models in the risk audit model correspond to the picture to be audited: risk sub-model 1, risk sub-model 2, …, risk sub-model M, where M is a natural number greater than or equal to 1. The electronic device may input the picture to be audited into risk sub-model 1 through risk sub-model M respectively, obtaining the audit result output by each of them.
And S104, determining the final auditing result of the picture to be audited based on the auditing results output by the risk sub-models.
In this step, the electronic device may determine a final audit result of the picture to be audited based on the audit result output by each risk sub-model. Specifically, if at least two audit results in the audit results output by each risk sub-model meet the preset merging condition, the electronic device may merge at least two audit results in the audit results output by each risk sub-model to obtain a merged audit result; and then, based on the combined audit result, determining a final audit result of the picture to be audited.
According to the picture auditing method provided by the embodiment of the application, a picture to be audited is first input into each general sub-model in a general audit model to obtain the audit result output by each general sub-model; then at least one risk sub-model corresponding to the picture in the risk audit model is determined according to those audit results; the picture is input into each of the at least one risk sub-model to obtain the audit result output by each risk sub-model; and finally the final audit result of the picture is determined based on the audit results output by the risk sub-models. That is to say, the application audits pictures with a hierarchically structured model, taking the general audit model as the bottom-layer model and the risk audit model as the high-layer model. Through this hierarchical design, the risk audit models that genuinely need to run can be screened out, so the number of risk models used is reduced and the audit time shortened. In contrast, in existing picture auditing methods one picture must pass through every risk model in sequence, after which the results of all the risk models are aggregated to obtain the final risk identification result.
Because the technical means of auditing pictures with a hierarchically structured model is adopted, two technical problems are solved: first, that auditing efficiency and accuracy often cannot be obtained at the same time, which reduces auditing efficiency; and second, that insufficient model accuracy makes the confidence of an output problem label too low or too high, causing risky pictures to slip through or risk-free pictures to be wrongly rejected, which endangers the online service. Moreover, the technical scheme of the embodiment of the application is simple to implement, convenient to popularize, and widely applicable.
Example two
Fig. 2 is a second flow chart of the picture auditing method provided in the embodiment of the present application. Further optimization and expansion are performed based on the technical scheme, and the method can be combined with the various optional embodiments. As shown in fig. 2, the picture auditing method may include the following steps:
s201, inputting a picture to be audited into each general sub-model in the general audit model to obtain audit results output by each general sub-model; wherein the universal audit model comprises at least one universal submodel.
S202, determining at least one risk sub-model corresponding to the picture to be audited in the risk audit model according to the audit result output by each general sub-model; wherein the risk auditing model comprises at least one risk sub-model.
In this step, the electronic device may determine, according to the audit results output by the general sub-models, at least one risk sub-model corresponding to the picture to be audited in the risk audit model; the risk audit model comprises at least one risk sub-model. Specifically, if at least one of the general audit sub-models detects that person information exists in the picture to be audited, the electronic device may determine that the corresponding risk sub-models in the risk audit model are the models associated with person information; or, if at least one of the general audit sub-models detects that text information exists in the picture, the electronic device may determine that the corresponding risk sub-models are the models associated with text information; or, if at least one of the general audit sub-models detects that the picture is a cartoon picture or a real picture, the electronic device may determine that the corresponding risk sub-models are the models associated with cartoon pictures or with real pictures, respectively.
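The routing rule above can be written as a precondition table: each risk sub-model declares which bottom-layer labels it depends on, and is selected only when those labels were produced. A minimal sketch, with the label and model names invented for illustration:

```python
# Precondition table: a risk sub-model runs only if every bottom-layer
# label it depends on was produced by the general audit model.
RISK_MODEL_PRECONDITIONS = {
    "vulgar_content":    {"person"},   # person-related risk
    "sensitive_figure":  {"person"},
    "illegal_game_name": {"text"},     # text-related risk
    "cartoon_vulgar":    {"cartoon"},  # cartoon-only risk
}

def select_risk_models(bottom_labels: set) -> list:
    """Return the risk sub-models whose precondition labels are all present."""
    return [name for name, needed in RISK_MODEL_PRECONDITIONS.items()
            if needed <= bottom_labels]
```

The subset test `needed <= bottom_labels` keeps the routing declarative: adding a new risk sub-model means adding one table entry, not editing control flow.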
When risk pictures are audited, most risks have preconditions in their content composition. For example, picture low-vulgarity risks and sensitive-figure risks are person-related: a person must exist in the picture. Illegal game-name recognition and infringing card-name recognition are text-related: text must exist in the picture. Such preconditions can be summarized from daily experience and from analysis of laws and regulations, and then abstracted into model solutions at the technical level to serve as bottom-layer models in the overall auditing process. Specifically, the electronic device may perform the following audits on the picture to be audited through the general audit model: 1) Checking whether a person exists in the picture as a bottom-layer label: for risks such as low-vulgarity and infringement, if no person exists in the picture these risks cannot exist, so a human-body detection model can serve as a bottom-layer model. Classical human-body detection algorithms such as HOG + Adaboost, RCNN, or YOLO can detect human bodies; face detection algorithms such as MTCNN or S3FD may also be used. 2) Checking whether text exists in the picture as a bottom-layer label: for risks such as false or exaggerated claims and illegal games, if the picture contains no text these risks cannot exist, so a text detection model can serve as a bottom-layer model. Classical OCR algorithms such as EAST or TextBoxes++ can recognize the presence and content of text in an image. 3) Checking whether the picture is a cartoon picture or a real picture as a bottom-layer label: in practice, cartoon-related risk models are often built separately, for example for cartoon low-vulgarity or sensitive cartoon figures; if the picture is a real picture these risks cannot arise, so a classification model distinguishing cartoon pictures from real pictures can serve as a bottom-layer model. Classical classification algorithms such as ResNet or VGG can classify an image as a cartoon picture or a real picture.
Extracting these bottom-layer labels and abstracting them into technical solutions yields the bottom-layer model, namely the general audit model. For a particular risk type, its characteristics may not be covered by any general audit label at all; conversely, if the general audit result contains a general audit label whose characteristics the risk type of some risk audit model cannot possess, it can be directly judged that the picture to be audited carries no such risk, without that risk audit model being run, which reduces the overall picture audit time.
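The early-exit judgment described above can be sketched as a single check. This is an assumption-laden illustration: the `"no_risk"` verdict string and the label sets are invented names, not the patent's actual interfaces.

```python
from typing import Optional

def early_verdict(bottom_labels: set, required_labels: set) -> Optional[str]:
    """Return "no_risk" immediately when none of the labels a risk type
    depends on is present, skipping the risk sub-model entirely; return
    None when the risk sub-model still has to run."""
    if not (required_labels & bottom_labels):
        return "no_risk"
    return None
```

The saving comes from the `None` branch being the only one that pays for a model inference; every `"no_risk"` shortcut is essentially free.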
S203, according to the auditing result output by each general sub-model, finely adjusting at least one risk sub-model corresponding to the picture to be audited in the risk auditing model to obtain at least one finely adjusted risk sub-model corresponding to the picture to be audited in the risk auditing model.
In this step, the electronic device may, according to the audit results output by the general sub-models, fine-tune the at least one risk sub-model corresponding to the picture to be audited in the risk audit model, obtaining at least one fine-tuned risk sub-model. Specifically, particular labels output by the general audit model can be used to fine-tune the output of the risk audit model. When obtaining a final risk identification result, the current mainstream practice is to compare the confidence of a particular label in the audit model's result with a predetermined threshold to decide whether a risk exists. However, due to limited model accuracy, the confidence sometimes falls short of or exceeds the threshold incorrectly, so the judgment is wrong and the service side ends up with missed risks or wrongly rejected pictures. Labels of the general audit model often have a strong correlation with labels of certain risk audit models; based on this correlation, the embodiment of the application fine-tunes the risk audit model's result according to the output of the general audit model, and the two jointly yield the final risk identification result, improving the accuracy of the audit system. Besides strong positive correlation, some label combinations may also be mutually exclusive, or special situations may arise in risk calculation; when such a label combination occurs, post-processing such as confidence comparison can be skipped and the final risk identification result given directly.
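One simple way to realize the correlation-based adjustment above is an additive nudge to the risk label's confidence before thresholding. The correlation weights and label names below are made up for the sketch; a production system would calibrate them from data.

```python
# (general label, risk label) -> additive confidence adjustment.
# The weights here are invented illustration values.
CORRELATION_ADJUST = {
    ("person", "vulgar_content"):  +0.10,  # person present supports this risk
    ("cartoon", "vulgar_content"): -0.05,  # cartoons handled by another model
}

def adjust_confidence(risk_label: str, confidence: float,
                      bottom_labels: set) -> float:
    """Shift a risk label's confidence by each matching correlation,
    clamping the result to [0, 1] before the threshold comparison."""
    for (g_label, r_label), delta in CORRELATION_ADJUST.items():
        if r_label == risk_label and g_label in bottom_labels:
            confidence += delta
    return max(0.0, min(1.0, confidence))
```

A borderline score just under the threshold can thus be pushed over it when a strongly correlated general label is present, which is the "joint judgment" behavior the step describes.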
S204, inputting the picture to be audited into each risk sub-model in the corresponding at least one risk sub-model, and obtaining the auditing result output by each risk sub-model.
And S205, determining a final auditing result of the picture to be audited based on the auditing results output by the risk sub-models.
According to the picture auditing method provided by the embodiment of the application, a picture to be audited is first input into each general sub-model in a general audit model to obtain the audit result output by each general sub-model; then at least one risk sub-model corresponding to the picture in the risk audit model is determined according to those audit results; the picture is input into each of the at least one risk sub-model to obtain the audit result output by each risk sub-model; and finally the final audit result of the picture is determined based on the audit results output by the risk sub-models. That is to say, the application audits pictures with a hierarchically structured model, taking the general audit model as the bottom-layer model and the risk audit model as the high-layer model. Through this hierarchical design, the risk audit models that genuinely need to run can be screened out, so the number of risk models used is reduced and the audit time shortened. In contrast, in existing picture auditing methods one picture must pass through every risk model in sequence, after which the results of all the risk models are aggregated to obtain the final risk identification result.
Because the technical means of auditing pictures with a hierarchically structured model is adopted, two technical problems are solved: first, that auditing efficiency and accuracy often cannot be obtained at the same time, which reduces auditing efficiency; and second, that insufficient model accuracy makes the confidence of an output problem label too low or too high, causing risky pictures to slip through or risk-free pictures to be wrongly rejected, which endangers the online service. Moreover, the technical scheme of the embodiment of the application is simple to implement, convenient to popularize, and widely applicable.
EXAMPLE III
Fig. 3 is a third flow diagram of a picture auditing method provided in the embodiment of the present application. Further optimization and expansion are performed based on the technical scheme, and the method can be combined with the various optional embodiments. As shown in fig. 3, the picture auditing method may include the following steps:
s301, inputting the picture to be audited into each general submodel in the general audit model to obtain the audit result output by each general submodel; wherein the universal audit model comprises at least one universal submodel.
S302, determining at least one risk sub-model corresponding to the picture to be audited in the risk audit model according to the audit result output by each general sub-model; wherein the risk auditing model comprises at least one risk sub-model.
S303, inputting the picture to be audited into each risk submodel of the at least one risk submodel to obtain an audit result output by each risk submodel.
S304, if at least two audit results in the audit results output by each risk sub-model meet the preset combination condition, combining the at least two audit results in the audit results output by each risk sub-model to obtain the combined audit result.
In this step, if at least two of the audit results output by the risk sub-models satisfy the preset merging condition, the electronic device may merge them to obtain a merged audit result. For example, the combination of "there is a low-vulgarity risk", "there is no sensitive-content risk", and "there is no text risk" is equivalent to "there is a low-vulgarity risk", so the application may merge these three results into "there is a low-vulgarity risk" and use it as the final audit result of the picture to be audited.
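The merge rule in the example can be sketched as follows, under the assumption that the merging condition is "risk-free results are absorbed by any positive risk result". The result strings are placeholders, not the patent's actual vocabulary.

```python
def merge_results(results: list) -> list:
    """Merge per-risk-model results: "no_risk" entries are absorbed by any
    positive risk result; with no positive result, the merged verdict is
    a single "no_risk"."""
    risks = sorted({r for r in results if r != "no_risk"})
    return risks if risks else ["no_risk"]
```

Deduplicating through a set also collapses the case where two risk sub-models report the same positive risk, so the final verdict lists each distinct risk once.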
S305, determining a final auditing result of the picture to be audited based on the merged auditing result.
Fig. 4 is a schematic structural diagram of a picture auditing framework provided in an embodiment of the present application. As shown in FIG. 4, the audit framework may include a picture input unit, a general model unit, a risk auditing unit, and a risk judgment unit. 1) Picture input unit: obtains pictures from a local picture library or from the network environment according to the needs of the auditing system. 2) General model unit: summarizes the characteristics of all risk types to obtain bottom-layer labels helpful for auditing, and performs their identification. 3) Risk auditing unit: according to the results of the general model unit, filters out the risk auditing models that do not need to be used, so that subsequent pictures do not pass through those models; the remaining risk auditing models audit the picture in this unit. 4) Risk judgment unit: performs risk identification according to the results of each risk auditing model in the risk auditing unit.
The auditing process provided by the application uses a hierarchical structure: the bottom layer is the general audit model and the high layer is the risk audit model. Through the pre-judgment of the general audit model, the number of risk audit models used subsequently is reduced, so audit time is cut while accuracy is guaranteed. In addition, the results of the general audit model also act on the risk judgment unit: since the labels and confidences output by the risk audit model can be unreliable, the general audit model's result is used in the risk judgment process to assist the judgment of the audit result, improving audit accuracy.
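The four units above can be composed into one end-to-end sketch. Everything here is stubbed and the names are assumptions: the picture input unit is modeled as the function argument, and the model dictionaries stand in for trained networks.

```python
def audit_picture(picture, general_models, risk_models, preconditions):
    """End-to-end audit: general model unit -> risk auditing unit -> risk
    judgment unit, with the picture input unit modeled as the argument.
    general_models: {name: fn(pic) -> set of bottom-layer labels}
    risk_models:    {name: fn(pic) -> result string}
    preconditions:  {risk model name: set of required bottom-layer labels}"""
    # 1) General model unit: collect all bottom-layer labels.
    labels = set()
    for model in general_models.values():
        labels |= model(picture)
    # 2) Risk auditing unit: run only the models whose preconditions hold.
    results = [model(picture) for name, model in risk_models.items()
               if preconditions.get(name, set()) <= labels]
    # 3) Risk judgment unit: any positive risk decides the verdict.
    risks = sorted({r for r in results if r != "no_risk"})
    return risks if risks else ["no_risk"]
```

For example, with a stub person detector and one person-gated risk model, a picture with no person never touches the risk model and is judged risk-free directly.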
According to the technical scheme of the present application, the risk auditing process is organized as a hierarchical structure; the hierarchy reduces the number of risk auditing models used during auditing and thereby improves auditing efficiency. Because the output results of the models at every level are used together in the risk judgment stage, auditing accuracy is also indirectly improved.
According to the picture auditing method provided by the embodiments of the present application, a picture to be audited is input into each general sub-model in a general auditing model to obtain the auditing result output by each general sub-model; at least one risk sub-model corresponding to the picture to be audited is then determined in the risk auditing model according to those auditing results; the picture to be audited is input into each of the at least one risk sub-models to obtain the auditing result output by each risk sub-model; and finally the final auditing result of the picture to be audited is determined based on the auditing results output by the risk sub-models. That is to say, the present application can audit a picture with a hierarchically structured model, taking the general auditing model as the bottom layer and the risk auditing models as the upper layer; through this hierarchical design, the risk auditing models that actually need to be used can be screened out, so the number of risk auditing models is reduced and auditing time is shortened. By contrast, in existing picture auditing methods, a picture must pass through every risk model in sequence, after which the results of all risk models are aggregated to obtain the final risk identification result.
Because the technical means of auditing pictures with a hierarchically structured model is adopted, the present application overcomes the technical problem that auditing efficiency and accuracy often cannot be achieved at the same time, as well as the technical problem that insufficient model accuracy makes the confidence of an output problem label too low or too high, causing risky pictures to slip through or risk-free pictures to be wrongly rejected and thereby endangering the online service. Moreover, the technical scheme of the embodiments of the present application is simple and convenient to implement, easy to popularize, and widely applicable.
Example four
Fig. 5 is a schematic structural diagram of a picture auditing apparatus according to an embodiment of the present application. As shown in Fig. 5, the apparatus 500 includes: a general auditing module 501, a determining module 502, a risk auditing module 503 and a judging module 504; wherein:
the general auditing module 501 is configured to input a picture to be audited into each general sub-model in the general auditing model, and obtain an auditing result output by each general sub-model; wherein the universal audit model comprises at least one universal submodel;
the determining module 502 is configured to determine, according to an audit result output by each general sub-model, at least one risk sub-model corresponding to the picture to be audited in a risk audit model; wherein the risk auditing model comprises at least one risk sub-model;
the risk auditing module 503 is configured to input the picture to be audited into each risk sub-model of the at least one risk sub-model, so as to obtain an auditing result output by each risk sub-model;
the judging module 504 is configured to determine a final audit result of the picture to be audited based on the audit result output by each risk sub-model.
Further, the general audit model includes, but is not limited to, at least one of the following general submodels: cartoon/real submodel, person detection submodel, human body attribute submodel and character recognition submodel; the risk audit model includes, but is not limited to, at least one of the following risk sub-models: a cartoon vulgar identification submodel, a real people vulgar identification submodel, a picture sensitive marker submodel, a picture plain text submodel, a false exaggeration submodel, a sensitive character identification submodel, a character vulgar identification submodel.
Further, the determining module 502 is specifically configured to determine, if at least one of the general auditing sub-models detects that person information exists in the picture to be audited, that the risk sub-model corresponding to the picture to be audited in the risk auditing model is a model associated with the person information.
Further, the determining module 502 is specifically configured to determine, if at least one of the general auditing sub-models detects that text information exists in the picture to be audited, that the risk sub-model corresponding to the picture to be audited in the risk auditing model is a model associated with the text information.
Further, the determining module 502 is specifically configured to determine, if at least one of the general auditing sub-models detects that the picture to be audited is a cartoon picture or a real picture, that a risk sub-model corresponding to the picture to be audited in the risk auditing model is a model associated with the cartoon picture or a model associated with the real picture.
Further, the risk auditing module 503 is further configured to perform fine adjustment on at least one risk sub-model corresponding to the picture to be audited in the risk auditing model according to the auditing result output by each general sub-model, so as to obtain at least one fine-adjusted risk sub-model corresponding to the picture to be audited in the risk auditing model; and executing the operation of inputting the picture to be audited into each risk sub-model of the at least one risk sub-model.
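One plausible reading of this fine adjustment (the rule and the threshold offsets below are assumptions for illustration, not taken from the application) is to adapt a risk sub-model's decision threshold to the general sub-models' labels before the picture is audited:

```python
def fine_tune_threshold(base_threshold, general_labels):
    """Adjust a risk sub-model's confidence threshold using the general
    sub-models' labels (illustrative rule, not from the application)."""
    threshold = base_threshold
    if general_labels.get("is_cartoon"):
        # Assume cartoon pictures are harder to judge: demand more confidence.
        threshold += 0.05
    if general_labels.get("has_text"):
        # Assume text findings corroborate text-related risks: relax slightly.
        threshold -= 0.02
    # Clamp to a valid confidence range.
    return min(max(threshold, 0.0), 1.0)

t = fine_tune_threshold(0.80, {"is_cartoon": True, "has_text": True})
print(round(t, 2))  # 0.83
```

The risk sub-model would then compare its output confidence against the adjusted threshold, so the same model behaves more strictly or more leniently depending on what the general sub-models observed.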
Further, the judging module 504 is specifically configured to, if at least two of the audit results output by the risk sub-models satisfy a preset merging condition, merge those audit results to obtain a merged audit result, and to determine the final audit result of the picture to be audited based on the merged audit result.
The picture auditing apparatus can execute the method provided by any embodiment of the present application, and has the functional modules and beneficial effects corresponding to the executed method. For technical details not described in this embodiment, refer to the picture auditing method provided by any embodiment of the present application.
EXAMPLE five
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 6 illustrates a schematic block diagram of an example electronic device 600 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the device 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 602 or a computer program loaded from a storage unit 608 into a random access memory (RAM) 603. The RAM 603 can also store the various programs and data required for the operation of the device 600. The computing unit 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
A number of components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, or the like; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 601 may be any of various general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 601 performs the methods and processes described above, such as the picture auditing method. For example, in some embodiments, the picture auditing method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the picture auditing method described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the picture auditing method in any other suitable way (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host; it is a host product in a cloud computing service system and overcomes the defects of high management difficulty and weak service expansibility found in traditional physical hosts and VPS services.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (17)

1. A picture auditing method, the method comprising:
inputting the picture to be audited into each general submodel in the general audit model to obtain the audit result output by each general submodel; wherein the universal audit model comprises at least one universal submodel;
determining at least one risk sub-model corresponding to the picture to be audited in the risk audit model according to the audit result output by each general sub-model; wherein the risk auditing model comprises at least one risk sub-model;
inputting the picture to be audited into each risk submodel of the at least one risk submodel to obtain an audit result output by each risk submodel;
and determining the final audit result of the picture to be audited based on the audit result output by each risk sub-model.
2. The method of claim 1, the generic audit model including, but not limited to, at least one of the following generic submodels: cartoon/real submodel, person detection submodel, human body attribute submodel and character recognition submodel; the risk audit model includes, but is not limited to, at least one of the following risk sub-models: a cartoon vulgar identification submodel, a real people vulgar identification submodel, a picture sensitive marker submodel, a picture plain text submodel, a false exaggeration submodel, a sensitive character identification submodel, a character vulgar identification submodel.
3. The method according to claim 1, wherein the determining, according to the review result output by each general sub-model, at least one risk sub-model corresponding to the picture to be reviewed in a risk review model includes:
and if at least one general sub-model in the general auditing sub-models detects that person information exists in the picture to be audited, determining that the risk sub-model corresponding to the picture to be audited in the risk auditing model is a model associated with the person information.
4. The method according to claim 1, wherein the determining, according to the review result output by each general sub-model, at least one risk sub-model corresponding to the picture to be reviewed in a risk review model includes:
and if at least one general sub-model in the general auditing sub-models detects that text information exists in the picture to be audited, determining that the risk sub-model corresponding to the picture to be audited in the risk auditing model is a model associated with the text information.
5. The method according to claim 1, wherein the determining, according to the review result output by each general sub-model, at least one risk sub-model corresponding to the picture to be reviewed in a risk review model includes:
and if at least one general sub-model in the general auditing sub-models detects that the picture to be audited is a cartoon picture or a real picture, determining that the risk sub-model corresponding to the picture to be audited in the risk auditing model is a model associated with the cartoon picture or a model associated with the real picture.
6. The method of claim 1, prior to said inputting the picture to be reviewed into each of the at least one risk sub-model, the method further comprising:
according to the auditing result output by each general sub-model, finely adjusting at least one risk sub-model corresponding to the picture to be audited in the risk auditing model to obtain at least one finely adjusted risk sub-model corresponding to the picture to be audited in the risk auditing model; and executing the operation of inputting the picture to be audited into each risk sub-model of the at least one risk sub-model.
7. The method according to claim 1, wherein the determining of the final review result of the picture to be reviewed based on the review result output by each risk sub-model comprises:
if at least two audit results in the audit results output by each risk sub-model meet the preset combination condition, combining the at least two audit results in the audit results output by each risk sub-model to obtain the combined audit result;
and determining the final audit result of the picture to be audited based on the merged audit result.
8. A picture auditing apparatus, the apparatus comprising: a general auditing module, a determining module, a risk auditing module and a judging module; wherein:
the universal auditing module is used for inputting the picture to be audited into each universal submodel in the universal auditing model to obtain the auditing result output by each universal submodel; wherein the universal audit model comprises at least one universal submodel;
the determining module is used for determining at least one risk sub-model corresponding to the picture to be audited in the risk audit model according to the audit result output by each general sub-model; wherein the risk auditing model comprises at least one risk sub-model;
the risk auditing module is used for inputting the picture to be audited into each risk submodel of the at least one risk submodel to obtain auditing results output by each risk submodel;
and the judging module is used for determining the final auditing result of the picture to be audited based on the auditing result output by each risk sub-model.
9. The apparatus of claim 8, the generic audit model including but not limited to at least one of the following generic submodels: cartoon/real submodel, person detection submodel, human body attribute submodel and character recognition submodel; the risk audit model includes, but is not limited to, at least one of the following risk sub-models: a cartoon vulgar identification submodel, a real people vulgar identification submodel, a picture sensitive marker submodel, a picture plain text submodel, a false exaggeration submodel, a sensitive character identification submodel, a character vulgar identification submodel.
10. The apparatus according to claim 8, wherein the determining module is specifically configured to determine, if at least one of the general auditing sub-models detects that person information exists in the picture to be audited, that the risk sub-model corresponding to the picture to be audited in the risk auditing model is a model associated with the person information.
11. The apparatus according to claim 8, wherein the determining module is specifically configured to determine, if at least one of the general auditing sub-models detects that text information exists in the picture to be audited, that the risk sub-model corresponding to the picture to be audited in the risk auditing model is a model associated with the text information.
12. The apparatus according to claim 8, wherein the determining module is specifically configured to determine, if at least one of the general auditing sub-models detects that the picture to be audited is a cartoon picture or a real picture, that a risk sub-model corresponding to the picture to be audited in the risk auditing model is a model associated with the cartoon picture or a model associated with the real picture.
13. The device of claim 8, wherein the risk auditing module is further configured to perform fine adjustment on at least one risk sub-model corresponding to the picture to be audited in the risk auditing model according to the auditing result output by each general sub-model, so as to obtain at least one fine-adjusted risk sub-model corresponding to the picture to be audited in the risk auditing model; and executing the operation of inputting the picture to be audited into each risk sub-model of the at least one risk sub-model.
14. The apparatus according to claim 8, wherein the judging module is specifically configured to, if at least two audit results in the audit results output by each risk sub-model satisfy a preset merging condition, merge the at least two audit results in the audit results output by each risk sub-model to obtain a merged audit result; and determine the final audit result of the picture to be audited based on the merged audit result.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein:
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
16. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-7.
17. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-7.
CN202110287468.3A 2021-03-17 2021-03-17 Picture auditing method and device, electronic equipment and storage medium Active CN113010721B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110287468.3A CN113010721B (en) 2021-03-17 2021-03-17 Picture auditing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113010721A true CN113010721A (en) 2021-06-22
CN113010721B CN113010721B (en) 2023-10-10

Family

ID=76409383

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110287468.3A Active CN113010721B (en) 2021-03-17 2021-03-17 Picture auditing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113010721B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114881615A (en) * 2022-06-06 2022-08-09 山东浪潮爱购云链信息科技有限公司 Forum data interaction method and device in purchasing platform

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170168676A1 (en) * 2015-12-10 2017-06-15 International Business Machines Corporation Auditing icons via image recognition to provide individualized assets to software project teams
US20180365687A1 (en) * 2013-06-30 2018-12-20 EMC IP Holding Company LLC Fraud detection
CN109408669A (en) * 2018-09-29 2019-03-01 北京数美时代科技有限公司 A kind of content auditing method and device for different application scene
CN110766033A (en) * 2019-05-21 2020-02-07 北京嘀嘀无限科技发展有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN111090776A (en) * 2019-12-20 2020-05-01 广州市百果园信息技术有限公司 Video auditing method, device, auditing server and storage medium
CN111382291A (en) * 2020-03-12 2020-07-07 北京金山云网络技术有限公司 Machine auditing method and device and machine auditing server
CN111385602A (en) * 2018-12-29 2020-07-07 广州市百果园信息技术有限公司 Video auditing method, medium and computer equipment based on multi-level and multi-model
CN111967493A (en) * 2020-06-29 2020-11-20 北京百度网讯科技有限公司 Image auditing method and device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhang Jianjun; Sun Tao; Meng Fang: "Realizing intelligent content auditing through artificial intelligence and its practical use at the World Cup", Modern Television Technology, no. 08, pages 45-47 *
Zeng Qiang: "Application of artificial intelligence in the review of traffic-violation evidence", Information Recording Materials, no. 04, pages 128-129 *

Also Published As

Publication number Publication date
CN113010721B (en) 2023-10-10

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant