CN116578763B - Multisource information exhibition system based on generated AI cognitive model - Google Patents

Multisource information exhibition system based on generated AI cognitive model Download PDF

Info

Publication number
CN116578763B
CN116578763B CN202310840885.5A CN202310840885A CN116578763B CN 116578763 B CN116578763 B CN 116578763B CN 202310840885 A CN202310840885 A CN 202310840885A CN 116578763 B CN116578763 B CN 116578763B
Authority
CN
China
Prior art keywords
value
module
data
text
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310840885.5A
Other languages
Chinese (zh)
Other versions
CN116578763A (en
Inventor
宋小波
张军
梁萌萌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuojin Information Technology Changzhou Co ltd
Original Assignee
Zhuojin Information Technology Changzhou Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuojin Information Technology Changzhou Co ltd filed Critical Zhuojin Information Technology Changzhou Co ltd
Priority to CN202310840885.5A priority Critical patent/CN116578763B/en
Publication of CN116578763A publication Critical patent/CN116578763A/en
Application granted granted Critical
Publication of CN116578763B publication Critical patent/CN116578763B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/951Indexing; Web crawling techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9538Presentation of query results
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a multisource information exhibition system based on a generated AI cognitive model, which relates to the technical field of exhibition systems, wherein the exhibition system can be applied to public places such as museums, exhibition halls, scenic spots and the like and comprises a data grabbing module, an intelligent recommending module, a screening module, a judging module, a natural language analyzing module, an image analyzing module, a storage module and an exhibition module: based on multisource data information queried by a user, the intelligent recommendation module performs intelligent search and recommendation by using a generated AI model, generates related information or recommended content according to interests and contexts of the user, helps the user to quickly find the interested information, screens out unqualified text data and image data by a screening module, and then establishes an evaluation coefficient based on the data screening information. According to the invention, whether the data need to be displayed by the display module is judged according to the comparison result of the evaluation coefficient and the evaluation threshold value, so that the waste of display resources is avoided, and the display cost of the data is reduced.

Description

Multisource information exhibition system based on generated AI cognitive model
Technical Field
The invention relates to the technical field of exhibition systems, in particular to a multisource information exhibition system based on a generated AI cognitive model.
Background
The multisource information exhibition system is a system for integrating and exhibiting multisource information from different sources and different fields by utilizing advanced information technology and Internet technology, along with the rapid development of the information technology and popularization of the Internet, people can easily acquire various information from different channels, including contents in various forms such as characters, pictures, audio and the like, however, the explosive growth of the information also brings about the problem of information overload, and people become more and more difficult to find needed information in mass information;
the multi-source information exhibition system aims to help a user to obtain required information more efficiently by integrating and exhibiting multi-source information and provides a convenient information browsing and querying tool.
The prior art has the following defects:
the existing exhibition system can not generate related information or recommended content according to the interests and the context of the user, so that the interested information can not be effectively pushed to the user, and the intelligent degree is low;
the exhibition system usually performs visualization processing on all acquired data, however, in practical application, there are some unimportant data which do not need to be exhibited, and the exhibition of such data not only occupies exhibition resources, but also increases the cost of data processing.
Disclosure of Invention
The invention aims to provide a multisource information exhibition system based on a generated AI cognitive model so as to solve the defects in the background technology.
In order to achieve the above object, the present invention provides the following technical solutions: the exhibition system can be applied to entertainment equipment, museums and the like in scenic spots and comprises a data grabbing module, an intelligent recommending module, a screening module, a judging module, a natural language analyzing module, an image analyzing module, a storage module and an exhibition module:
and the data grabbing module is used for: acquiring data from a plurality of information sources;
and the intelligent recommendation module: based on multisource data information queried by a user, intelligent searching and recommending are performed by using a generated AI model;
and (5) screening out a module: after text data and image data are acquired, the text data and the image data which do not reach standards are screened out;
and a judging module: establishing an evaluation coefficient based on the data screening information, and judging whether the data need to be displayed by the display module or not according to the comparison result of the evaluation coefficient and the evaluation threshold;
and a natural language analysis module: analyzing text data in the multi-source data;
and an image analysis module: analyzing image data in the multi-source data;
And a storage module: uploading the parsed text data and the parsed image data to a cloud platform for storage;
and an exhibition module: and displaying the screened residual text data and the screened image data to a user.
In a preferred embodiment, the screening module obtains a grammar error rate, a word vector similarity, a structural integrity, and a text repetition of the text data, and obtains normalized values of the grammar error rate, the word vector similarity, the structural integrity, and the text repetition.
In a preferred embodiment, the normalized value of syntax error rate acquisition logic is: when the grammar error rate is larger than the error threshold, the normalized value of the grammar error rate is 1, and when the grammar error rate is smaller than or equal to the error threshold, the normalized value of the grammar error rate is 0;
the normalized value acquisition logic of the word vector similarity is as follows: when the word vector similarity is greater than the similarity threshold, the normalized value of the word vector similarity is 1, and when the word vector similarity is less than or equal to the similarity threshold, the normalized value of the word vector similarity is 0;
the normalized value acquisition logic for the structural integrity is: when the structural integrity is greater than or equal to the integrity threshold, the normalized value of the structural integrity is 1, and when the structural integrity is less than the integrity threshold, the normalized value of the structural integrity is 0;
The normalized value acquisition logic of the text repetition degree is as follows: when the text repetition degree is larger than the repetition threshold, the normalized value of the text repetition degree is 1, and when the text repetition degree is smaller than or equal to the repetition threshold, the normalized value of the text repetition degree is 0.
In a preferred embodiment, the grammar error rate = grammar error number/total word number, the error number is the number of grammar errors detected in the text, and the total word number is the total word number in the text;
the word vector similarity calculation expression is as follows:the method comprises the steps of carrying out a first treatment on the surface of the In (1) the->For word vector similarity, ++>Represents the dot product of vector A and vector B, < >>And->The euclidean distance for vector a and vector B, respectively, euclidean distance = sqrt ((A1-B1) 2 +(A2-B2) 2 +...+(An-Bn) 2 ) Wherein A1, A2,) An and B1, B2,) Bn represent the values of the corresponding dimensions in vector a and vector B, respectively;
the structural integrity = (number of texts containing event information/total number of texts) (number of texts containing time information/total number of texts) (number of texts containing location information/total number of texts), the number of texts containing event information representing the number of texts containing event information, and the total number of texts representing the total number of texts;
the text repetition = number of repeated words or phrases/total number of words, wherein the number of repeated words or phrases is the number of repeated words or phrases detected in the text and the total number of words is the total number of words in the text.
In a preferred embodiment, the screening module calculates the grammar error rate normalized value, the word vector similarity normalized value, the structural integrity normalized value, and the text repetition normalized value to obtain a text screening value, where the calculation expression is:
in the method, in the process of the invention,for text screen value +_>Normalized values for structural integrity, < >>Normalize the values for syntax error rate,/">Normalizing the value for word vector similarity, +.>Normalizing the numerical value for the text repetition degree;
when text screening value of text dataWhen the text data is not less than 1, the screening module does not screen the text data, and when the text screening value of the text data is +.>When < 1, the screening module does not screen the text data.
In a preferred embodiment, the screening module obtains a sharpness index, contrast, noise level, and exposure in the image data, and obtains normalized values of the sharpness index, contrast, noise level, and exposure.
In a preferred embodiment, the normalized value acquisition logic of the sharpness index is: when the definition index is larger than or equal to the definition threshold, the normalized value of the definition index is 1, and when the definition index is smaller than the definition threshold, the normalized value of the definition index is 0;
The normalized value acquisition logic of the contrast ratio is as follows: when the contrast is not in the contrast threshold range, the normalized value of the contrast is 1, and when the contrast is in the contrast threshold range, the normalized value of the contrast is 0;
the normalized value acquisition logic of the noise level is: when the noise level is greater than the noise threshold, the normalized value of the noise level is 1, and when the noise level is less than or equal to the noise threshold, the normalized value of the noise level is 0;
the normalized value acquisition logic of the exposure is: the normalized value of the exposure is 1 when the exposure is not in the exposure threshold range, and 0 when the exposure is in the exposure threshold range.
In a preferred embodiment, the sharpness index is calculated by the following expression: ci=10×log10 ((max_gradient) 2 ) MSE), where CI is the sharpness index, max_gradient represents the maximum gradient value in the image, MSE represents the mean square error, MSE= (1/N) = (I1 (I, j) -I2 (I, j) 2 ]Wherein I1 (I, j) represents the pixel value of the original image, I2 (I, j) represents the pixel value of the processed image, (I, j) represents the coordinates of the pixels, N represents the total number of pixels, Σ represents the summation of all pixels;
The calculation expression of the contrast is as follows: contrast= (Max-Min)/(max+min), wherein Contrast is Contrast, max represents the maximum pixel value of the image, min represents the minimum pixel value of the image;
the noise level is calculated as: noise=sqrt (mean ((XS-mean (XS)) 2 ) Where Noise is the Noise level, XS represents the pixel value of the image, mean represents the average of the pixel values;
the calculated expression of the exposure is: exposure= (1/M) ΣZI, where Exposure represents Exposure, M represents the total number of pixels in the image, and ΣZI represents the sum of the luminance values of all pixels in the image.
In a preferred embodiment, the screening module calculates the sharpness index normalization value, the contrast normalization value, the noise level normalization value, and the exposure level normalization value to obtain an image screening value, where the calculation expression is:
in the method, in the process of the invention,value for image screening ∈ ->Normalized value for contrast +.>The values are normalized for the noise level and,normalizing the value for exposure, +.>Normalizing the value for the sharpness index;
when the image data is the image screening valueThe screening module screens out the image data when > 1, and the image screening value of the image data +. >And when the image data is less than or equal to 1, the screening module does not screen the image data.
In a preferred embodiment, the judging module performs comprehensive calculation on the text data and the image data to establish an evaluation coefficientThe expression is calculated:
in the method, in the process of the invention,for text screen value +_>Value for image screening ∈ ->、/>The scaling coefficients of the text and image screening values, respectively, and +.>The values are normalized for the structural integrity,normalize the values for syntax error rate,/">Normalizing the value for word vector similarity, +.>Normalize values for text repetition, +.>Normalized value for contrast +.>Normalize the values for noise level, +.>The values are normalized for the exposure level,normalizing the value for the sharpness index;
after the evaluation coefficient is obtained, comparing the evaluation coefficient with an evaluation threshold, and if the evaluation coefficient is greater than or equal to the evaluation threshold, judging that the multisource data of the same batch need to be displayed by the display module by the judging module; if the evaluation coefficient is smaller than the evaluation threshold, the judging module judges that the multisource data of the same batch does not need to be displayed through the display module.
In the technical scheme, the invention has the technical effects and advantages that:
1. according to the invention, based on multisource data information queried by a user, the intelligent recommendation module is used for carrying out intelligent search and recommendation, related information or recommended content is generated according to interests and contexts of the user, the user is helped to quickly find out the interested information, text data and image data which do not reach standards are screened out by the screening module, then an evaluation coefficient is established based on the data screening information, and whether the data need to be displayed by the display module is judged according to a comparison result of the evaluation coefficient and an evaluation threshold value, so that the waste of display resources is avoided, and the display cost of the data is reduced;
2. According to the application, the text data and the image data are respectively and independently judged through the screening module, so that the text data or the image data which does not reach the standard are screened out in advance, the processing amount of the data is reduced, and the processing efficiency of the exhibition system on the data is effectively improved;
3. according to the application, the text data and the image data are comprehensively calculated through the judging module, the evaluation coefficient is established, so that the multi-source data are comprehensively analyzed, the evaluation coefficient is compared with the evaluation threshold after being obtained, and if the evaluation coefficient is greater than or equal to the evaluation threshold, the judging module judges that the multi-source data in the same batch need to be displayed through the display module; if the evaluation coefficient is smaller than the evaluation threshold, the judging module judges that the multisource data of the same batch does not need to be displayed through the display module, so that waste of display resources is avoided.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required for the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments described in the present application, and other drawings may be obtained according to these drawings for a person having ordinary skill in the art.
FIG. 1 is a block diagram of a system according to the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1: referring to fig. 1, the multisource information exhibition system based on a generated AI cognitive model according to the present embodiment includes a data capturing module, an intelligent recommending module, a screening module, a judging module, a natural language analyzing module, an image analyzing module, a storage module and an exhibition module:
the data capture module is responsible for acquiring data from a plurality of information sources, the data capture can be realized through a web crawler technology, and the acquired multi-source data are respectively sent to the screening module and the intelligent recommendation module;
the data grabbing module acquires multi-source data through a web crawler technology, and the method comprises the following steps of:
1) Determining a target source: determining a plurality of data sources to be grabbed, wherein the data sources can be websites, API interfaces, databases and the like;
2) Analysis target source: analyzing each target source to know key information such as webpage structures, data formats, request modes and the like;
3) Determining a grabbing strategy: according to the characteristics and the requirements of a target source, determining a grabbing strategy comprising grabbing frequency, grabbing depth, grabbing range and the like;
4) Writing a crawler program: based on the selected crawling strategy, a crawler program is written, and a programming language (such as Python) and a related crawler framework (such as Scrayy) are used for realizing automatic crawling of data;
5) Initiating a request: initiating a request by using HTTP or other protocols according to a request mode of a target source to acquire a target page or data;
6) Resolving a page: the acquired page is parsed, required data is extracted, and an HTML parser (such as BeautiflulSoup) or regular expression and other technologies can be used for extracting target data.
The intelligent recommendation module performs intelligent search and recommendation by utilizing a generated AI model based on multi-source data information queried by a user, and the system can generate related information or recommended content according to the interests and the contexts of the user through the generation capacity of the model so as to help the user to quickly find the interesting information;
The intelligent recommendation module performs intelligent searching and recommendation by using the generated AI model based on multi-source data information queried by a user, and comprises the following steps:
1) Model training: training the preprocessed data using a generated AI model (e.g., recurrent neural network, transducer, etc.); the model can use a method of supervised learning or unsupervised learning to carry out architecture design and parameter adjustment of the model according to the requirements;
2) Context understanding: according to the context information (such as search keywords, browsing history, personal preferences, etc.) of the user, the context information is transmitted as input to the generated AI model; the model can understand the intention and the demand of the user by utilizing the context information, and provides basis for the subsequent recommendation generation;
3) Information generation and recommendation: based on the interests and the context of the user, the generation type AI model can use the generation capacity to generate related information or recommended content; this may include generating related text descriptions, recommending related articles, products or services, etc.;
4) Result filtering and ordering: filtering and sequencing the generated information or recommended content, and screening out the content which is most relevant and most in line with the interests of the user according to a certain rule and algorithm; optimization and ranking of results using correlation scoring, ranking algorithms (e.g., TF-IDF, pageRank, etc.), etc. may be considered;
5) Results show that: displaying the filtered and ordered recommended content to a user, wherein the recommended content can be presented in the forms of a list, a card, a recommendation column and the like; meanwhile, user interaction modes such as clicking, collecting, feeding back and the like can be provided, so that recommendation results are further optimized;
6) Continuous learning and optimization: according to feedback and behavior data of a user, the generation type AI model is continuously updated and optimized, the recommendation accuracy and individuation degree are improved, and technologies such as incremental training, transfer learning and the like can be adopted to adapt to the change of user interests and new data modes.
After the screening module acquires text data and image data, the text data and the image data which do not reach the standard are screened, the rest text data and the image data are sent to a natural language analysis module and an image analysis module, and data screening information is sent to a judging module;
the judging module establishes an evaluation coefficient based on the data screening information, judges whether the data need to be displayed by the display module according to the comparison result of the evaluation coefficient and the evaluation threshold value, and sends a display instruction to the display module if the data need to be displayed by the display module;
the natural language analysis module analyzes text data in the multi-source data through a natural language processing technology, and comprises semantic analysis, named entity recognition, keyword extraction, text classification and other tasks, so that text content can be understood, important information can be extracted from the text content, subsequent searching, sorting, recommending and other operations can be facilitated, and the analyzed text data is sent to the exhibition module and the storage module;
The natural language analyzing module analyzes text data in the multi-source data through a natural language processing technology and comprises the following steps of;
1) Text preprocessing: preprocessing texts in the multi-source data, including removing special characters, punctuation marks, stop words and the like, and performing word segmentation processing to segment the texts into words or clauses;
2) Semantic analysis: semantic analysis is carried out on the text by utilizing a natural language processing technology, and the meaning and the context of the text are understood; this may include tasks such as word sense disambiguation, syntactic analysis, semantic role labeling, etc., to obtain a more accurate semantic representation;
3) Named entity identification: specific named entities in the text, such as a person name, a place name, an organization and the like, are identified through a named entity identification technology; this can help identify important information and key entities in the text, providing a basis for subsequent analysis and application;
4) Keyword extraction: extracting keywords or key phrases in the text, and reflecting the core content and the theme of the text; this can be done by word frequency statistics, TF-IDF, etc., identifying important words in the text;
5) Text classification: classifying the text according to predefined categories or topics; this may be done by machine learning algorithms, deep learning models, etc., dividing the text into different categories or labels to facilitate subsequent information organization and retrieval;
6) Emotion analysis: carrying out emotion analysis on the text, and judging emotion tendencies of the text, such as positive, negative, neutral and the like; this can be done through machine learning models, emotion dictionaries, etc., helping to understand emotion colors and emotion expressions of text;
7) Extracting entity relation: extracting the relation between entities in the text by natural language processing technology; the method can help understand the connection and effect between the entities in the text, and has important significance for information extraction and knowledge graph construction;
8) And (3) outputting results: and outputting the analyzed result, and presenting the result in a structural form, such as JSON, XML and the like, so as to facilitate subsequent data analysis and application.
The image analysis module analyzes image data in the multi-source data through an image processing technology, comprises tasks such as image recognition, object detection, scene understanding and the like, extracts useful features and information from the multimedia data, and sends the analyzed image data to the exhibition module and the storage module;
the image analysis module analyzes the image data in the multi-source data through an image processing technology and comprises the following steps:
1) Feature extraction: extracting features in the image data, such as texture features, color features, shape features, etc., which can be achieved by using image processing algorithms (such as edge detection, corner detection, etc.) or feature extraction algorithms (such as SIFT, SURF, etc.);
2) Image segmentation: dividing the image into different regions or objects for separate analysis and processing of each region or object, which may be accomplished using image segmentation algorithms (e.g., region-based segmentation, edge-based segmentation, etc.);
3) And (3) outputting results: and outputting the analysis and processing results, such as image annotation, object recognition results, images after image restoration and the like, so as to facilitate subsequent data visualization, report generation and application.
The storage module uploads the parsed text data and the parsed image data to the cloud platform for storage, and the prepared text data and the prepared image data are uploaded to the selected cloud platform, so that uploading operation can be performed by using an API or a tool provided by the cloud platform, and safe and stable data transmission to the cloud is ensured.
When the display module receives the display instruction, the display module displays the screened residual text data and the screened image data to the user in an intuitive mode by utilizing a visualization technology, such as a chart, a map, word clouds and the like, and information is presented, so that the user can more intuitively understand and explore the data;
the display module displays the screened residual text data and the screened image data to the user in an intuitive mode by utilizing a visualization technology, and comprises the following steps of:
1) And (3) data arrangement: the text data is arranged and formatted for display in the exhibition, which can include text typesetting, segmentation, title setting and the like;
2) Visual design: according to exhibition demands and user experience, proper visual layout and interaction modes are designed, and various visual elements such as charts, image display, text blocks, sliding display and the like can be considered to be used;
3) And (3) image display: displaying the screened residual image data to a user in an intuitive mode, and displaying images in a mode of image slides, gallery forms, thumbnail lists and the like to display the content and characteristics of the images;
4) Text display: displaying the screened residual text data to a user in an intuitive mode, and displaying text contents in modes of text blocks, scroll display, page navigation and the like to provide good reading experience;
5) And (3) data association: the text data and the image data are displayed in an associated mode, and the text and the corresponding image can be displayed through simultaneous display of the related text and the image on a display page or through clicking or hovering operation and the like;
6) Interaction function: the method has the advantages that the method provides the functions of interaction between the user and the exhibition content, such as image enlargement, image reduction, page turning browsing text and the like, so that the user can freely browse the exhibition content according to the interests and the demands of the user;
7) User navigation: the navigation function is provided, so that a user can conveniently browse different exhibition contents, such as page number navigation, directory navigation, search function and the like;
8) And (3) response type design: considering the display effect of the display module on different devices, the display content is ensured to adapt to different screen sizes and resolutions, and good user experience is provided.
According to the application, based on multi-source data information queried by a user, the intelligent recommendation module is used for carrying out intelligent search and recommendation by utilizing the generated AI model, related information or recommended content is generated according to interests and contexts of the user, the user is helped to quickly find out the interested information, text data and image data which do not reach standards are screened out by the screening module, then an evaluation coefficient is established based on the data screening information, and whether the data need to be displayed by the display module is judged according to a comparison result of the evaluation coefficient and an evaluation threshold value, so that the waste of display resources is avoided, and the display cost of the data is reduced.
Example 2: after the screening module acquires text data and image data, the text data and the image data which do not reach the standard are screened, the rest text data and the image data are sent to a natural language analysis module and an image analysis module, and data screening information is sent to a judging module;
The screening module screens the text data which does not reach the standard, and the method comprises the following steps:
the method comprises the steps of obtaining grammar error rate, word vector similarity, structural integrity and text repetition of text data, and respectively carrying out normalization processing on the grammar error rate, the word vector similarity, the structural integrity and the text repetition to obtain normalization values of the grammar error rate, the word vector similarity, the structural integrity and the text repetition;
the normalized value acquisition logic of the syntax error rate is: when the grammar error rate is larger than the error threshold, the normalized value of the grammar error rate is 1, and when the grammar error rate is smaller than or equal to the error threshold, the normalized value of the grammar error rate is 0;
the normalized value acquisition logic of the word vector similarity is as follows: when the word vector similarity is greater than the similarity threshold, the normalized value of the word vector similarity is 1, and when the word vector similarity is less than or equal to the similarity threshold, the normalized value of the word vector similarity is 0;
the normalized value acquisition logic for structural integrity is: when the structural integrity is greater than or equal to the integrity threshold, the normalized value of the structural integrity is 1, and when the structural integrity is less than the integrity threshold, the normalized value of the structural integrity is 0;
The normalized value acquisition logic of the text repetition degree is as follows: when the text repetition degree is larger than the repetition threshold, the normalized value of the text repetition degree is 1, and when the text repetition degree is smaller than or equal to the repetition threshold, the normalized value of the text repetition degree is 0;
grammar error rate = grammar error number/total word number, the number of errors is the number of grammar errors detected in the text, the total word number is the total word number in the text, when the grammar error rate is greater than a set error threshold, the text data is indicated to be not up to standard;
the word vector similarity calculation expression is:the method comprises the steps of carrying out a first treatment on the surface of the In (1) the->For the word vector similarity degree,represents the dot product of vector A and vector B, < >>And->Representing norms (euclidean distances) of the vector a and the vector B, respectively;
euclidean distance = sqrt ((A1-B1) 2 +(A2-B2) 2 +...+(An-Bn) 2 ) Wherein A1, A2,) An and B1, B2,) Bn represent the values of the corresponding dimensions in vector a and vector B, respectively;
the greater the word vector similarity, the higher the semantic similarity between the two words, specifically, when the word vector similarity of the two words is close to 1, it means that the two words are very similar in terms of meaning, context or usage, and there may be a high overlap in terms of meaning, context or usage, so that text data is blurred, and when the word vector similarity is greater than a similarity threshold, it is indicated that the text data does not reach the standard.
Structural integrity = (number of texts containing event information/total number of texts) (number of texts containing time information/total number of texts) (number of texts containing location information/total number of texts), wherein the number of texts containing event information represents the number of texts containing event information, and the total number of texts represents the total number of texts, and the product of the ratios is calculated to obtain structural integrity, the greater the structural integrity is, the better the integrity of the text data is, and when the structural integrity is greater than or equal to a integrity threshold value, the text data is up to standard.
Text repetition = number of repeated words or phrases/total number of words, wherein the number of repeated words or phrases is the number of repeated words or phrases detected in the text, the total number of words is the total number of words in the text, the greater the text repetition indicates a high repetition rate between text sentences, and when the text repetition is greater than a repetition threshold, the text data is indicated to reach standards.
The application comprehensively calculates the grammar error rate normalization value, the word vector similarity normalization value, the structural integrity normalization value and the text repetition normalization value to obtain a text screening value, wherein the calculation expression is as follows: The method comprises the steps of carrying out a first treatment on the surface of the In (1) the->For text screen value +_>Is attributed to structural integrityNumerical value of->Normalize the values for syntax error rate,/">Normalizing the value for word vector similarity, +.>Normalizing the numerical value for the text repetition degree;
when text screening value of text dataWhen the text data is not less than 1, the screening module does not screen the text data, and when the text screening value of the text data is +.>When < 1, the screening module does not screen the text data.
The screening module screens out the image data which does not reach the standard, and the method comprises the following steps:
acquiring a definition index, a contrast ratio, a noise level and an exposure degree in image data, and carrying out normalization processing on the definition index, the contrast ratio, the noise level and the exposure degree to acquire normalized values of the definition index, the contrast ratio, the noise level and the exposure degree;
the normalized value acquisition logic of the sharpness index is: when the definition index is larger than or equal to the definition threshold, the normalized value of the definition index is 1, and when the definition index is smaller than the definition threshold, the normalized value of the definition index is 0;
the normalized value acquisition logic of contrast is: when the contrast is not in the contrast threshold range, the normalized value of the contrast is 1, and when the contrast is in the contrast threshold range, the normalized value of the contrast is 0;
The normalized value acquisition logic for the noise level is: when the noise level is greater than the noise threshold, the normalized value of the noise level is 1, and when the noise level is less than or equal to the noise threshold, the normalized value of the noise level is 0;
the normalized value acquisition logic for exposure is: when the exposure is not in the exposure threshold range, the normalized value of the exposure is 1, and when the exposure is in the exposure threshold range, the normalized value of the exposure is 0;
the calculated expression of the sharpness index is: ci=10×log10 ((max_gradient) 2 ) MSE), where CI is the sharpness index, max_gradient represents the maximum gradient value in the image, MSE represents the mean square error, MSE= (1/N) = (I1 (I, j) -I2 (I, j) 2 ]Wherein I1 (I, j) represents the pixel value of the original image, I2 (I, j) represents the pixel value of the processed image, (I, j) represents the coordinates of the pixels, N represents the total number of pixels, Σ represents the summation of all pixels;
the higher the sharpness index of the image data, which generally indicates the higher the sharpness of the image, the sharpness index is an index for evaluating the sharpness of the image, and measures the sharpness of the image based on the gradient information, particularly the edge information, of the image, and the higher the sharpness index means that the edge of the image is clearer, the more prominent the details are, the better the overall image quality is, and when the sharpness index is greater than or equal to the sharpness threshold, the image data reaches the standard.
The computational expression for contrast is: contrast= (Max-Min)/(max+min), wherein Contrast is Contrast, max represents the maximum pixel value of the image, min represents the minimum pixel value of the image;
when the contrast is not in the contrast threshold range, if the contrast is too large, the shadow part in the image is too dark, the highlight part is too bright, details are possibly lost, a part of information of the image is lost, if the contrast is too small, details in the image are blurred, clear boundaries and detail information of the image are lacked, and when the contrast is in the contrast threshold range, the image data reach the standard.
The calculated expression of the noise level is: noise=sqrt (mean ((XS-mean (XS)) 2 ) Where Noise is the Noise level, XS represents the pixel value of the image, mean represents the average of the pixel values, sqrt represents the square root;
the larger the noise level of the image data, the lower the quality of the image is usually indicated or the image is greatly disturbed, the noise refers to random or non-random interference signals in the image, when the noise level of the image data is larger, the image in the display system presents more noise points, granular noise or distortion phenomena, which can lead to the degradation of the definition of the image, the blurring of details, the influence on the image quality and the deterioration of ornamental effect, and when the noise level is larger than a noise threshold value, the image data does not reach the standard.
The calculated expression of the exposure is: exposure= (1/M) Σzi, where Exposure represents Exposure, M represents the total number of pixels in the image, Σzi represents the sum of the luminance values of all pixels in the image; when the exposure degree is not in the exposure threshold range, if the exposure degree is too large, the detail part in the image is overexposed, the detail information is lost, the image is flattened and lacks texture, the detail is lost in the highlight part in the image, the overexposure effect is presented, the detail is difficult to identify by a viewer and different brightness levels are distinguished, if the exposure degree is too small, the detail is lost in the dark part in the image, the detail and the definition of the image are lacking, the understanding and appreciation of the image content are influenced by the viewer, and when the exposure degree is in the exposure threshold range, the image data reach the standard.
The application comprehensively calculates the definition index normalization value, the contrast normalization value, the noise level normalization value and the exposure degree normalization value to obtain an image screening value, wherein the calculation expression is as follows:the method comprises the steps of carrying out a first treatment on the surface of the In (1) the->Value for image screening ∈ ->Normalized value for contrast +.>Normalize the values for noise level, +.>Normalizing the value for exposure, +.>Normalizing the value for the sharpness index;
When the image data is the image screening valueThe screening module screens out the image data when > 1, and the image screening value of the image data +.>And when the image data is less than or equal to 1, the screening module does not screen the image data.
According to the application, the text data and the image data are respectively and independently judged through the screening module, so that the text data or the image data which does not reach the standard are screened out in advance, the processing amount of the data is reduced, and the processing efficiency of the exhibition system on the data is effectively improved.
Example 3: the judging module establishes an evaluation coefficient based on the data screening information, judges whether the data need to be displayed by the display module according to the comparison result of the evaluation coefficient and the evaluation threshold value, and sends a display instruction to the display module if the data need to be displayed by the display module;
the judging module establishes an evaluation coefficient based on the data screening information and comprises the following steps:
the judging module carries out comprehensive calculation on the text data and the image data to establish an evaluation coefficientThe expression is calculated: />The method comprises the steps of carrying out a first treatment on the surface of the In (1) the->For text screen value +_>Value for image screening ∈ ->、/>The scaling coefficients of the text and image screening values, respectively, and +.>Normalized values for structural integrity, < >>Normalize the values for syntax error rate,/" >Normalizing the value for word vector similarity, +.>Normalize values for text repetition, +.>Normalized value for contrast +.>Normalize the values for noise level, +.>Normalizing the value for exposure, +.>Values were normalized for sharpness index.
After the evaluation coefficient is obtained, comparing the evaluation coefficient with an evaluation threshold, and if the evaluation coefficient is greater than or equal to the evaluation threshold, judging that the multisource data of the same batch need to be displayed by the display module by the judging module; if the evaluation coefficient is smaller than the evaluation threshold, the judging module judges that the multisource data of the same batch does not need to be displayed through the display module.
According to the application, the text data and the image data are comprehensively calculated through the judging module, the evaluation coefficient is established, so that the multi-source data are comprehensively analyzed, the evaluation coefficient is compared with the evaluation threshold after being obtained, and if the evaluation coefficient is greater than or equal to the evaluation threshold, the judging module judges that the multi-source data in the same batch need to be displayed through the display module; if the evaluation coefficient is smaller than the evaluation threshold, the judging module judges that the multisource data of the same batch does not need to be displayed through the display module, so that waste of display resources is avoided.
The above formulas are all formulas with dimensions removed and numerical values calculated, the formulas are formulas with a large amount of data collected for software simulation to obtain the latest real situation, and preset parameters in the formulas are set by those skilled in the art according to the actual situation.
In the description of the present specification, the descriptions of the terms "one embodiment," "example," "specific example," and the like, mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The preferred embodiments of the invention disclosed above are intended only to assist in the explanation of the invention. The preferred embodiments are not intended to be exhaustive or to limit the invention to the precise form disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best understand and utilize the invention. The invention is limited only by the claims and the full scope and equivalents thereof.

Claims (9)

1. Multisource information exhibition system based on generation type AI cognitive model, its characterized in that: the system comprises a data grabbing module, an intelligent recommending module, a screening module, a judging module, a natural language analyzing module, an image analyzing module, a storage module and an exhibition module:
And the data grabbing module is used for: acquiring data from a plurality of information sources;
and the intelligent recommendation module: based on multisource data information queried by a user, intelligent searching and recommending are performed by using a generated AI model;
and (5) screening out a module: after text data and image data are acquired, the text data and the image data which do not reach standards are screened out;
and a judging module: establishing an evaluation coefficient based on the data screening information, and judging whether the data need to be displayed by the display module or not according to the comparison result of the evaluation coefficient and the evaluation threshold;
and a natural language analysis module: analyzing text data in the multi-source data;
and an image analysis module: analyzing image data in the multi-source data;
and a storage module: uploading the parsed text data and the parsed image data to a cloud platform for storage;
and an exhibition module: displaying the screened residual text data and image data to a user;
the judging module carries out comprehensive calculation on the text data and the image data to establish an evaluation coefficientThe expression is calculated:
in the method, in the process of the invention,for text screen value +_>Value for image screening ∈ ->The scaling coefficients of the text and image screening values, respectively, and +.>Normalized values for structural integrity, < >>Normalize the values for syntax error rate,/" >Normalizing the value for word vector similarity, +.>The values are normalized for the degree of text repetition,normalized value for contrast +.>Normalize the values for noise level, +.>Normalizing the value for exposure, +.>Normalizing the value for the sharpness index;
after the evaluation coefficient is obtained, comparing the evaluation coefficient with an evaluation threshold, and if the evaluation coefficient is greater than or equal to the evaluation threshold, judging that the multisource data of the same batch need to be displayed by the display module by the judging module; if the evaluation coefficient is smaller than the evaluation threshold, the judging module judges that the multisource data of the same batch does not need to be displayed through the display module.
2. The generated AI-cognitive model-based multi-source information display system of claim 1, wherein: the screening module acquires the grammar error rate, the word vector similarity, the structural integrity and the text repetition of the text data, and acquires the normalized values of the grammar error rate, the word vector similarity, the structural integrity and the text repetition.
3. The generated AI-cognitive model-based multi-source information display system of claim 2, wherein: the normalized value acquisition logic of the grammar error rate is as follows: when the grammar error rate is larger than the error threshold, the normalized value of the grammar error rate is 1, and when the grammar error rate is smaller than or equal to the error threshold, the normalized value of the grammar error rate is 0;
The normalized value acquisition logic of the word vector similarity is as follows: when the word vector similarity is greater than the similarity threshold, the normalized value of the word vector similarity is 1, and when the word vector similarity is less than or equal to the similarity threshold, the normalized value of the word vector similarity is 0;
the normalized value acquisition logic for the structural integrity is: when the structural integrity is greater than or equal to the integrity threshold, the normalized value of the structural integrity is 1, and when the structural integrity is less than the integrity threshold, the normalized value of the structural integrity is 0;
the normalized value acquisition logic of the text repetition degree is as follows: when the text repetition degree is larger than the repetition threshold, the normalized value of the text repetition degree is 1, and when the text repetition degree is smaller than or equal to the repetition threshold, the normalized value of the text repetition degree is 0.
4. The generated AI-cognitive model-based multi-source information display system of claim 3, wherein: the grammar error rate=grammar error number/total word number, the error number is the number of grammar errors detected in the text, and the total word number is the total word number in the text;
the word vector similarity calculation expression is as follows:the method comprises the steps of carrying out a first treatment on the surface of the In (1) the->For the word vector similarity degree, Represents the dot product of vector A and vector B, < >>And->The euclidean distance for vector a and vector B, respectively, euclidean distance = sqrt ((A1-B1) 2 +(A2-B2) 2 +...+(An-Bn) 2 ) Wherein A1, A2,) An and B1, B2,) Bn represent the values of the corresponding dimensions in vector a and vector B, respectively;
the structural integrity = (number of texts containing event information/total number of texts) (number of texts containing time information/total number of texts) (number of texts containing location information/total number of texts), the number of texts containing event information representing the number of texts containing event information, and the total number of texts representing the total number of texts;
the text repetition = number of repeated words or phrases/total number of words, wherein the number of repeated words or phrases is the number of repeated words or phrases detected in the text and the total number of words is the total number of words in the text.
5. The generated AI-cognitive model-based multi-source information display system of claim 4, wherein: the screening module comprehensively calculates the grammar error rate normalization value, the word vector similarity normalization value, the structural integrity normalization value and the text repetition normalization value to obtain a text screening value, wherein the calculation expression is as follows:
In the method, in the process of the invention,for text screen value +_>Normalized values for structural integrity, < >>Normalize the values for syntax error rate,/">Normalizing the value for word vector similarity, +.>Normalizing the numerical value for the text repetition degree;
when text screening value of text dataWhen the text data is not less than 1, the screening module does not screen the text data, and when the text screening value of the text data is +.>When < 1, the screening module does not screen the text data.
6. The generated AI-cognitive model-based multi-source information display system of claim 5, wherein: the screening module obtains a sharpness index, a contrast, a noise level and an exposure degree in the image data, and obtains normalized values of the sharpness index, the contrast, the noise level and the exposure degree.
7. The generated AI-cognitive model-based multi-source information display system of claim 6, wherein: the normalized value acquisition logic of the definition index is as follows: when the definition index is larger than or equal to the definition threshold, the normalized value of the definition index is 1, and when the definition index is smaller than the definition threshold, the normalized value of the definition index is 0;
the normalized value acquisition logic of the contrast ratio is as follows: when the contrast is not in the contrast threshold range, the normalized value of the contrast is 1, and when the contrast is in the contrast threshold range, the normalized value of the contrast is 0;
The normalized value acquisition logic of the noise level is: when the noise level is greater than the noise threshold, the normalized value of the noise level is 1, and when the noise level is less than or equal to the noise threshold, the normalized value of the noise level is 0;
the normalized value acquisition logic of the exposure is: the normalized value of the exposure is 1 when the exposure is not in the exposure threshold range, and 0 when the exposure is in the exposure threshold range.
8. The generated AI-cognitive model-based multi-source information display system of claim 7, wherein: the calculation expression of the definition index is as follows: ci=10×log10 ((max_gradient) 2 ) MSE), where CI is the sharpness index, max_gradient represents the maximum gradient value in the image, MSE represents the mean square error, MSE= (1/N) = (I1 (I, j) -I2 (I, j) 2 ]Wherein I1 (I, j) represents the pixel value of the original image, I2 (I, j) represents the pixel value of the processed image, (I, j) represents the coordinates of the pixels, N represents the total number of pixels, Σ represents the summation of all pixels;
the calculation expression of the contrast is as follows: contrast= (Max-Min)/(max+min), wherein Contrast is Contrast, max represents the maximum pixel value of the image, min represents the minimum pixel value of the image;
The noise level is calculated as: noise=sqrt (mean ((XS-mean (XS)) 2 ) Where Noise is the Noise level, XS represents the pixel value of the image, mean represents the average of the pixel values;
the calculated expression of the exposure is: exposure= (1/M) ΣZI, where Exposure represents Exposure, M represents the total number of pixels in the image, and ΣZI represents the sum of the luminance values of all pixels in the image.
9. The generated AI-cognitive model-based multi-source information display system of claim 8, wherein: the screening module comprehensively calculates the definition index normalization value, the contrast normalization value, the noise level normalization value and the exposure degree normalization value to obtain an image screening value, and the calculation expression is as follows:
the method comprises the steps of carrying out a first treatment on the surface of the In (1) the->Value for image screening ∈ ->For the contrast ratio to be normalized to a value,normalize the values for noise level, +.>Normalizing the value for exposure, +.>Normalizing the value for the sharpness index;
when the image data is the image screening valueThe screening module screens out the image data when > 1, and the image screening value of the image data +.>And when the image data is less than or equal to 1, the screening module does not screen the image data.
CN202310840885.5A 2023-07-11 2023-07-11 Multisource information exhibition system based on generated AI cognitive model Active CN116578763B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310840885.5A CN116578763B (en) 2023-07-11 2023-07-11 Multisource information exhibition system based on generated AI cognitive model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310840885.5A CN116578763B (en) 2023-07-11 2023-07-11 Multisource information exhibition system based on generated AI cognitive model

Publications (2)

Publication Number Publication Date
CN116578763A CN116578763A (en) 2023-08-11
CN116578763B true CN116578763B (en) 2023-09-15

Family

ID=87536191

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310840885.5A Active CN116578763B (en) 2023-07-11 2023-07-11 Multisource information exhibition system based on generated AI cognitive model

Country Status (1)

Country Link
CN (1) CN116578763B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104123363A (en) * 2014-07-21 2014-10-29 北京奇虎科技有限公司 Method and device for extracting main image of webpage
CN104318562A (en) * 2014-10-22 2015-01-28 百度在线网络技术(北京)有限公司 Method and device for confirming quality of internet images
CN107103084A (en) * 2017-04-27 2017-08-29 厦门大学 A kind of gradual parallel image search method of quality assurance
CN112597116A (en) * 2020-12-23 2021-04-02 中国电子信息产业集团有限公司第六研究所 Document sharing intelligent management system under autonomous controllable platform
CN112749813A (en) * 2020-10-29 2021-05-04 广东电网有限责任公司 Data processing system, method, electronic equipment and storage medium
CN113392206A (en) * 2021-06-17 2021-09-14 李元烈 Intelligent editing method for popular culture hot content
CN113610862A (en) * 2021-07-22 2021-11-05 东华理工大学 Screen content image quality evaluation method
WO2022016561A1 (en) * 2020-07-22 2022-01-27 江苏宏创信息科技有限公司 Ai modeling system and method for policy profiling based on big data
CN114155223A (en) * 2021-11-18 2022-03-08 重庆大学 Image definition screening method and system based on directed distance
CN114299294A (en) * 2021-11-15 2022-04-08 北京小来无限科技有限公司 Prediction method, recommendation method and related equipment thereof
CN114419008A (en) * 2022-01-24 2022-04-29 北京译图智讯科技有限公司 Image quality evaluation method and system
CN114818691A (en) * 2021-01-29 2022-07-29 腾讯科技(深圳)有限公司 Article content evaluation method, device, equipment and medium

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104123363A (en) * 2014-07-21 2014-10-29 北京奇虎科技有限公司 Method and device for extracting main image of webpage
CN104318562A (en) * 2014-10-22 2015-01-28 百度在线网络技术(北京)有限公司 Method and device for confirming quality of internet images
CN107103084A (en) * 2017-04-27 2017-08-29 厦门大学 A kind of gradual parallel image search method of quality assurance
WO2022016561A1 (en) * 2020-07-22 2022-01-27 江苏宏创信息科技有限公司 Ai modeling system and method for policy profiling based on big data
CN112749813A (en) * 2020-10-29 2021-05-04 广东电网有限责任公司 Data processing system, method, electronic equipment and storage medium
CN112597116A (en) * 2020-12-23 2021-04-02 中国电子信息产业集团有限公司第六研究所 Document sharing intelligent management system under autonomous controllable platform
CN114818691A (en) * 2021-01-29 2022-07-29 腾讯科技(深圳)有限公司 Article content evaluation method, device, equipment and medium
CN113392206A (en) * 2021-06-17 2021-09-14 李元烈 Intelligent editing method for popular culture hot content
CN113610862A (en) * 2021-07-22 2021-11-05 东华理工大学 Screen content image quality evaluation method
CN114299294A (en) * 2021-11-15 2022-04-08 北京小来无限科技有限公司 Prediction method, recommendation method and related equipment thereof
CN114155223A (en) * 2021-11-18 2022-03-08 重庆大学 Image definition screening method and system based on directed distance
CN114419008A (en) * 2022-01-24 2022-04-29 北京译图智讯科技有限公司 Image quality evaluation method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
全媒体内容质量评价研究综述;颜成钢等;信号处理;第38卷(第6期);第1111-1137页 *

Also Published As

Publication number Publication date
CN116578763A (en) 2023-08-11

Similar Documents

Publication Publication Date Title
CN110750656B (en) Multimedia detection method based on knowledge graph
JP5782404B2 (en) Image quality evaluation
CN108509436B (en) Method and device for determining recommended object and computer storage medium
US20150066934A1 (en) Automatic classification of segmented portions of web pages
US20080075360A1 (en) Extracting dominant colors from images using classification techniques
AU2015310494A1 (en) Sentiment rating system and method
CN113779308B (en) Short video detection and multi-classification method, device and storage medium
US20120030711A1 (en) Method or system to predict media content preferences
JP2008084151A (en) Information display device and information display method
CN110991403A (en) Document information fragmentation extraction method based on visual deep learning
Almjawel et al. Sentiment analysis and visualization of amazon books' reviews
CN106815253B (en) Mining method based on mixed data type data
CN115580758A (en) Video content generation method and device, electronic equipment and storage medium
US10963690B2 (en) Method for identifying main picture in web page
CN116578763B (en) Multisource information exhibition system based on generated AI cognitive model
CN115168637B (en) Method, system and storage medium for adding label to picture
US20230067628A1 (en) Systems and methods for automatically detecting and ameliorating bias in social multimedia
Wasielewski Authenticity and the Poor Image in the Age of Deep Learning
CN114579876A (en) False information detection method, device, equipment and medium
CN114818639A (en) Presentation generation method, device, equipment and storage medium
CN113869803A (en) Enterprise sensitive information risk assessment method, system and storage medium
Madan et al. Parsing and summarizing infographics with synthetically trained icon detection
WO2022031283A1 (en) Video stream content
CN111062435A (en) Image analysis method and device and electronic equipment
CN111193795A (en) Information pushing method and device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant