CN111160453B - Information processing method, equipment and computer readable storage medium - Google Patents

Information processing method, equipment and computer readable storage medium

Info

Publication number: CN111160453B
Application number: CN201911378909.XA
Authority: CN (China)
Prior art keywords: image, information, neural network, category, artificial neural
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN111160453A
Inventors: 李睿易, 杜杨洲
Current Assignee: Lenovo Beijing Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Lenovo Beijing Ltd

Events:
Application filed by Lenovo Beijing Ltd
Priority to CN201911378909.XA
Publication of CN111160453A
Application granted
Publication of CN111160453B
Anticipated expiration


Classifications

    • G PHYSICS → G06 COMPUTING; CALCULATING OR COUNTING → G06F ELECTRIC DIGITAL DATA PROCESSING → G06F18/00 Pattern recognition → G06F18/20 Analysing → G06F18/24 Classification techniques → G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS → G06 COMPUTING; CALCULATING OR COUNTING → G06F ELECTRIC DIGITAL DATA PROCESSING → G06F18/00 Pattern recognition → G06F18/20 Analysing → G06F18/22 Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the present application discloses an information processing method, including: acquiring a first image; inputting the first image into a trained artificial neural network model to obtain first category information, the first category information representing the category to which the first image belongs; acquiring a second image; inputting the second image into the trained artificial neural network model to obtain second category information; and obtaining first difference information based on the first category information and the second category information, the first difference information representing the difference between the first category information and the second category information. Embodiments of the present application also disclose an information processing device and a computer-readable storage medium.

Description

Information processing method, equipment and computer readable storage medium
Technical Field
The present invention relates to the field of mobile electronic devices, and in particular to an information processing method, an information processing device, and a computer-readable storage medium.
Background
Because implementing child education through applications on electronic devices offers good flexibility and great convenience, more and more families choose to install various applications on their electronic devices to provide home education for children, especially young children. However, existing applications of all kinds manage and display only their own preset data according to categories fixed by the application; for data that the user inputs through other channels, which interests the child at the current moment but does not belong to the application's fixed categories, the application can perform neither classification nor category-difference analysis.
Disclosure of Invention
In view of this, embodiments of the present application are expected to provide an information processing method, device, and computer-readable storage medium that solve the problem that an application in the related art cannot classify data outside its preset categories or analyze the category differences.
In order to achieve the above purpose, the technical solution of the present application is implemented as follows:
an information processing method, the method comprising:
acquiring a first image;
inputting the first image into a trained artificial neural network model to obtain first category information, the first category information representing the category to which the first image belongs;
acquiring a second image;
inputting the second image into the trained artificial neural network model to obtain second category information; and
obtaining first difference information based on the first category information and the second category information, the first difference information representing the difference between the first category information and the second category information.
Optionally, the method further comprises:
acquiring image sample data; and
adjusting parameters of the artificial neural network model based on the image sample data until the parameters of the artificial neural network model meet a training end condition, to obtain the trained artificial neural network model.
Optionally, the method further comprises:
determining a parameter training rule;
determining a training end condition based on the training rule;
acquiring a plurality of third images; and
adjusting parameters of the artificial neural network model based on the parameter training rule and the plurality of third images until the parameters of the artificial neural network model meet the training end condition, to obtain the trained artificial neural network model.
Optionally, the inputting the first image into the trained artificial neural network model to obtain the first category information includes:
acquiring feature dimension information;
inputting the first image into the trained artificial neural network model; and
processing the first image using the trained artificial neural network model based on the feature dimension information to obtain the first category information.
Optionally, the processing the first image using the trained artificial neural network model based on the feature dimension information to obtain the first category information includes:
processing the first image using the trained artificial neural network model to obtain third category information; and
determining the first category information based on the feature dimension information and the third category information.
Optionally, the acquiring the second image includes:
acquiring the second image based on historical operation information of the user, and/or
acquiring the second image based on the first category information.
Optionally, the acquiring the second image based on the first category information includes:
acquiring an image association parameter;
determining target feature information based on the image association parameter and the first category information; and
determining the second image based on the target feature information.
Optionally, the method further comprises:
performing an editing operation on the first image and/or the second image to obtain a third image; and
inputting the third image into the trained artificial neural network model to obtain fourth category information.
An information processing apparatus, the apparatus comprising: a processor, a memory, and a communication bus;
the communication bus is used for realizing communication connection between the processor and the memory;
the processor is used for executing a program of a data reading method in the memory to realize the following steps:
acquiring a first image;
inputting the first image into a trained artificial neural network model to obtain first category information, the first category information representing the category to which the first image belongs;
acquiring a second image;
inputting the second image into the trained artificial neural network model to obtain second category information; and
obtaining first difference information based on the first category information and the second category information, the first difference information representing the difference between the first category information and the second category information.
A computer-readable storage medium storing one or more programs executable by one or more processors to implement the steps of any of the information processing methods described above.
According to the information processing method provided by the embodiments of the present application, a first image is acquired and input into a trained artificial neural network model to obtain first category information; a second image is acquired and input into the trained artificial neural network model to obtain second category information; and first difference information is obtained based on the first category information and the second category information. The method thus makes full use of the trained model's ability to accurately identify and classify both the first image and the second image, and on that basis obtains the first difference information describing the difference between the first and second category information.
Drawings
FIG. 1 is a schematic diagram of an application classifying images in the related art;
FIG. 2 is a flowchart of a first information processing method according to an embodiment of the present invention;
FIG. 3 is a flowchart of a second information processing method according to an embodiment of the present invention;
FIG. 4 is a flowchart of a third information processing method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of editing a first image and a second image in an information processing method according to an embodiment of the present invention;
FIG. 6 is a flowchart of a specific implementation of an information processing method according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of acquiring interpretation information in the information processing method according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of an information processing device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
With the development of networks, the growing capability of terminals, and the quickening pace of life, more and more households are adopting applications on terminals to provide home education, particularly for young children.
In the current application market, applications related to early-childhood education are diverse, and they share some common features: they can classify and manage the data built into the application, and manage and display the data corresponding to the categories preset in the application. As shown in FIG. 1, an application may help children recognize animals: categories such as rabbit, butterfly, bee, horse, dog, dolphin, and monkey are stored in the application, and animals within these categories can generally be identified; but animals outside them, such as tigers and pangolins, cannot be identified or classified, and even similar-looking animals are confused, for example the dog in the right half of FIG. 1 may be identified as a monkey, a dolphin, a horse, or a dog.
Applications of this kind cannot classify and manage data such as pictures that young children input themselves, in particular pictures that do not match any category preset in the application, and they certainly cannot generalize and summarize the category-difference information between such pictures and the application's preset categories.
Based on this, an embodiment of the present application provides an information processing method implemented by an information processing device; as shown in FIG. 2, the information processing method includes the following steps:
Step 101, acquiring a first image.
In step 101, the first image may be an image recognizable by the electronic device.
The electronic device may be a mobile electronic device, such as a smart phone, a notebook computer, etc.; the electronic device may also be a smart television.
In one embodiment, the first image may be an image stored by the electronic device itself, for example, an image stored in a file management system of the electronic device.
In one embodiment, the first image may be an image stored in a database of the electronic device.
In one embodiment, the first image may be an image acquired by an image acquisition device of the electronic device. Such as randomly captured images using a camera of an electronic device.
In one embodiment, the first image may be a screen capturing image obtained by capturing a video in a playing state when the electronic device plays the video.
In one embodiment, the first image may be an image with a first target object. Wherein the first target object may be a dog, or a mountain, or a flower, and the first image may be an image with a dog, or a mountain, or a flower, respectively.
In one embodiment, the first image may be an image with N target objects, where N is an integer greater than 2, for example a first target object, a second target object, and so on up to an Nth target object.
In one embodiment, the first image may be an image that is not classified by an application of the electronic device.
In one embodiment, the first image may be an image that does not match any of the classification categories preset in the application of the electronic device.
Step 102, inputting the first image into a trained Artificial Neural Network (ANN) model to obtain first category information.
The first category information is used for representing information of a category to which the first image belongs.
In step 102, the ANN model is a mathematical model that, taking network topology as its theoretical basis and starting from the basic principles of biological neural networks, abstracts the structure of the human brain and its response mechanism to external stimuli, and simulates the way the human nervous system processes complex information. The ANN model features parallel distributed processing, high fault tolerance, intelligence, and self-learning; it combines the processing and storage of information, and its unique knowledge representation and adaptive learning capability have drawn attention in many disciplines. It is in fact a complex network of a large number of simple interconnected elements, highly nonlinear, and capable of complex logical operations and nonlinear relationships.
An ANN model is an operational model composed of a large number of interconnected nodes (neurons). Each node represents a specific output function, called an activation function. Each connection between two nodes carries a weight for the signal passing through it; in this way the neural network simulates human memory. The output of the network depends on its structure, the way it is connected, its weights, and its activation functions. The network itself is usually an approximation of some algorithm or function in nature, or an expression of a logical policy. The design of neural networks was inspired by the workings of biological neural networks: artificial neural networks combine knowledge of biological neural networks with mathematical-statistical models and tools. In the field of artificial perception within artificial intelligence, statistical methods give a neural network human-like decision-making capability and simple judgment capability, a further extension of traditional logical calculus.
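For illustration only (not part of the patent), the following minimal Python sketch shows the node model just described: a weighted sum of inputs passed through an activation function. All names and values are hypothetical.

```python
# One node of a neural network: weighted sum of inputs + activation function.
import math

def neuron(inputs, weights, bias):
    """Compute a node's output: weighted sum followed by a sigmoid activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation function

# Example: a 2-input neuron. The weights determine the output, which is why
# training (adjusting weights) is what changes the network's behavior.
print(neuron([0.5, -1.2], [0.8, 0.3], bias=0.1))
```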
The ANN model has the following basic characteristics:
High parallelism: an artificial neural network is a parallel combination of many identical simple processing units. Although each neuron's function is simple, the parallel processing power of a large number of simple neurons is striking. Like the human brain, the network is parallel not only in structure but also in processing order: the processing units within one layer operate simultaneously, i.e., the network's computation is distributed over many processing units, whereas a conventional computer usually has a single processing unit working serially.
Highly nonlinear global behavior: each neuron receives inputs from a large number of other neurons and produces output through the parallel network, in turn influencing other neurons. This mutual constraint and influence realizes a nonlinear mapping from the input state to the output state space; globally, the network's behavior is not a superposition of local behaviors but exhibits collective behavior.
Associative memory and good fault tolerance: through its network structure, an artificial neural network stores processed information in the weights between neurons, giving it an associative memory function. Because the stored content cannot be read from a single weight, storage is distributed, which makes the network fault tolerant: it can perform pattern processing such as feature extraction, defect-pattern restoration, and cluster analysis, as well as pattern association, classification, and recognition, and it can learn from imperfect data and graphics and make decisions. Since knowledge resides in the whole system rather than in a single storage unit, taking a proportion of nodes out of operation does not greatly affect overall performance; the network can handle noisy or incomplete data, generalizes well, and has strong fault tolerance.
Good adaptivity and self-learning: an artificial neural network obtains its weights and structure through learning and training, and so has strong self-learning capability and adaptability to its environment. Its self-learning process imitates the human visual way of thinking, which is non-logical and non-linguistic, completely different from traditional symbolic logic. Adaptivity means the network finds the inherent relationship between input and output by learning and training on the provided data, rather than relying on empirical knowledge and rules about the problem; this helps reduce the human factor in determining the weights.
Distributed storage of knowledge: in a neural network, knowledge is not stored in a specific memory unit but is distributed throughout the system; storing multiple pieces of knowledge requires many connections.
Non-convexity: under certain conditions, the evolution direction of a system depends on a particular state function, for example an energy function whose extrema correspond to relatively stable states of the system. Non-convexity means such a function has multiple extrema, so the system has multiple stable equilibrium states, which leads to diverse system evolution.
The ANN model also has the following intelligent characteristics:
associative memory function: because the neural network has the capability of storing information in a distributed manner and calculating in parallel, the neural network has the capability of associatively memorizing external stimulus and input information.
Classification and identification functions: the neural network has strong recognition and classification capability on the external input samples. Classification of input samples is actually to find out the partitioned areas meeting the classification requirement in the sample space, and the samples in each area belong to one class.
Optimizing the calculation function: optimization computation refers to finding a set of combinations of parameters under known constraints to minimize the objective function determined by the combination.
Nonlinear mapping function: a well-designed ANN model can, in theory, approximate any complex nonlinear function to any precision by training and learning on the system's input and output samples. This superior capability makes the neural network a general-purpose mathematical model of multidimensional nonlinear functions.
The transfer functions of an ANN model's neurons are fixed when the model is constructed and cannot be changed during learning. Therefore, to change the model's output one can only change the input to the weighted sum; and since neurons respond only to their input signals, the weighted input can only be changed by modifying the weight parameters of the network's neurons. Training an ANN model is thus the process of changing its weight matrix.
Training of the ANN model can be achieved through deep learning. Deep-learning algorithms break the traditional neural network's limitation on the number of layers, which can instead be chosen according to the designer's needs. An ANN model obtained through deep-learning training not only greatly improves image-recognition accuracy but also avoids the time-consuming work of manual feature extraction, greatly improving online operation efficiency.
In step 102, the trained ANN model may be an ANN model trained through deep learning.
In one embodiment, the trained ANN model may be an ANN model, trained through deep learning, that can identify data of interest to children.
In one embodiment, the trained ANN model may be an ANN model, trained through deep learning, that can identify target-object data in images of interest to children.
In step 102, the first category information representing the category to which the first image belongs may be the category information of a first target object in the first image, for example whether the first target object is an animal or a plant.
In one embodiment, the first category information may be the category information of at least two target objects in the first image, such as the first target object through an Mth target object, where M is an integer greater than or equal to 2.
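As a hedged illustration of steps 101 and 102, the sketch below loads an image and runs it through a trained classifier to obtain category information. The patent does not specify a framework or model; PyTorch, torchvision, and the file path are stand-ins, not the patent's implementation.

```python
# Sketch of steps 101-102: acquire a first image, classify it with a trained model.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

# Pretrained ResNet-18 as a stand-in for "the trained ANN model".
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

first_image = Image.open("first_image.jpg").convert("RGB")  # hypothetical path
with torch.no_grad():
    logits = model(preprocess(first_image).unsqueeze(0))
first_category = int(logits.argmax(dim=1))  # index of the predicted category
print(first_category)
```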
Step 103, acquiring a second image.
In step 103, the second image may be an image stored in the application.
In one embodiment, the second image may be an image that has been classified in the application.
Step 104, inputting the second image into the trained ANN model to obtain second category information.
The second category information is used for representing the category to which the second image belongs.
The second category information acquired in step 104 may be the category information of a second target object in the second image, for example whether the second target object is an animal or a plant.
In one embodiment, the second category information may be the category information of at least two target objects in the second image, such as the second target object through an Mth target object, where M is an integer greater than or equal to 2.
Step 105, obtaining the first difference information based on the first category information and the second category information.
The first difference information is used for representing the difference between the first category information and the second category information.
In step 105, the first difference information, which is used to represent the difference between the first category information and the second category information, may be a set of difference information of all corresponding information items in the first category information and the second category information.
In one embodiment, the first difference information for representing the difference between the first category information and the second category information may be a set of difference information of a part of corresponding information items in the first category information and the second category information.
In one embodiment, the first difference information for representing the difference between the first category information and the second category information may be a set of difference information of corresponding information items preset in the first category information and the second category information.
In one embodiment, the first difference information for representing the difference between the first category information and the second category information may be obtained by the information processing device displaying the first category information, receiving a selection instruction for an information item in the first category information, obtaining a first target information item list, searching for a corresponding second target information item list in the second category information based on the first target information item list, and then displaying each information item in the first target information item list and each information item in the second target information item list.
In one embodiment, the first difference information for representing the difference between the first category information and the second category information may be obtained by the information processing device displaying the second category information, receiving a selection instruction for an information item in the second category information, obtaining a second target information item list, searching for a corresponding first target information item list in the first category information based on the second target information item list, and then displaying each information item in the second target information item list and each information item in the first target information item list.
In one embodiment, the first difference information for representing the difference between the first category information and the second category information may be obtained by the information processing apparatus analyzing and summarizing each information item in the first target information item list and each information item in the second target information item list.
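The following sketch illustrates one way step 105 could work, under the assumption that category information is a mapping from information items (feature names) to values; the first difference information is then the set of corresponding items whose values differ. The data structures here are assumptions for illustration, not the patent's.

```python
# Sketch of step 105: derive first difference information from two
# category-information mappings by comparing corresponding information items.
def first_difference_info(first_category_info, second_category_info, items=None):
    """Compare corresponding information items. `items` optionally restricts
    the comparison to a preset or user-selected list of items."""
    keys = items or (first_category_info.keys() & second_category_info.keys())
    return {
        k: (first_category_info.get(k), second_category_info.get(k))
        for k in keys
        if first_category_info.get(k) != second_category_info.get(k)
    }

first = {"class": "bird", "beak": "sharp", "neck": "long", "back": "black"}
second = {"class": "bird", "beak": "hooked", "neck": "short", "back": "black"}
print(first_difference_info(first, second))
# {'beak': ('sharp', 'hooked'), 'neck': ('long', 'short')}
```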
According to the information processing method provided by the embodiments of the present application, a first image is acquired and input into a trained artificial neural network model to obtain first category information; a second image is acquired and input into the trained artificial neural network model to obtain second category information; and first difference information is obtained based on the first category information and the second category information. The method thus makes full use of the trained model's ability to accurately identify and classify both the first image and the second image, and on that basis obtains the first difference information describing the difference between the first and second category information.
Based on the foregoing embodiments, an embodiment of the present application provides an information processing method, as shown in FIG. 3, including the following steps:
Step 201, acquiring a first image.
Step 202, acquiring feature dimension information.
In step 202, feature dimension information may be used to represent target feature information entries in the category information corresponding to the first image.
In one embodiment, the feature dimension information may be used to represent an item of the feature information item in the category information corresponding to the first image.
In one embodiment, the feature dimension information may be used to represent at least two items of feature information items in the category information corresponding to the first image.
In one embodiment, the feature dimension information may be at least one item of feature information obtained by the information processing device performing preliminary recognition on the first image.
In one embodiment, the feature dimension information may be obtained by the information processing device performing preliminary recognition on the first image, presenting the multiple subject feature information items obtained, and then receiving a selection operation on those items.
In one embodiment, the feature dimension information may be a target feature information item that the information processing apparatus receives user input.
Step 203, inputting the first image into the trained ANN model.
Step 204, processing the first image using the trained ANN model based on the feature dimension information to obtain the first category information.
In step 204, the feature dimension information may be used as control information: it is input into the trained ANN model together with the first image, and controls how the model processes the first image to obtain the first category information.
In one embodiment, the feature dimension information may be input into the trained ANN model as additional information accompanying the first image, and the first image processed using the trained ANN model to obtain the first category information.
In one embodiment, step 204 may be implemented by:
processing the first image using the trained ANN model to obtain third category information; and determining the first category information based on the feature dimension information and the third category information.
Specifically, the first image may be input directly into the ANN model to obtain the third category information; then, based on the feature dimension information, the target feature information items matching the feature dimension information are selected from the third category information and summarized to obtain the first category information.
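A minimal sketch of this two-stage variant, assuming category information is a dict of feature items (an assumption, not the patent's data format): the trained model first yields the third category information, and the feature dimension information then selects the matching items to form the first category information.

```python
# Sketch: filter third category information by feature dimension information.
def filter_by_feature_dimensions(third_category_info, feature_dimensions):
    """Keep only the information items named by the feature dimension info."""
    return {k: v for k, v in third_category_info.items() if k in feature_dimensions}

third = {"class": "dog", "color": "brown", "size": "large", "habitat": "domestic"}
feature_dimensions = {"class", "color"}          # e.g. selected by the user
first_category_info = filter_by_feature_dimensions(third, feature_dimensions)
print(first_category_info)                       # {'class': 'dog', 'color': 'brown'}
```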
Step 205, acquiring a second image based on the historical operation information of the user, and/or acquiring the second image based on the first category information.
In step 205, the historical operation information of the user may be the historical operation information of the user in the current application program.
In one embodiment, the historical operating information of the user may be historical operating information of the user in any application program in the electronic device, for example, historical operating information of the user in the file manager.
In one embodiment, the historical operation information of the user may be the historical operation information of the user in any application other than the current application in the electronic device, for example an online browsing operation performed by the user in a browser.
In one embodiment, the historical operating information of the user may be historical operating information that the user performed on the electronic device for a predetermined period of time.
In one embodiment, the user's historical operating information may be a particular type of historical operating information that the user has performed on the electronic device over a certain preset period of time.
In one embodiment, the historical operation information of the user may be historical operation information of browsing pictures performed on the electronic device by the user within a certain preset period of time.
In step 205, the second image acquired based on the historical operation information of the user may be any image acquired based on the historical operation information of the user and unrelated to the first image.
In one embodiment, the second image acquired based on the historical operation information of the user may be any image acquired based on the historical operation information of the user and related to the first image.
In step 205, the second image acquired may be any image unrelated to the first image.
In one embodiment, the second image acquired may be any image related to the first image.
In one embodiment, the acquired second image may be an image stored in the current application.
Illustratively, the acquiring the second image based on the first category information in step 205 may be implemented by:
Step A1, acquiring an image association parameter.
In step A1, the image association parameter is used to indicate the degree of association with the first category information. The larger the image association parameter, the stronger the association with the first category information, i.e., the more closely the second image to be acquired is related to the first image and the closer it is to the category to which the first image belongs; conversely, the smaller the parameter, the farther the second image is from the category to which the first image belongs.
In one embodiment, the image association parameters may be preset in the current application.
In one embodiment, the image association parameters may be set by the user based on hobbies.
In one embodiment, the image association parameters may be set by the user based on the need for image recognition.
In one embodiment, the values of the image-associated parameters are adjustable.
Step A2, determining target feature information based on the image association parameter and the first category information.
In one embodiment, the target feature information in step A2 may be category information corresponding to each feature information item in the first category information obtained based on the image association parameter and the first category information.
In one embodiment, the target feature information in step A2 may be determined by determining fourth category information based on the image-related parameter and the second category information, selecting feature information items from the fourth category information, and summarizing the feature information items.
In one embodiment, step A2 may be implemented by determining fourth category information based on the image association parameter and the second category information, and receiving the target feature information determined by the user selecting the category information item in the fourth category information.
Step A3, determining the second image based on the target feature information.
In one embodiment, step A3 may be implemented by searching the database of the current application for an image matching the target feature information, to obtain the second image.
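The sketch below illustrates steps A1 to A3 under the assumption that stored images carry feature dicts and that the association parameter is a 0-to-1 overlap threshold against the first category information; the database layout and scoring rule are hypothetical, not the patent's.

```python
# Sketch of steps A1-A3: use an association parameter to pick a second image
# whose features sufficiently overlap the first category information.
def acquire_second_image(database, first_category_info, association_param):
    """Return a stored image whose feature items overlap the first category
    information by at least `association_param` (0..1); larger values demand
    images closer to the first image's category."""
    target_items = set(first_category_info.items())     # target feature information
    for record in database:                             # record: {"path": ..., "features": {...}}
        overlap = len(target_items & set(record["features"].items()))
        if target_items and overlap / len(target_items) >= association_param:
            return record["path"]
    return None

db = [
    {"path": "cat.jpg", "features": {"class": "cat", "color": "brown"}},
    {"path": "dog.jpg", "features": {"class": "dog", "color": "brown"}},
]
print(acquire_second_image(db, {"class": "dog", "color": "brown"}, 0.8))  # dog.jpg
```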
Step 206, inputting the second image into the trained ANN model to obtain the second category information.
Step 207, obtaining the first difference information based on the first category information and the second category information.
According to the information processing method provided by this embodiment of the present application, a first image and feature dimension information are acquired; the first image is input into the trained ANN model and processed based on the feature dimension information to obtain the first category information; a second image is acquired based on the user's historical operation information and/or the first category information, and input into the trained ANN model to obtain the second category information; and the first difference information is then obtained based on the first and second category information. Because the first image is processed according to the feature dimension information, the acquisition of the first category information can be flexibly adjusted; the second image is acquired from the historical operation record and/or the first category information, and the first difference information describing the difference between the first and second category information is finally obtained.
Based on the foregoing embodiments, an embodiment of the present application provides an information processing method, as shown in FIG. 4, including the following steps:
step 301, acquiring a first image.
Step 302, inputting the first image into the trained ANN model to obtain first class information.
The first category information is used for representing information of a category to which the first image belongs.
Illustratively, training of the ANN model needs to be completed prior to steps 301-302.
In the embodiment of the application, training of the ANN model can be realized through the steps B1-B2:
Step B1, acquiring image sample data.
In step B1, the acquired image sample data may be an image including various types of information.
In one embodiment, the acquired image sample data may be an image from which certain specific types of information are removed.
Step B2, adjusting parameters of the ANN model based on the image sample data until the parameters of the ANN model meet the training end condition, to obtain the trained artificial neural network model.
In step B2, the training end condition may be a preset condition.
In one embodiment, the training end condition may be an error threshold value between a preset training result and an expected result.
In one embodiment, the training end condition may be a preset error threshold for an ANN model classification error.
Specifically, the training described in step B2 is a supervised ANN training method, also called error-correction training. A training objective function, i.e., the training end condition, is set first; the network connection weights are then corrected according to the error between the ANN model's actual output and its expected output, until the model's output error is smaller than the training objective function, i.e., the training end condition is met and the model's output achieves the expected effect, finally yielding the trained ANN model.
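A hedged sketch of this error-correction (supervised) training: the weights are corrected from the error between actual and expected output until the error falls below the training end condition. The framework, architecture, and data below are illustrative stand-ins, not the patent's.

```python
# Sketch of steps B1-B2: supervised training with an error-threshold end condition.
import torch
from torch import nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy stand-in ANN
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
end_condition = 0.05                          # training end condition: error threshold

images = torch.randn(64, 1, 28, 28)           # stand-in for image sample data
labels = torch.randint(0, 10, (64,))          # expected output

for step in range(1000):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)     # error vs. expected output
    loss.backward()
    optimizer.step()                          # correct the connection weights
    if loss.item() < end_condition:           # parameters meet the end condition
        break
```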
In the embodiment of the application, training of the ANN model can also be realized through steps C1-C4:
Step C1, determining a parameter training rule.
In step C1, a parameter training rule is used to represent a rule for training an ANN model.
In one embodiment, the parameter training rules may include the expected results of the ANN model processing various types of data.
In one embodiment, the parameter training rules may include error thresholds for the ANN model when processing various types of data.
In one embodiment, the parameter training rules may include a manner of convergence of processing errors by the ANN model for various types of data processing.
Step C2, determining a training end condition based on the training rule.
In step C2, the training end condition may be an error threshold between the training result and the expected result of the ANN model.
In one embodiment, the training end condition may be an error threshold for an ANN model classification error.
Step C3, acquiring a plurality of third images.
In step C3, the plurality of third images may be a plurality of images input by the user in the current application.
In one embodiment, the plurality of third images may be a plurality of images stored in the current application.
In one embodiment, the plurality of third images may be a plurality of images of different types.
In one embodiment, the plurality of third images may be a plurality of images of the same type.
In one embodiment, the plurality of third images may be a plurality of images operated by a user in other applications of the electronic device.
Step C4, adjusting parameters of the ANN model based on the parameter training rule and the plurality of third images until the parameters of the ANN model meet the training end condition, to obtain the trained ANN model.
Specifically, the training described in step C4 is an unsupervised learning process for the ANN model. The model performs self-organizing learning on the provided samples; the learning process has no expected output, and the model responds to external stimulus patterns through mutual competition among its neurons, adjusting the network weights to adapt to the input sample data.
In practical applications, for unsupervised training the ANN model may be placed directly in the application environment, merging the training stage and the application stage into one.
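As a hedged sketch of such unsupervised, self-organizing training, the following winner-take-all competitive-learning loop adjusts only the winning neuron's weights toward each input sample, with no expected output; the sizes and data are illustrative assumptions, not the patent's.

```python
# Sketch of step C4: competitive (unsupervised) learning; neurons compete for
# each sample and the winner's weights move toward the input.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.random((5, 16))     # 5 competing neurons, 16-dimensional inputs
samples = rng.random((200, 16))   # stand-in for features of the third images
lr = 0.1

for x in samples:
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))  # competition
    weights[winner] += lr * (x - weights[winner])            # adapt to the input
```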
Step 303, acquiring a second image.
Step 304, inputting the second image into the trained ANN model to obtain the second category information.
Step 305, obtaining the first difference information based on the first category information and the second category information.
The first difference information is used for representing the difference between the first category information and the second category information.
Step 306, performing an editing operation on the first image and/or the second image to obtain a third image.
In step 306, the first image is edited to obtain a third image, which may be obtained by removing the picture information of the first area of the first image.
In one embodiment, the editing operation is performed on the first image to obtain the third image, which may be that the selecting operation is performed on the first area of the first image, and the selected image is the third image.
In one embodiment, the first image is edited to obtain a third image, and the fourth image may be selected and the first region of the first image is replaced with the fourth image to obtain the third image.
In step 306, the second image is edited to obtain a third image, which may be obtained by removing the picture information of the second area of the second image.
In one embodiment, the second image is edited to obtain the third image, which may be a selection operation of the second area of the second image, where the selected image is the third image.
In one embodiment, the second image is edited to obtain a third image, and the fourth image may be selected and the second region of the second image may be replaced with the fourth image to obtain the third image.
In step 306, an editing operation performed on both the first image and the second image to obtain a third image may proceed as follows: a first area of the first image is selected to obtain a fifth image, and a second area of the second image is selected to obtain a sixth image; the first area of the first image is then replaced with the sixth image to obtain a seventh image, and the second area of the second image is replaced with the fifth image to obtain an eighth image. The third image may be the seventh image or the eighth image.
In one embodiment, the second area selected in the second image may be the area whose feature information corresponds to the target feature information of the first area of the first image; the replacement then proceeds as above, and the third image may again be the seventh image or the eighth image.
Specifically, as shown in FIG. 5, the head-feature region of the left image in FIG. 5 (the first image) is selected, the head-feature region of the middle image (the second image) is also selected, and the head in the first image is then replaced with the head from the second image, yielding the third image on the right of FIG. 5. The third image contains both the body features of the bird in the first image and the head features of the second image.
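A minimal sketch of the FIG. 5 editing operation using Pillow, assuming hypothetical file paths and box coordinates: the head region is cropped from the second image and pasted over the corresponding region of the first image to produce the third image.

```python
# Sketch of step 306: replace a region of the first image with a region
# cropped from the second image, yielding the third image.
from PIL import Image

first_image = Image.open("first_image.jpg")    # bird, left of FIG. 5
second_image = Image.open("second_image.jpg")  # middle of FIG. 5

head_box = (40, 10, 120, 80)                   # (left, top, right, bottom), illustrative
head_from_second = second_image.crop(head_box) # the selected head region

third_image = first_image.copy()
third_image.paste(head_from_second, head_box[:2])  # replace the first image's area
third_image.save("third_image.jpg")            # input for step 307
```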
Step 307, inputting the third image into the trained ANN model to obtain fourth category information.
Specifically, step 307 may be implemented by the operations of step 302 or step 304.
Illustratively, in step 307, the characteristic information of the edited portion in the third image may also be used as part of the fourth category information.
In one embodiment, the fourth category information may be characteristic information of an edited portion in the third image.
In one embodiment, the fourth category information may be feature information of a portion other than the edited portion in the third image.
In one embodiment, before step 307 is performed, a child may first try to identify the third image; step 307 is then performed to process the third image and obtain the fourth category information, which increases the interest of the whole information processing method.
According to the information processing method provided by this embodiment of the present application, a first image is acquired and input into the trained ANN model to obtain first category information; a second image is input into the trained ANN model to obtain second category information; first difference information is obtained based on the two; an editing operation is performed on the first image and/or the second image to obtain a third image; and the third image is input into the trained ANN model to obtain fourth category information. The method can therefore not only classify the first and second images but also identify the category information of the edited third image, so that classification and category-difference analysis can be performed on any image.
Based on the foregoing embodiments, an embodiment of the present application provides a specific implementation flow of the information processing method, as shown in FIG. 6. When the method starts, the input source of the image is detected to judge whether the image was input by the user. If so, the currently input first image is judged to be a user image; if not, it is judged to be an existing image in the database. The currently input first image is then input into the ANN model for processing to obtain the first category information of the first image; the first category information is interpreted to obtain a category-information interpretation text, and the text is output.
The category-information interpretation text may be description information of the first image, the definition of the first category, interpretation information giving the reason the first image was assigned to the first category, and so on. Specifically, FIG. 7 explains in detail how the category definition information is obtained from the input image, and how the interpretation information is then obtained from the input image and the category definition information.
The left part of FIG. 7 shows a two-dimensional coordinate system of category information versus image information. Through it, the image description information of any image (e.g., the first image or the second image) input into the information processing device can be obtained, together with the category definition information (e.g., the first category information or the second category information) produced by inputting that image into the ANN model; the interpretation information is then obtained on the basis of the image description information and the category definition information.
As shown in the right part of FIG. 7, the first input picture, i.e., a first image, is a North American grebe. Its image description information is: this is a large bird with a white neck and a black back. The category definition information obtained after inputting the image into the ANN model is: the North American grebe is a waterfowl with a yellow sharp beak, a white neck and abdomen, and a black back. The interpretation information obtained from the image description information and the category definition information is: this is a North American grebe because the bird has a long neck, a yellow sharp beak, and red eyes.
The second input picture in the right part of FIG. 7, again a first image, is a Hu Wujiu (a vulture). Its image description information is: this is a large bird with a white belly and black wings. The category definition information obtained after inputting the image into the ANN model is: the Hu Wujiu is a hook-beaked seabird with a yellow beak, a white abdomen, and a black back. The interpretation information obtained from the image description information and the category definition information is: this is a Hu Wujiu because it has large wings, a yellow hook-shaped beak, and a white abdomen.
The third input picture in the right part of FIG. 7, again a first image, is also a Hu Wujiu. Its image description information is: this is a large bird with a white belly and a black back. The category definition information obtained after inputting the image into the ANN model is: the Hu Wujiu is a hook-beaked seabird with a yellow beak, a white abdomen, and a black back. The interpretation information obtained from the image description information and the category definition information is: this is a Hu Wujiu because it has a yellow hook-shaped beak, a white abdomen, and a black back.
It should be noted that although the second and third pictures on the right of FIG. 7 are not highly similar, the ANN model can still accurately identify the object in both images, namely the Hu Wujiu. In addition, the image description information in the embodiments of the present application can be obtained by performing image recognition on the input image.
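The sketch below shows one simple way the interpretation information could be composed from the image description information and the category definition information, in the spirit of the FIG. 7 examples; the trait-matching logic is an assumption for illustration, not the patent's method.

```python
# Sketch: compose interpretation information from description and definition.
def interpret(category, description_traits, definition_traits):
    """Traits the image description shares with the category definition become
    the 'because ...' part of the interpretation information."""
    shared = [t for t in description_traits if t in definition_traits]
    return f"This is a {category} because it has {', '.join(shared)}."

print(interpret(
    "North American grebe",
    ["a long neck", "a yellow sharp beak", "red eyes"],
    ["a long neck", "a yellow sharp beak", "red eyes", "a black back"],
))
```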
Optionally, in FIG. 6, a second image may further be input into the information processing device; the ANN model processes the second image to obtain the second category information, and on the basis of the first category information and the second category information, the difference information between them may also be obtained.
In FIG. 6, any input image may be an image captured by the user, an image the user browsed or saved in another application, an image belonging to the current application itself, or an image obtained by editing an original image as described in the above embodiments.
Alternatively, in FIG. 6, the interpretation information and/or the difference information may be output by voice.
The specific implementation flow of the information processing method provided by this embodiment of the present application can process any image to obtain its image description information, input the image into the ANN model to obtain the corresponding category information, and then, based on the category information and the image description information, obtain the interpretation information and the difference information relative to the category information of other images.
Based on the foregoing embodiments, an embodiment of the present application provides an information processing device 4; as shown in FIG. 8, the information processing device 4 includes: a processor 41, a memory 42, and a communication bus 43;
wherein the communication bus 43 is used for realizing communication connection between the processor 41 and the memory 42;
The processor 41 is configured to execute a program of a data reading method in the memory 42 to realize the steps of:
acquiring a first image;
inputting the first image into a trained artificial neural network model to obtain first category information, the first category information representing the category to which the first image belongs;
acquiring a second image;
inputting the second image into the trained artificial neural network model to obtain second category information; and
obtaining first difference information based on the first category information and the second category information, the first difference information representing the difference between the first category information and the second category information.
In other embodiments of the present application, the processor 41 is configured to execute a program of a data reading method in the memory 42 to implement the following steps:
Acquiring image sample data;
and adjusting parameters of the artificial neural network model based on the image sample data until the parameters of the artificial neural network model meet the training ending condition, thereby obtaining the trained artificial neural network model.
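By way of illustration, the following sketch shows the shape of such a loop: parameters are adjusted step by step until a training ending condition on them is met. The toy one-parameter model, the loss, and the convergence threshold are assumptions made for the example only.

```python
# Illustrative sketch: adjust a model parameter until the training
# ending condition is met. The toy data, squared-error loss, and
# gradient threshold are assumptions for this example.

samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # stand-in for image sample data

w = 0.0                   # the parameter being adjusted
learning_rate = 0.05
for step in range(10_000):
    # gradient of the mean squared error of the linear model y = w * x
    grad = sum(2 * (w * x - y) * x for x, y in samples) / len(samples)
    w -= learning_rate * grad
    if abs(grad) < 1e-6:  # training ending condition on the parameters
        break

print(f"trained parameter w = {w:.4f} after {step + 1} steps")
```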
In other embodiments of the present application, the processor 41 is configured to execute a program of a data reading method in the memory 42 to implement the following steps:
determining a parameter training rule;
determining a training ending condition based on the parameter training rule;
acquiring a plurality of third images;
and adjusting parameters of the artificial neural network model based on the parameter training rule and the plurality of third images until the parameters of the artificial neural network model meet the training ending condition, thereby obtaining the trained artificial neural network model.
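One plausible reading of how a parameter training rule determines the ending condition is sketched below: the rule names a criterion, and the ending condition is derived from it as a predicate checked after every training epoch over the third images. The rule format and the placeholder epoch are assumptions for illustration.

```python
# Illustrative sketch: derive the training ending condition from a
# parameter training rule. The rule format and the placeholder
# training epoch are assumptions.

training_rule = {"criterion": "max_epochs", "limit": 50}

def make_end_condition(rule):
    """Turn the training rule into an ending-condition predicate."""
    if rule["criterion"] == "max_epochs":
        return lambda epoch, accuracy: epoch >= rule["limit"]
    if rule["criterion"] == "target_accuracy":
        return lambda epoch, accuracy: accuracy >= rule["limit"]
    raise ValueError("unknown training rule")

should_stop = make_end_condition(training_rule)

epoch, accuracy = 0, 0.0
while not should_stop(epoch, accuracy):
    epoch += 1
    accuracy = min(1.0, accuracy + 0.03)  # placeholder for one real epoch
print(f"stopped after {epoch} epochs, accuracy {accuracy:.2f}")
```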
In other embodiments of the present application, the processor 41 is configured to execute a program of a data reading method in the memory 42 to implement the following steps:
inputting the first image into the trained artificial neural network model to obtain the first category information includes:
acquiring feature dimension information;
inputting the first image into the trained artificial neural network model;
and processing the first image by using the trained artificial neural network model based on the feature dimension information to obtain the first category information.
In other embodiments of the present application, the processor 41 is configured to execute a program of a data reading method in the memory 42 to implement the following steps:
processing the first image by using the trained artificial neural network model based on the feature dimension information to obtain the first category information includes:
processing the first image by using the trained artificial neural network model to obtain third category information;
and determining the first category information based on the feature dimension information and the third category information.
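The following sketch fixes one possible data shape for this two-stage step: the model first yields third category information covering many feature dimensions, and the feature dimension information then selects which dimensions make up the first category information. The dimension names and the stub output are assumptions.

```python
# Illustrative sketch: third category information covers many feature
# dimensions; the feature dimension information selects the dimensions
# that form the first category information. All names are assumptions.

def model_output(image_id: str) -> dict:
    """Stub for the trained model's raw (third) category information."""
    return {"category": "bearded vulture", "beak": "yellow hook",
            "abdomen": "white", "back": "black", "size": "large"}

feature_dims = ["category", "beak", "abdomen"]  # feature dimension information

third_info = model_output("img_vulture.jpg")
first_info = {dim: third_info[dim] for dim in feature_dims if dim in third_info}
print(first_info)  # first category information over the requested dimensions
```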
In other embodiments of the present application, the processor 41 is configured to execute a program of a data reading method in the memory 42 to implement the following steps:
acquiring the second image includes:
acquiring the second image based on historical operating information of the user, and/or,
acquiring the second image based on the first category information.
In other embodiments of the present application, the processor 41 is configured to execute a program of a data reading method in the memory 42 to implement the following steps:
acquiring the second image based on the first category information includes:
acquiring image association parameters;
determining target feature information based on the image association parameters and the first category information;
and determining the second image based on the target feature information.
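A minimal sketch of this retrieval step is given below: the image association parameters weight the first category information into target feature information, and the second image is chosen as the already-classified gallery image whose features lie closest to that target. The gallery, feature vectors, and weighting scheme are all assumptions for illustration.

```python
# Illustrative sketch: image association parameters weight the first
# category information into target feature information; the second
# image is the gallery image nearest to that target. All data are
# assumptions.

gallery = {  # already-classified images and their feature vectors
    "img_eagle.jpg":    {"hooked_beak": 1.0, "white_abdomen": 0.2},
    "img_vulture2.jpg": {"hooked_beak": 1.0, "white_abdomen": 0.9},
}

first_info = {"hooked_beak": 1.0, "white_abdomen": 1.0}   # first category information
association = {"hooked_beak": 0.5, "white_abdomen": 1.0}  # image association parameters

# target feature information = association-weighted first category information
target = {k: association[k] * v for k, v in first_info.items()}

def score(features: dict) -> float:
    """Negative squared distance to the target feature information."""
    return -sum((features[k] - target[k]) ** 2 for k in target)

second_image = max(gallery, key=lambda name: score(gallery[name]))
print(second_image)  # -> img_vulture2.jpg
```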
In other embodiments of the present application, the processor 41 is configured to execute a program of a data reading method in the memory 42 to implement the following steps:
performing an editing operation on the first image and/or the second image to obtain a third image;
and inputting the third image into the trained artificial neural network model to obtain fourth category information.
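The editing-and-reclassifying step can be sketched as follows, with Pillow used purely for illustration; the particular crop-and-rotate edit and the classify() stub are assumptions, not the method of this application.

```python
# Illustrative sketch: edit an image, then feed the edited (third)
# image back to the model to get fourth category information. Pillow
# performs the edit; classify() is a stub for the trained model.

from PIL import Image

def classify(img: Image.Image) -> str:
    """Stub for the trained artificial neural network model."""
    return "bearded vulture"

first_image = Image.new("RGB", (224, 224), "white")  # placeholder first image

# editing operation: crop the centre region, then rotate it by 90 degrees
third_image = first_image.crop((56, 56, 168, 168)).rotate(90)

fourth_category_info = classify(third_image)
print(fourth_category_info)
```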
The information processing device provided by the embodiment of the application acquires a first image and inputs it into the trained artificial neural network model to obtain first category information; it then acquires a second image and inputs it into the trained artificial neural network model to obtain second category information; finally, it obtains first difference information based on the first category information and the second category information. In this way, the device fully utilizes the accurate recognition and classification capability of the trained artificial neural network model with respect to the first image and the second image, and thereby obtains the first difference information between the first category information and the second category information.
Based on the foregoing embodiments, the present application further provides a computer-readable storage medium storing one or more programs executable by one or more processors to implement the steps of any of the information processing methods of the foregoing embodiments.
The descriptions of the embodiments above each emphasize their differences from the other embodiments; for the parts that are the same as or similar to one another, the embodiments may be referred to mutually, and those parts are not repeated here for brevity.
The methods disclosed in the method embodiments provided by the application can be arbitrarily combined under the condition of no conflict to obtain a new method embodiment.
The features disclosed in the embodiments of the products provided by the application can be combined arbitrarily under the condition of no conflict to obtain new embodiments of the products.
The features disclosed in the embodiments of the method or the device provided by the application can be arbitrarily combined under the condition of no conflict to obtain a new embodiment of the method or the device.
The computer-readable storage medium may be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable Read-Only Memory, PROM), an erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM), an electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), a ferromagnetic random access memory (Ferromagnetic Random Access Memory, FRAM), a flash memory (Flash Memory), a magnetic surface memory, an optical disk, or a compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM); it may also be any of various electronic devices that include one or any combination of the above memories, such as a mobile phone, a computer, a tablet device, or a personal digital assistant.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or by means of hardware, though in many cases the former is the preferred implementation. Based on such an understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and including instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present invention.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing description covers only preferred embodiments of the present invention and is not intended to limit the scope of the invention; any equivalent structural or process transformation made using the contents of this specification and drawings, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (10)

1. An information processing method, the method comprising:
acquiring a first image, wherein the first image is an image which is not classified by an application program of the electronic equipment or an image which is not matched with any one of classification categories preset in the application program of the electronic equipment;
Inputting the first image into the trained artificial neural network model to obtain first class information; the first category information is used for representing information of a category to which the first image belongs;
Acquiring a second image, wherein the second image comprises an image classified in the application program;
inputting the second image into the artificial neural network model after training to obtain second class information;
obtaining first difference information based on the first category information and the second category information; the first difference information is used for representing a set of difference information of corresponding information items in the first category information and the second category information.
2. The method according to claim 1, wherein the method further comprises:
Acquiring image sample data;
and adjusting parameters of the artificial neural network model based on the image sample data until the parameters of the artificial neural network model meet the training ending conditions, so as to obtain the artificial neural network model after training is completed.
3. The method according to claim 1, wherein the method further comprises:
determining a parameter training rule;
determining a training ending condition based on the training rule;
Acquiring a plurality of third images;
And adjusting parameters of the artificial neural network model based on the parameter training rule and the plurality of third images until the parameters of the artificial neural network model meet the training ending condition, so as to obtain the artificial neural network model after training is completed.
4. The method of claim 1, wherein inputting the first image into a trained artificial neural network model to obtain the first class information comprises:
acquiring characteristic dimension information;
Inputting the first image into the trained artificial neural network model;
And processing the first image by using the trained artificial neural network model based on the characteristic dimension information to obtain first class information.
5. The method of claim 4, wherein the processing the first image using the trained artificial neural network model based on the feature dimension information to obtain first class information comprises:
Processing the first image by using the trained artificial neural network model to obtain third-class information;
the first category information is determined based on the feature dimension information and the third category information.
6. The method of claim 1, wherein the acquiring the second image comprises:
Based on historical operating information of the user, the second image is acquired, and/or,
And acquiring the second image based on the first category information.
7. The method of claim 6, wherein the acquiring the second image based on the first category information comprises:
acquiring image association parameters;
Determining target feature information based on the image association parameters and the first category information;
The second image is determined based on the target feature information.
8. The method according to claim 1, wherein the method further comprises:
Editing operation is carried out on the first image and/or the second image, and a third image is obtained;
And inputting the third image into the artificial neural network model after training is completed, and obtaining fourth category information.
9. An information processing apparatus, characterized in that the apparatus comprises: a processor, a memory, and a communication bus;
the communication bus is used for realizing communication connection between the processor and the memory;
the processor is used for executing a program of a data reading method in the memory to realize the following steps:
acquiring a first image, wherein the first image is an image which is not classified by an application program of the electronic equipment or an image which is not matched with any one of classification categories preset in the application program of the electronic equipment;
Inputting the first image into the trained artificial neural network model to obtain first class information; the first category information is used for representing information of a category to which the first image belongs;
Acquiring a second image, wherein the second image comprises an image classified in the application program;
inputting the second image into the artificial neural network model after training to obtain second class information;
obtaining first difference information based on the first category information and the second category information; the first difference information is used for representing a set of difference information of corresponding information items in the first category information and the second category information.
10. A computer-readable storage medium storing one or more programs executable by one or more processors to implement the steps of the information processing method of any one of claims 1 to 8.
CN201911378909.XA 2019-12-27 2019-12-27 Information processing method, equipment and computer readable storage medium Active CN111160453B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911378909.XA CN111160453B (en) 2019-12-27 2019-12-27 Information processing method, equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911378909.XA CN111160453B (en) 2019-12-27 2019-12-27 Information processing method, equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111160453A CN111160453A (en) 2020-05-15
CN111160453B true CN111160453B (en) 2024-06-21

Family

ID=70558651

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911378909.XA Active CN111160453B (en) 2019-12-27 2019-12-27 Information processing method, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111160453B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108764370A (en) * 2018-06-08 2018-11-06 Oppo广东移动通信有限公司 Image processing method, device, computer readable storage medium and computer equipment
CN110309715A (en) * 2019-05-22 2019-10-08 北京邮电大学 Indoor orientation method, the apparatus and system of lamps and lanterns identification based on deep learning

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9020867B2 (en) * 2012-11-06 2015-04-28 International Business Machines Corporation Cortical simulator for object-oriented simulation of a neural network
CN105512624B (en) * 2015-12-01 2019-06-21 天津中科智能识别产业技术研究院有限公司 A kind of smiling face's recognition methods of facial image and its device
CN108564066B (en) * 2018-04-28 2020-11-27 国信优易数据股份有限公司 Character recognition model training method and character recognition method
CN108875821A (en) * 2018-06-08 2018-11-23 Oppo广东移动通信有限公司 The training method and device of disaggregated model, mobile terminal, readable storage medium storing program for executing
CN110188613A (en) * 2019-04-28 2019-08-30 上海鹰瞳医疗科技有限公司 Image classification method and equipment
CN110288049B (en) * 2019-07-02 2022-05-24 北京字节跳动网络技术有限公司 Method and apparatus for generating image recognition model

Also Published As

Publication number Publication date
CN111160453A (en) 2020-05-15

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant