CN114464294A - Pet recipe generation method based on image recognition and related device


Info

Publication number
CN114464294A
CN114464294A
Authority
CN
China
Prior art keywords
target
pet
image
information
target pet
Prior art date
Legal status
Pending
Application number
CN202111582666.9A
Other languages
Chinese (zh)
Inventor
彭永鹤
Current Assignee
New Ruipeng Pet Healthcare Group Co Ltd
Original Assignee
New Ruipeng Pet Healthcare Group Co Ltd
Priority date: 2021-12-22
Filing date: 2021-12-22
Publication date: 2022-05-10
Application filed by New Ruipeng Pet Healthcare Group Co Ltd
Priority to CN202111582666.9A
Publication of CN114464294A


Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H 20/60 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to nutrition control, e.g. diets
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/243 Classification techniques relating to the number of classes
    • G06F 18/2431 Multiple classes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/047 Probabilistic or stochastic networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Nutrition Science (AREA)
  • Epidemiology (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a pet recipe generation method based on image recognition and a related device, wherein the method comprises the following steps: acquiring at least one image containing different parts of a target pet, wherein each image comprises at least one part of the target pet; performing feature extraction on each image through a pre-established residual network, and splicing the features extracted from convolutional layers of different depths to obtain a target feature map of each image; splicing the target feature maps of the at least one image to serve as input data of a first network, and performing multi-task classification through the first network to obtain a plurality of characteristic information of the target pet; and generating a recipe for the target pet according to the plurality of characteristic information of the target pet. By implementing the embodiments of the application, the pertinence and accuracy of the pet recipe are improved.

Description

Pet recipe generation method based on image recognition and related device
Technical Field
The application relates to the technical field of computers, in particular to a pet recipe generation method based on image recognition and a related device.
Background
With the development of the times, keeping pets has become one of the ways in which people get closer to nature and meet their psychological needs. Raising a pet brings people happiness, but it can also bring trouble. For example, a pet needs to be provided with healthy, nutritious food when it is fed. However, many people feed their pets by searching for pet recipes on the Internet, an approach that is one-size-fits-all and lacks pertinence.
Disclosure of Invention
The embodiments of the application provide a pet recipe generation method based on image recognition and a related device, aiming to improve the pertinence and accuracy of the pet recipe.
In a first aspect, the application provides a pet recipe generation method based on image recognition, which comprises the following steps:
acquiring at least one image containing different parts of a target pet, wherein each image comprises at least one part of the target pet;
performing feature extraction on each image through a pre-established residual network, and splicing the features extracted from convolutional layers of different depths to obtain a target feature map of each image;
splicing the target feature maps of the at least one image to serve as input data of a first network, and performing multi-task classification through the first network to obtain a plurality of characteristic information of the target pet;
and generating a recipe of the target pet according to the plurality of characteristic information of the target pet.
In a second aspect, the present application provides an apparatus for generating a pet recipe based on image recognition, the apparatus comprising an obtaining module and a processing module,
the acquisition module is used for acquiring at least one image containing different parts of a target pet, and each image comprises at least one part of the target pet;
the processing module is used for performing feature extraction on each image through a pre-established residual network and splicing the features extracted from convolutional layers of different depths to obtain a target feature map of each image;
the processing module is further used for splicing the target feature maps of the at least one image to serve as input data of a first network, so that multi-task classification can be performed through the first network to obtain a plurality of characteristic information of the target pet;
the processing module is further used for generating a recipe of the target pet according to the plurality of characteristic information of the target pet.
A third aspect of the application provides an electronic device for pet recipe generation based on image recognition, comprising a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, the one or more programs comprising instructions for performing the steps of any one of the image recognition-based pet recipe generation methods.
A fourth aspect of the present application provides a computer-readable storage medium for storing a computer program, wherein the computer program, when executed by a processor, implements any one of the image recognition-based pet recipe generation methods.
It can be seen that, in the above technical scheme, at least one image containing different parts of the target pet is acquired, feature extraction is performed on each image through a pre-established residual network, and the features extracted from convolutional layers of different depths are spliced to obtain a target feature map of each image, which increases the richness and comprehensiveness of the extracted features. The target feature maps of the at least one image are then spliced to serve as input data of a first network for multi-task classification, so that a plurality of characteristic information of the target pet can be obtained more accurately; that is, the multi-task classification result is more accurate. Accordingly, when the recipe of the target pet is generated according to the plurality of characteristic information of the target pet, the pertinence and accuracy of the pet recipe are improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can derive other drawings from these drawings without creative effort.
Wherein:
FIG. 1A is a schematic diagram of a pet recipe generation system based on image recognition according to an embodiment of the present application;
FIG. 1B is a schematic diagram of another image recognition-based pet recipe generation system provided by an embodiment of the present application;
FIG. 2 is a schematic flow chart of a pet recipe generation method based on image recognition according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an MMOE network provided by an embodiment of the present application;
FIG. 4 is a schematic flowchart illustrating a pet recipe generation method based on image recognition according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a pet recipe generation apparatus based on image recognition according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device in a hardware operating environment according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Details are described below.
The terms "first" and "second" in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Referring to fig. 1A, fig. 1A is a schematic diagram of a pet recipe generation system based on image recognition according to an embodiment of the present application. The image recognition-based pet recipe generation system 100 may include an image recognition-based pet recipe generation device 110. The image recognition-based pet recipe generation device 110 is used for processing, storing, and otherwise handling status data. The image recognition-based pet recipe generation system 100 may be an integrated single device or may include multiple devices; for convenience of description, the image recognition-based pet recipe generation system 100 is collectively referred to herein as an electronic device. It is apparent that the electronic device may include various handheld devices, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem and having wireless communication capability, as well as various forms of user equipment (UE), mobile stations (MS), terminal devices, and the like.
Referring to fig. 1B, fig. 1B is a schematic diagram of another pet recipe generation system based on image recognition according to an embodiment of the present application. As shown in fig. 1B, the communication system may include a server and a plurality of electronic devices (only 3 are shown in fig. 1B). The server may be in wireless communication with the electronic device.
Fig. 1A and fig. 1B are only schematic diagrams, and do not limit an application scenario of the technical solution provided in the present application.
Referring to fig. 2, fig. 2 is a schematic flowchart of a pet recipe generation method based on image recognition according to an embodiment of the present application. The pet recipe generation method based on image recognition can be applied to electronic equipment, and as shown in fig. 2, the method comprises the following steps:
201. Acquire at least one image containing different parts of a target pet, each image including at least one part of the target pet.
The at least one image may include at least one of the following: a front photograph, a paw print photograph, left and right side photographs, a tail photograph, a back photograph, and an abdomen photograph of the target pet, which is not limited herein.
A part of the target pet may include one of the following: the eyes, nose, ears, mouth, teeth, back, tail, abdomen, paw pads, or claws of the target pet, which is not limited herein.
In the present application, the category of the target pet is not limited. For example, the target pet may be a cat, a dog, a pig, a duck, etc., without limitation.
Optionally, step 201 may include: displaying an image uploading interface, wherein the image uploading interface comprises an image uploading area and an image uploading control; and in response to an upload operation on the image uploading control, acquiring the at least one image from the image uploading area.
The image uploading interface can further comprise an image uploading prompt area, the image uploading prompt area is used for displaying image uploading prompt information, and the image uploading prompt information is a preset requirement which needs to be met by an image uploaded by a user. The preset requirements may for example comprise at least one of the following: size, resolution, etc., without limitation.
202. Perform feature extraction on each image through a pre-established residual network, and splice the features extracted from convolutional layers of different depths to obtain a target feature map of each image.
A residual network is a type of convolutional neural network. It is easy to optimize, and its accuracy can be improved by increasing the depth; moreover, because the residual blocks in a residual network are connected by skip connections, the vanishing-gradient problem caused by increasing the depth of a deep neural network is alleviated.
Optionally, the convolutional layers in the residual network are used to extract multi-scale features from each image, where features of different scales are reflected in different feature maps. Specifically, the multi-scale features in each image comprise a shallower-level feature map and a deeper-level feature map of at least one part of the target pet. The shallower-level feature map comprises at least one of the following: coat color information, hair diameter information, and hair length information of at least one part of the target pet; the deeper-level feature map comprises at least one of the following: hair sparseness information, dandruff residue information, and grease sheen information of at least one part of the target pet.
In this way, on one hand, the computational cost can be reduced and the vanishing-gradient problem caused by increasing the depth of a deep neural network can be alleviated; on the other hand, the multi-scale features in each image can be obtained effectively, which increases the richness and comprehensiveness of the extracted features and thus makes the multi-task classification result more accurate.
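For ease of understanding, the following is a minimal sketch of the above multi-depth feature splicing, implemented with a torchvision ResNet-18 backbone. The backbone choice, the use of global average pooling before splicing, and the 224x224 input size are illustrative assumptions and are not specified by the present application.

```python
# Illustrative sketch: splice features from convolutional layers of different
# depths of a ResNet backbone into one target feature map per image.
import torch
import torchvision.models as models

backbone = models.resnet18(weights=None)  # pre-established residual network (untrained here)

def target_feature_map(image: torch.Tensor) -> torch.Tensor:
    """Return the spliced multi-depth feature vector for one image."""
    x = backbone.conv1(image)
    x = backbone.maxpool(backbone.relu(backbone.bn1(x)))
    shallow = backbone.layer1(x)                    # shallower features, e.g. coat colour/length
    mid = backbone.layer2(shallow)
    deep = backbone.layer4(backbone.layer3(mid))    # deeper features, e.g. coat condition
    pool = torch.nn.AdaptiveAvgPool2d(1)            # collapse spatial dims so depths can be spliced
    feats = [pool(t).flatten(1) for t in (shallow, mid, deep)]
    return torch.cat(feats, dim=1)                  # splice along the channel dimension

image = torch.randn(1, 3, 224, 224)                 # one image of one part of the target pet
print(target_feature_map(image).shape)              # torch.Size([1, 704]) = 64 + 128 + 512
```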
Optionally, before step 202, the present scheme may further include: preprocessing each image to obtain each preprocessed image. It can be understood that, in this case, performing feature extraction on each image through the pre-established residual network and splicing the features extracted from convolutional layers of different depths to obtain the target feature map of each image may include: performing feature extraction on each preprocessed image through the pre-established residual network, and splicing the features extracted from convolutional layers of different depths to obtain the target feature map of each image.
Optionally, preprocessing each image to obtain each preprocessed image may include: performing image correction on each image, and performing contrast enhancement processing after the correction to obtain each preprocessed image. The image correction comprises image tilt correction and/or image brightness correction.
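As an illustrative sketch only, the preprocessing step may be implemented with OpenCV as follows; the rotation-by-known-angle deskew and the CLAHE parameters are assumptions, since the present application does not fix a particular correction or enhancement algorithm (brightness correction and tilt-angle estimation are likewise omitted here).

```python
# Illustrative sketch: image tilt correction followed by contrast enhancement.
import cv2
import numpy as np

def preprocess(image_bgr: np.ndarray, tilt_deg: float = 0.0) -> np.ndarray:
    h, w = image_bgr.shape[:2]
    # Image tilt correction: rotate around the image centre by the tilt angle.
    m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), tilt_deg, 1.0)
    corrected = cv2.warpAffine(image_bgr, m, (w, h))
    # Contrast enhancement on the luminance channel only (CLAHE, example params).
    lab = cv2.cvtColor(corrected, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(enhanced, cv2.COLOR_LAB2BGR)
```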
203. Splice the target feature maps of the at least one image to serve as input data of a first network, so as to perform multi-task classification through the first network and obtain a plurality of characteristic information of the target pet.
The first network may be a multi-task learning network, and the multi-task learning network may be, for example, an MMOE (multi-gate mixture-of-experts) network or another network, which is not limited herein. The MMOE network is described below in conjunction with fig. 3, taking as an example an MMOE network that predicts two tasks simultaneously. It can be understood that two tasks mean that two characteristic information of the target pet are obtained. As shown in fig. 3, the MMOE network includes a plurality of expert networks, two gate networks, and two task networks; that is, the number of gate networks and the number of task networks are both the same as the number of tasks, while the number of expert networks may be set according to actual requirements. In fig. 3, gate network 1 and task network 1 correspond to task 1, and gate network 2 and task network 2 correspond to task 2. Each expert network processes the input features through a fully connected network so as to extract features along its corresponding feature dimension, and the number of layers of the fully connected network may be set according to actual requirements. Each gate network may employ a Softmax function to learn a different combination pattern of the multiple expert networks for its corresponding task, i.e., to adaptively weight the output results of the multiple expert networks. Each task network may employ a neural network and outputs the prediction result of the corresponding task.
Illustratively, in fig. 3, after the spliced target feature maps of the at least one image are input into the MMOE network, the results of task 1 and task 2 may be output.
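The following is a minimal sketch of an MMOE network with two task branches, consistent with the structure of fig. 3. The expert count, layer widths, and the two task output sizes are illustrative assumptions.

```python
# Illustrative MMOE sketch: shared expert networks, one softmax gate per task,
# one task network (tower) per task, as described for fig. 3.
import torch
import torch.nn as nn

class MMOE(nn.Module):
    def __init__(self, in_dim: int, expert_dim: int, n_experts: int, task_dims: list):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, expert_dim), nn.ReLU())
            for _ in range(n_experts))
        self.gates = nn.ModuleList(nn.Linear(in_dim, n_experts) for _ in task_dims)
        self.towers = nn.ModuleList(nn.Linear(expert_dim, d) for d in task_dims)

    def forward(self, x: torch.Tensor) -> list:
        expert_out = torch.stack([e(x) for e in self.experts], dim=1)  # (B, E, D)
        results = []
        for gate, tower in zip(self.gates, self.towers):
            w = torch.softmax(gate(x), dim=-1).unsqueeze(-1)           # adaptive expert weights
            results.append(tower((w * expert_out).sum(dim=1)))         # weighted expert mix -> tower
        return results

# Input: the spliced target feature maps of the at least one image.
model = MMOE(in_dim=704, expert_dim=128, n_experts=4, task_dims=[10, 5])
task1_logits, task2_logits = model(torch.randn(2, 704))  # e.g. category and age classes
```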
One characteristic information includes one of the following: the age of the target pet, the category of the target pet, the weight of the target pet, and the physiological status of the target pet. The physiological status of the target pet may include at least one of the following: body temperature, heart rate, blood pressure, pulse, blood oxygen content, and the like, which is not limited herein.
204. Generate a recipe for the target pet according to the plurality of characteristic information of the target pet.
Optionally, step 204 may include: determining first food material information of the target pet according to the plurality of characteristic information of the target pet; obtaining weight factors corresponding to the plurality of characteristic information of the target pet, wherein the weight factors are the specific gravities of the nutritional ingredients contained in the recipe of the target pet; acquiring target food material information from the first food material information according to the weight factors; and generating the recipe of the target pet according to the target food material information.
The food material information may include at least one of the following: identification information of at least one food material, nutritional ingredients of at least one food material, the weight of at least one food material, and the like, which is not limited herein. The identification information of a food material may be, for example, the name of the food material, the number of the food material, and the like, which is not limited herein. It can be understood that the identification information, the nutritional ingredients, and the weight of the same food material are associated with one another. For example, the first food material information may include: beef, the nutritional ingredients of beef, the weight of beef; carrot, the nutritional ingredients of carrot, the weight of carrot; pork chop, the nutritional ingredients of pork chop, the weight of pork chop; and the like.
In the present application, there may be one or more weight factors, which is not limited herein. It can be understood that when there is one weight factor, the plurality of characteristic information correspond to that single weight factor; when there are multiple weight factors, each characteristic information corresponds to one weight factor. The weight factors corresponding to different characteristic information may be different or the same, which is not limited herein.
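As an illustration of how the weight factors may act on the first food material information, the following sketch scores each food material by the weighted sum of its nutritional ingredients and keeps the top-scoring materials; the nutrient keys and values, the scoring rule, and the top-2 cut-off are all assumptions.

```python
# Illustrative sketch: select target food material information from the first
# food material information using weight factors (nutrient specific gravities).
first_food_materials = [
    {"name": "beef",      "nutrients": {"protein": 26.0, "fat": 15.0, "fiber": 0.0}},
    {"name": "carrot",    "nutrients": {"protein": 0.9,  "fat": 0.2,  "fiber": 2.8}},
    {"name": "pork chop", "nutrients": {"protein": 24.0, "fat": 14.0, "fiber": 0.0}},
]
weight_factors = {"protein": 0.6, "fat": 0.1, "fiber": 0.3}  # one factor per nutrient

def score(material: dict) -> float:
    return sum(weight_factors.get(k, 0.0) * v for k, v in material["nutrients"].items())

target_food_materials = sorted(first_food_materials, key=score, reverse=True)[:2]
print([m["name"] for m in target_food_materials])  # ['beef', 'pork chop']
```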
According to the above technical scheme, the first food material information of the target pet is determined according to the characteristic information of the target pet, so that the target food material information can be obtained from the first food material information according to the specific gravities of the nutritional ingredients contained in the recipe of the target pet. In this way, the selected food materials better meet the needs of the target pet, and a targeted and accurate pet recipe can be generated.
Optionally, determining the first food material information of the target pet according to the plurality of characteristic information of the target pet may include: acquiring second food material information from a food material database according to a plurality of characteristic information of the target pet; acquiring food material information of other pets, wherein the similarity between the characteristic information of the other pets and the characteristic information of the target pet is higher than a preset threshold; and determining the first food material information according to the second food material information and the food material information of other pets.
In this application, the food material database includes correspondences between at least one characteristic information and food material information. Obtaining the second food material information from the food material database according to the plurality of characteristic information of the target pet can be understood as: obtaining the second food material information according to the plurality of characteristic information of the target pet and the correspondences. This may include, for example: determining, from the database, characteristic information matching the plurality of characteristic information of the target pet; and determining, according to the correspondences, the second food material information corresponding to the matched characteristic information.
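A minimal sketch of the correspondence lookup is given below, with the food material database modeled as a mapping from characteristic information to food material information; the tuple keys and the example entries are hypothetical.

```python
# Illustrative sketch of the food material database: a mapping from
# characteristic information to food material information.
food_material_db = {
    ("dog", "adult"):  ["beef", "carrot", "rice"],
    ("cat", "kitten"): ["chicken breast", "salmon"],
}

def second_food_materials(characteristics: tuple) -> list:
    """Look up food material information matching the pet's characteristic information."""
    return food_material_db.get(characteristics, [])

print(second_food_materials(("dog", "adult")))  # ['beef', 'carrot', 'rice']
```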
The preset threshold value can be configured in the electronic device in advance.
Determining the first food material information according to the second food material information and the food material information of the other pets may include: determining the union of the second food material information and the food material information of the other pets, or a subset of that union, or their intersection, as the first food material information.
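For illustration, treating food material information as sets of food material names, the union/subset/intersection options above may be expressed as follows; the set encoding is an assumption.

```python
# Illustrative sketch: combine the second food material information with the
# similar pets' food material information as a union or an intersection.
second = {"beef", "carrot", "pork chop"}
other_pets = {"beef", "pumpkin", "chicken breast"}

first_union = second | other_pets         # richer candidate pool (or any subset of it)
first_intersection = second & other_pets  # only materials both sources agree on
print(first_union, first_intersection)
```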
According to the above technical scheme, the second food material information is obtained from the food material database according to the plurality of characteristic information of the target pet, and the first food material information is determined according to the second food material information and the food material information of other pets. In this way, the food materials of the target pet are enriched by the food material information of other pets whose characteristic information similarity is higher than the preset threshold, which avoids the problem of the target pet's food materials being too limited.
Optionally, generating a recipe of the target pet according to the target food material information includes: acquiring state information corresponding to a target pet; and generating a recipe of the target pet according to the state information and the target food material information.
Wherein the status information comprises at least one of: environmental information, geographical location information, time information. The environmental information may include at least one of: indoor environment, outdoor environment. The indoor environment may for example comprise at least one of: indoor temperature, indoor air humidity, indoor decibel information, etc. The outdoor environment may include, for example, at least one of: outdoor temperature, outdoor air humidity, outdoor decibel information, outdoor precipitation information, outdoor snowfall information, outdoor air pressure information, outdoor air quality, and the like. The time information may include, for example, season information and the like.
According to the above technical scheme, the recipe of the target pet is generated according to the state information and the target food material information, so that the generated pet recipe is more refined and better meets the needs of the pet.
Optionally, after the recipe of the target pet is generated according to the state information and the target food material information, the scheme may further include: obtaining a plurality of pet recipes from a recipe database; determining, from the plurality of pet recipes, a pet recipe whose similarity to the recipe of the target pet is higher than a recipe similarity threshold; and pushing that pet recipe to the user. In this way, the user can also feed the pet according to a pushed pet recipe whose similarity to the target pet's recipe is higher than the recipe similarity threshold, which enriches the pet's recipes.
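As a sketch of the recipe pushing step, recipes may for example be encoded as ingredient sets and compared by Jaccard similarity; both this encoding and the concrete threshold are assumptions, since the present application does not prescribe a similarity measure.

```python
# Illustrative sketch of recipe pushing: push recipes whose Jaccard similarity
# to the target pet's recipe meets or exceeds a threshold.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

target_recipe = {"beef", "carrot", "rice"}
recipe_database = {
    "recipe_1": {"beef", "carrot", "potato"},
    "recipe_2": {"salmon", "peas"},
}

RECIPE_SIMILARITY_THRESHOLD = 0.5
to_push = [name for name, ingredients in recipe_database.items()
           if jaccard(target_recipe, ingredients) >= RECIPE_SIMILARITY_THRESHOLD]
print(to_push)  # ['recipe_1']
```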
Referring to fig. 4, fig. 4 is a schematic flowchart illustrating a further pet recipe generation method based on image recognition according to an embodiment of the present application. The pet recipe generation method based on image recognition can be applied to electronic equipment, and as shown in fig. 4, the method comprises the following steps:
401. and displaying an image uploading interface, wherein the image uploading interface comprises an image uploading area and an image uploading control.
For step 401, reference may be made to the related description of step 201 in fig. 2, which is not repeated herein.
402. In response to an upload operation on the image uploading control, acquire the at least one image from the image uploading area.
For step 402, reference may be made to the related description of step 201 in fig. 2, which is not repeated herein.
403. Perform feature extraction on each image through a pre-established residual network, and splice the features extracted from convolutional layers of different depths to obtain a target feature map of each image.
For step 403, reference may be made to the description related to step 202 in fig. 2, which is not repeated herein.
404. Splice the target feature maps of the at least one image to serve as input data of a first network, so as to perform multi-task classification through the first network and obtain a plurality of characteristic information of the target pet.
For step 404, reference may be made to the related description of step 203 in fig. 2, which is not repeated herein.
405. Generate a recipe for the target pet according to the plurality of characteristic information of the target pet.
For step 405, reference may be made to the related description of step 204 in fig. 2, which is not repeated herein.
According to the above technical scheme, at least one image is acquired from the image uploading area, feature extraction is performed on each image through a pre-established residual network, and the features extracted from convolutional layers of different depths are spliced to obtain the target feature map of each image, which increases the richness and comprehensiveness of the extracted features. The target feature maps of the at least one image are then spliced to serve as input data of a first network for multi-task classification, so that a plurality of characteristic information of the target pet can be obtained more accurately; that is, the multi-task classification result is more accurate. Accordingly, when the recipe of the target pet is generated according to the plurality of characteristic information of the target pet, the pertinence and accuracy of the pet recipe are improved.
Referring to fig. 5, fig. 5 is a schematic diagram of a pet recipe generation apparatus based on image recognition according to an embodiment of the present application. As shown in fig. 5, a pet recipe generation apparatus 500 based on image recognition provided by an embodiment of the present application includes an obtaining module 501 and a processing module 502. The obtaining module 501 is configured to acquire at least one image containing different parts of a target pet, each image including at least one part of the target pet. The processing module 502 is configured to perform feature extraction on each image through a pre-established residual network, and splice the features extracted from convolutional layers of different depths to obtain a target feature map of each image. The processing module 502 is further configured to splice the target feature maps of the at least one image to serve as input data of a first network, so as to perform multi-task classification through the first network and obtain a plurality of characteristic information of the target pet. The processing module 502 is further configured to generate a recipe for the target pet according to the plurality of characteristic information of the target pet.
The at least one image may include at least one of the following: a front photograph, a paw print photograph, left and right side photographs, a tail photograph, a back photograph, and an abdomen photograph of the target pet, which is not limited herein.
A part of the target pet may include one of the following: the eyes, nose, ears, mouth, teeth, back, tail, abdomen, paw pads, or claws of the target pet, which is not limited herein.
A residual network is a type of convolutional neural network. It is easy to optimize, and its accuracy can be improved by increasing the depth; moreover, because the residual blocks in a residual network are connected by skip connections, the vanishing-gradient problem caused by increasing the depth of a deep neural network is alleviated.
Optionally, the convolutional layers in the residual network are used to extract multi-scale features from each image, where features of different scales are reflected in different feature maps. Specifically, the multi-scale features in each image comprise a shallower-level feature map and a deeper-level feature map of at least one part of the target pet. The shallower-level feature map comprises at least one of the following: coat color information, hair diameter information, and hair length information of at least one part of the target pet; the deeper-level feature map comprises at least one of the following: hair sparseness information, dandruff residue information, and grease sheen information of at least one part of the target pet.
In this way, on one hand, the computational cost can be reduced and the vanishing-gradient problem caused by increasing the depth of a deep neural network can be alleviated; on the other hand, the multi-scale features in each image can be obtained effectively, which increases the richness and comprehensiveness of the extracted features and thus makes the multi-task classification result more accurate.
Optionally, the processing module 502 is further configured to preprocess each image to obtain each preprocessed image before the feature extraction is performed on each image through the pre-established residual network and the features extracted from convolutional layers of different depths are spliced to obtain the target feature map of each image. It can be understood that, in this case, when performing the feature extraction on each image through the pre-established residual network and splicing the features extracted from convolutional layers of different depths to obtain the target feature map of each image, the processing module 502 is specifically configured to perform feature extraction on each preprocessed image through the pre-established residual network and splice the features extracted from convolutional layers of different depths to obtain the target feature map of each image.
Optionally, when each image is preprocessed to obtain each preprocessed image, the processing module 502 is specifically configured to perform image correction on each image, and perform contrast enhancement processing after the image correction to obtain each preprocessed image. Wherein the image correction comprises an image tilt correction and/or an image brightness correction.
The first network may be a multi-task learning network, and the multi-task learning network may be, for example, an MMOE network or another network, which is not limited herein.
One characteristic information includes one of the following: the age of the target pet, the category of the target pet, the weight of the target pet, and the physiological status of the target pet. The physiological status of the target pet may include at least one of the following: body temperature, heart rate, blood pressure, pulse, blood oxygen content, and the like, which is not limited herein.
It can be seen that, in the above technical scheme, at least one image containing different parts of the target pet is acquired, feature extraction is performed on each image through a pre-established residual network, and the features extracted from convolutional layers of different depths are spliced to obtain a target feature map of each image, which increases the richness and comprehensiveness of the extracted features. The target feature maps of the at least one image are then spliced to serve as input data of a first network for multi-task classification, so that a plurality of characteristic information of the target pet can be obtained more accurately; that is, the multi-task classification result is more accurate. Accordingly, when the recipe of the target pet is generated according to the plurality of characteristic information of the target pet, the pertinence and accuracy of the pet recipe are improved.
Optionally, when acquiring at least one image including different parts of the target pet, the acquiring module 501 is specifically configured to:
displaying an image uploading interface, wherein the image uploading interface comprises an image uploading area and an image uploading control;
and in response to an upload operation on the image uploading control, acquiring the at least one image from the image uploading area.
The image uploading interface can further comprise an image uploading prompt area, the image uploading prompt area is used for displaying image uploading prompt information, and the image uploading prompt information is a preset requirement which needs to be met by an image uploaded by a user. The preset requirements may for example comprise at least one of the following: size, resolution, etc., without limitation.
Optionally, when the recipe of the target pet is generated according to the plurality of feature information of the target pet, the processing module 502 is specifically configured to:
determining first food material information of the target pet according to a plurality of characteristic information of the target pet;
obtaining weight factors corresponding to the plurality of characteristic information of the target pet, wherein the weight factors are the specific gravities of the nutritional ingredients contained in the recipe of the target pet;
acquiring target food material information from the first food material information according to the weight factors;
and generating a recipe of the target pet according to the information of the target food material.
The food material information may include at least one of the following: identification information of at least one food material, nutritional ingredients of at least one food material, the weight of at least one food material, and the like, which is not limited herein. The identification information of a food material may be, for example, the name of the food material, the number of the food material, and the like, which is not limited herein. It can be understood that the identification information, the nutritional ingredients, and the weight of the same food material are associated with one another. For example, the first food material information may include: beef, the nutritional ingredients of beef, the weight of beef; carrot, the nutritional ingredients of carrot, the weight of carrot; pork chop, the nutritional ingredients of pork chop, the weight of pork chop; and the like.
In the present application, there may be one or more weight factors, which is not limited herein. It can be understood that when there is one weight factor, the plurality of characteristic information correspond to that single weight factor; when there are multiple weight factors, each characteristic information corresponds to one weight factor. The weight factors corresponding to different characteristic information may be different or the same, which is not limited herein.
According to the above technical scheme, the first food material information of the target pet is determined according to the characteristic information of the target pet, so that the target food material information can be obtained from the first food material information according to the specific gravities of the nutritional ingredients contained in the recipe of the target pet. In this way, the selected food materials better meet the needs of the target pet, and a targeted and accurate pet recipe can be generated.
Optionally, when the first food material information of the target pet is determined according to the multiple characteristic information of the target pet, the processing module 502 is specifically configured to:
acquiring second food material information from a food material database according to a plurality of characteristic information of the target pet;
acquiring food material information of other pets, wherein the similarity between the characteristic information of the other pets and the characteristic information of the target pet is higher than a preset threshold;
and determining the first food material information according to the second food material information and the food material information of other pets.
In the present application, the food material database includes correspondences between at least one characteristic information and food material information. Obtaining the second food material information from the food material database according to the plurality of characteristic information of the target pet can be understood as: obtaining the second food material information according to the plurality of characteristic information of the target pet and the correspondences. This may include, for example: determining, from the database, characteristic information matching the plurality of characteristic information of the target pet; and determining, according to the correspondences, the second food material information corresponding to the matched characteristic information.
The preset threshold value can be configured in the electronic device in advance.
When the first food material information is determined according to the second food material information and the food material information of the other pets, the processing module 502 is specifically configured to determine the union of the second food material information and the food material information of the other pets, or a subset of that union, or their intersection, as the first food material information.
According to the above technical scheme, the second food material information is obtained from the food material database according to the plurality of characteristic information of the target pet, and the first food material information is determined according to the second food material information and the food material information of other pets. In this way, the food materials of the target pet are enriched by the food material information of other pets whose characteristic information similarity is higher than the preset threshold, which avoids the problem of the target pet's food materials being too limited.
Optionally, when the recipe of the target pet is generated according to the target food material information, the processing module 502 is specifically configured to:
acquiring state information corresponding to the target pet;
and generating a recipe of the target pet according to the state information and the target food material information.
Wherein the status information comprises at least one of: environmental information, geographical location information, time information. The environmental information may include at least one of: indoor environment, outdoor environment. The indoor environment may for example comprise at least one of: indoor temperature, indoor air humidity, indoor decibel information, etc. The outdoor environment may include, for example, at least one of: outdoor temperature, outdoor air humidity, outdoor decibel information, outdoor precipitation information, outdoor snowfall information, outdoor air pressure information, outdoor air quality, and the like. The time information may include, for example, season information and the like.
According to the above technical scheme, the recipe of the target pet is generated according to the state information and the target food material information, so that the generated pet recipe is more refined and better meets the needs of the pet.
Optionally, after the recipe of the target pet is generated according to the state information and the target food material information, the obtaining module 501 is further configured to obtain a plurality of pet recipes from a recipe database; the processing module 502 is further configured to determine, from the plurality of pet recipes, a pet recipe whose similarity to the recipe of the target pet is higher than a recipe similarity threshold; and the processing module 502 is further configured to push that pet recipe to the user. In this way, the user can also feed the pet according to a pushed pet recipe whose similarity to the target pet's recipe is higher than the recipe similarity threshold, which enriches the pet's recipes.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an electronic device in a hardware operating environment according to an embodiment of the present application.
An embodiment of the application provides an electronic device for pet recipe generation based on image recognition, which comprises a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, the one or more programs comprising instructions for performing the steps of any one of the image recognition-based pet recipe generation methods. As shown in fig. 6, an electronic device of a hardware operating environment according to an embodiment of the present application may include:
a processor 601, such as a CPU.
The memory 602 may be a high-speed RAM memory, or a non-volatile (stable) memory such as a disk memory.
A communication interface 603 for implementing connection communication between the processor 601 and the memory 602.
Those skilled in the art will appreciate that the configuration of the electronic device shown in fig. 6 is not intended to be limiting and may include more or fewer components than shown, or some components in combination, or a different arrangement of components.
As shown in fig. 6, the memory 602 may include an operating system, a network communication module, and one or more programs. An operating system is a program that manages and controls the server hardware and software resources, supporting the execution of one or more programs. The network communication module is used for communication among the components in the memory 602 and with other hardware and software in the electronic device.
In the electronic device shown in fig. 6, the processor 601 is configured to execute one or more programs in the memory 602, and implement the following steps:
acquiring at least one image containing different parts of a target pet, wherein each image comprises at least one part of the target pet;
performing feature extraction on each image through a pre-established residual network, and splicing the features extracted from convolutional layers of different depths to obtain a target feature map of each image;
splicing the target feature maps of the at least one image to serve as input data of a first network, so as to perform multi-task classification through the first network and obtain a plurality of characteristic information of the target pet;
and generating a recipe for the target pet according to the plurality of characteristic information of the target pet.
For specific implementation of the electronic device related to the present application, reference may be made to the above embodiments of the pet recipe generation method based on image recognition, which are not described herein again.
The present application also provides a computer readable storage medium for storing a computer program, the stored computer program being executable by a processor to perform the steps of:
acquiring at least one image containing different parts of a target pet, wherein each image comprises at least one part of the target pet;
performing feature extraction on each image through a pre-established residual network, and splicing the features extracted from convolutional layers of different depths to obtain a target feature map of each image;
splicing the target feature maps of the at least one image to serve as input data of a first network, so as to perform multi-task classification through the first network and obtain a plurality of characteristic information of the target pet;
and generating a recipe of the target pet according to the plurality of characteristic information of the target pet.
For specific implementation of the computer-readable storage medium related to the present application, reference may be made to the above embodiments of the pet recipe generation method based on image recognition, which are not described herein again.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art should understand that the present application is not limited by the order of acts described, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art should also appreciate that the embodiments described in this specification are preferred embodiments and that the acts and modules involved are not necessarily required for this application.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and these modifications or substitutions do not depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A pet recipe generation method based on image recognition is characterized by comprising the following steps:
acquiring at least one image containing different parts of a target pet, wherein each image comprises at least one part of the target pet;
performing feature extraction on each image through a pre-established residual network, and splicing the features extracted from convolutional layers of different depths to obtain a target feature map of each image;
splicing the target feature maps of the at least one image to serve as input data of a first network, and performing multi-task classification through the first network to obtain a plurality of characteristic information of the target pet;
and generating a recipe of the target pet according to the plurality of characteristic information of the target pet.
2. The method of claim 1, wherein said obtaining at least one image containing different parts of the target pet comprises:
displaying an image uploading interface, wherein the image uploading interface comprises an image uploading area and an image uploading control;
and in response to an upload operation on the image uploading control, acquiring the at least one image from the image uploading area.
3. The method of claim 1, wherein generating the recipe for the target pet based on the plurality of characteristic information of the target pet comprises:
determining first food material information of the target pet according to the plurality of characteristic information of the target pet;
obtaining weight factors corresponding to the plurality of characteristic information of the target pet, wherein the weight factors are the specific gravities of the nutritional ingredients contained in the recipe of the target pet;
acquiring target food material information from the first food material information according to the weight factors;
and generating a recipe of the target pet according to the target food material information.
4. The method of claim 3, wherein the determining the first food material information of the target pet according to the plurality of characteristic information of the target pet comprises:
acquiring second food material information from a food material database according to the plurality of characteristic information of the target pet;
acquiring food material information of other pets, wherein the similarity between the characteristic information of the other pets and the characteristic information of the target pet is higher than a preset threshold;
and determining the first food material information according to the second food material information and the food material information of other pets.
5. The method of claim 3, wherein the generating the recipe for the target pet according to the target food material information comprises:
acquiring state information corresponding to the target pet;
and generating a recipe of the target pet according to the state information and the target food material information.
6. The method of claim 5, wherein the status information comprises at least one of: environmental information, geographical location information, time information.
7. The method according to any one of claims 1-6, wherein one characteristic information comprises one of the following: the age of the target pet, the category of the target pet, the weight of the target pet, and the physiological status of the target pet.
8. A pet recipe generating device based on image recognition is characterized by comprising an acquisition module and a processing module,
the acquisition module is used for acquiring at least one image containing different parts of a target pet, and each image comprises at least one part of the target pet;
the processing module is used for performing feature extraction on each image through a pre-established residual network, and splicing the features extracted from convolutional layers of different depths to obtain a target feature map of each image;
the processing module is further used for splicing the target feature maps of the at least one image to serve as input data of a first network, so that multi-task classification can be performed through the first network to obtain a plurality of characteristic information of the target pet;
the processing module is further used for generating a recipe of the target pet according to the plurality of characteristic information of the target pet.
9. An electronic device for pet recipe generation based on image recognition, comprising a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, the one or more programs comprising instructions for performing the steps of the method of any one of claims 1-7.
10. A computer-readable storage medium, wherein the computer-readable storage medium is used to store a computer program, and the computer program, when executed by a processor, implements the method of any one of claims 1-7.
CN202111582666.9A 2021-12-22 2021-12-22 Pet recipe generation method based on image recognition and related device Pending CN114464294A

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111582666.9A CN114464294A (en) 2021-12-22 2021-12-22 Pet recipe generation method based on image recognition and related device


Publications (1)

Publication Number Publication Date
CN114464294A 2022-05-10

Family

ID=81405541

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111582666.9A Pending CN114464294A (en) 2021-12-22 2021-12-22 Pet recipe generation method based on image recognition and related device

Country Status (1)

Country Link
CN (1) CN114464294A

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110634552A (en) * 2019-09-19 2019-12-31 青岛海尔科技有限公司 Recipe pushing method and device based on Internet of things operating system
CN111191066A (en) * 2019-12-23 2020-05-22 厦门快商通科技股份有限公司 Image recognition-based pet identity recognition method and device
CN111758113A (en) * 2018-01-16 2020-10-09 哈比有限公司 Method and system for a pet health platform
CN112167074A (en) * 2020-10-14 2021-01-05 北京科技大学 Automatic feeding device based on pet face recognition
CN112579873A (en) * 2019-09-27 2021-03-30 北京安云世纪科技有限公司 Cooking recipe recommendation method and device, storage medium and electronic equipment
CN112786154A (en) * 2021-01-18 2021-05-11 京东方科技集团股份有限公司 Recipe recommendation method and device, electronic equipment and storage medium


Similar Documents

Publication Publication Date Title
US20210241109A1 (en) Method for training image classification model, image processing method, and apparatuses
CN109658938B (en) Method, device and equipment for matching voice and text and computer readable medium
CN111291841B (en) Image recognition model training method and device, computer equipment and storage medium
US20160379091A1 (en) Training a classifier algorithm used for automatically generating tags to be applied to images
US20200065706A1 (en) Method for verifying training data, training system, and computer program product
CN111125422A (en) Image classification method and device, electronic equipment and storage medium
CN111950596A (en) Training method for neural network and related equipment
CN111160016B (en) Semantic recognition method and device, computer readable storage medium and computer equipment
CN111507134A (en) Human-shaped posture detection method and device, computer equipment and storage medium
CN110580278A (en) personalized search method, system, equipment and storage medium according to user portrait
CN111553419A (en) Image identification method, device, equipment and readable storage medium
US20230334893A1 (en) Method for optimizing human body posture recognition model, device and computer-readable storage medium
CN113408570A (en) Image category identification method and device based on model distillation, storage medium and terminal
CN108492301A (en) A kind of Scene Segmentation, terminal and storage medium
CN110503162A (en) A kind of media information prevalence degree prediction technique, device and equipment
CN107193941A (en) Story generation method and device based on picture content
CN111506596A (en) Information retrieval method, information retrieval device, computer equipment and storage medium
CN108596094B (en) Character style detection system, method, terminal and medium
KR102280307B1 (en) METHOD AND APPARATUS FOR CURATING Companion animal feed and snack, and system using the same
CN114464294A (en) Pet recipe generation method based on image recognition and related device
CN112132026A (en) Animal identification method and device
CN116719904A (en) Information query method, device, equipment and storage medium based on image-text combination
CN109033078B (en) The recognition methods of sentence classification and device, storage medium, processor
US20220327361A1 (en) Method for Training Joint Model, Object Information Processing Method, Apparatus, and System
CN114821771A (en) Clipping object determining method in image, video clipping method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination