CN111210397B - Image processing method, image display method and device and electronic equipment - Google Patents

Image processing method, image display method and device and electronic equipment

Info

Publication number
CN111210397B
CN111210397B (application CN202010026964.9A)
Authority
CN
China
Prior art keywords
image
dish
processed
images
objects
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010026964.9A
Other languages
Chinese (zh)
Other versions
CN111210397A (en)
Inventor
蒋尉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Koubei Shanghai Information Technology Co Ltd
Original Assignee
Koubei Shanghai Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koubei Shanghai Information Technology Co Ltd filed Critical Koubei Shanghai Information Technology Co Ltd
Priority to CN202010026964.9A
Publication of CN111210397A
Application granted
Publication of CN111210397B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53 Querying
    • G06F16/538 Presentation of query results
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Library & Information Science (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Editing Of Facsimile Originals (AREA)

Abstract

The embodiments of the present application provide an image processing method, an image display method and apparatus, and an electronic device. The image processing method comprises: establishing a correspondence between objects and images; in response to a plurality of objects being selected, processing the images of the selected objects based on the correspondence between the objects and the images to obtain a plurality of processed images; and synthesizing the plurality of processed images with a pre-obtained image template to obtain and display a composite image. Because every selected object is composited onto the resulting image, the composite image obtained with the method of this embodiment meets the requirement that the selected objects be synthesized onto one image, thereby solving the problems that prior-art image synthesis is time-consuming and labor-intensive and can hardly meet the requirements of forming image combinations.

Description

Image processing method, image display method and device and electronic equipment
Technical Field
The present application relates to the field of computer technology, and in particular to an image processing method, an image display method and apparatus, and an electronic device.
Background
With the rapid development of science and technology, people's living standards continue to improve, and with them the demand for combinations or packages of articles. When an image corresponding to an article combination or package is to be displayed, the images of several individual articles need to be combined into a single combination image or package image. A combination image or package image is one on which every individual article making up the combination or package appears. For example, in dining scenarios it is common to need to generate a package image from several single-dish records, and equally common to need to display that package image.
To meet this need, the prior art mainly combines images by manually cutting them out: the individual images are cut out first and then assembled into a package image. This approach is time-consuming and labor-intensive, and the more individual images the composite requires, the longer it takes. Although some software can automatically merge several images into one, software-merged images have two drawbacks. First, they are generally produced by simple stitching, which does not satisfy the requirements of an image combination. Second, the individual images in a software-merged result often overlap or occlude one another, which likewise fails to meet the requirements for forming an image combination.
Disclosure of Invention
The embodiments of the present application provide an image processing method that aims to solve the problems that existing image processing methods are time-consuming and labor-intensive and can hardly meet the requirements of forming image combinations.
In a first aspect, an embodiment of the present application provides an image processing method, including: establishing a corresponding relation between the object and the image; in response to a plurality of objects being selected, processing images of the plurality of selected objects to obtain a plurality of processed images based on correspondence between the objects and the images; synthesizing the plurality of processed images and a pre-obtained image template to obtain a synthesized image comprising the plurality of processed images; and displaying the composite image.
Optionally, a display area for placing the processed images is arranged in the image template; and the display area and the attribute information of the selected object have a corresponding relation.
Optionally, the method further includes: obtaining attribute information of the selected object; and obtaining the image template according to the attribute information of the selected object.
Optionally, the attribute information of the selected object includes category information to which each selected object belongs; the obtaining the image template according to the attribute information of the selected object includes: and obtaining an image template provided with the category information to which each selected object belongs.
Optionally, the method further includes: and obtaining the image template according to the number of the plurality of processed images.
Optionally, the obtaining the image template according to the number of the plurality of processed images includes: and obtaining the image template according to the number of the plurality of processed images and the corresponding relation between the number of the pre-obtained images and the image template.
Optionally, the synthesizing the plurality of processed images and the pre-obtained image template to obtain a synthesized image including the plurality of processed images includes: determining a display area of each processed image in the image template according to the corresponding relation between the attribute information of the selected object and the processed images and the corresponding relation between the display area and the attribute information of the selected object; and generating the composite image comprising each processed image according to the determined display area of each processed image in the image template.
Optionally, the determining, according to the correspondence between the attribute information of the selected objects and the plurality of processed images and the correspondence between the display areas and the attribute information of the selected objects, a display area of each processed image in the image template includes: obtaining the attribute information of the selected object corresponding to each processed image according to the correspondence between the attribute information of the selected objects and the plurality of processed images; and determining the display area of each processed image in the image template according to the attribute information of the selected object corresponding to each processed image and the correspondence between the display areas and the attribute information of the selected objects.
Optionally, the method further includes: for each processed image, generating an image component, wherein the image component comprises the processed image and attribute information of an object in the processed image; the synthesizing the plurality of processed images and the pre-obtained image template to obtain a synthesized image including the plurality of processed images includes: matching the attribute information in each image component with the attribute information of the selected object associated with each display area, and determining the display area of each processed image in the image template; and generating the composite image comprising each processed image according to the determined display area of each processed image in the image template.
Optionally, the image component further includes identification information of an object in the processed image; the method further comprises the following steps: and binding the identification information of the object in the processed image in each image component with the position information of the processed image in the composite image.
Optionally, if the display areas of the processed images in the image template are the same, randomly allocating display positions to the processed images in the same display area.
Optionally, the processing, in response to a plurality of objects being selected, of the images of the plurality of selected objects based on the correspondence between the objects and the images to obtain a plurality of processed images includes: obtaining a plurality of selected objects provided by a client; obtaining the images of the selected objects based on the correspondence between the objects and the images; and processing the images of the selected objects to obtain a plurality of processed images.
Optionally, the method further includes: providing the composite image to the client.
Optionally, the method further includes: obtaining a request message sent by the client requesting the composite image; and the providing the composite image to the client includes: providing the composite image to the client in response to the request message.
Optionally, the object is a dish object; the image of the object is a dish image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including: a corresponding relation establishing unit for establishing a corresponding relation between the object and the image; a processed image obtaining unit configured to, in response to a plurality of objects being selected, process images of the plurality of selected objects based on correspondence between the objects and the images to obtain a plurality of processed images; a synthesizing unit configured to synthesize the plurality of processed images and an image template obtained in advance, and obtain a synthesized image including the plurality of processed images; and the display unit is used for displaying the composite image.
In a third aspect, an embodiment of the present application provides an image processing method, including: sending a plurality of original images to a server, wherein each original image comprises a dish object; and acquiring a target image returned by the server, wherein the target image comprises the dish objects in each original image.
In a fourth aspect, an embodiment of the present application provides an image processing apparatus, including: the server comprises an original image sending unit, a server side and a server side, wherein the original image sending unit is used for sending a plurality of original images to the server side, and each original image comprises a dish object; and the target image obtaining unit is used for obtaining a target image returned by the server, and the target image comprises the dish objects in each original image.
In a fifth aspect, an embodiment of the present application provides an image display method, including: displaying a first interface for selecting dish information; in response to a trigger operation for selecting dish information, displaying the selected dish information on a second interface, the selected dish information including an original image corresponding to the selected dish information; and, in response to a trigger operation for confirming the selected dish information, displaying on a third interface a target image comprising a plurality of dish objects, where the plurality of dish objects are the dish objects in the original images.
In a sixth aspect, an embodiment of the present application provides an image display apparatus, including: a first interface display unit for displaying a first interface for selecting dish information; a second interface display unit for displaying, in response to a trigger operation for selecting dish information, the selected dish information on a second interface, the selected dish information including an original image corresponding to the selected dish information; and a third interface display unit for displaying, in response to a trigger operation for confirming the selected dish information, a target image comprising a plurality of dish objects on a third interface, where the plurality of dish objects are the dish objects in the original images.
In a seventh aspect, an embodiment of the present application provides an electronic device, including: a processor; a memory for storing a computer program for execution by the processor for performing a method of image processing, the method comprising the steps of: establishing a corresponding relation between the object and the image; in response to a plurality of objects being selected, processing images of the plurality of selected objects to obtain a plurality of processed images based on correspondence between the objects and the images; synthesizing the plurality of processed images and a pre-obtained image template to obtain a synthesized image comprising the plurality of processed images; and displaying the composite image.
In an eighth aspect, an embodiment of the present application provides a computer storage medium storing a computer program, which is executed by a processor, and performs a method of image processing, where the method includes: establishing a corresponding relation between the object and the image; in response to a plurality of objects being selected, processing images of the plurality of selected objects to obtain a plurality of processed images based on correspondence between the objects and the images; synthesizing the plurality of processed images and a pre-obtained image template to obtain a synthesized image comprising the plurality of processed images; and displaying the composite image.
Compared with the prior art, the embodiment of the application has the following advantages:
an embodiment of the present application provides an image processing method, including: establishing a corresponding relation between the object and the image; in response to a plurality of objects being selected, processing images of the plurality of selected objects to obtain a plurality of processed images based on correspondence between the objects and the images; synthesizing the plurality of processed images and a pre-obtained image template to obtain a synthesized image comprising the plurality of processed images; and displaying the composite image. The method comprises the steps of firstly establishing a corresponding relation between an object and an image; then, responding to the selection of a plurality of objects, and processing the images of the selected objects based on the corresponding relation between the objects and the images to obtain a plurality of processed images; then, synthesizing the plurality of processed images and a pre-obtained image template to obtain a synthesized image comprising the plurality of processed images; and displaying the composite image. The synthetic image obtained by the image processing method of the embodiment can meet the requirement that the selected object is synthesized on the synthetic image, so that the problems that the image synthesis method in the prior art is time-consuming and labor-consuming and is difficult to meet the requirement of forming image combination can be solved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments described in the present application, and those skilled in the art can obtain other drawings from them.
Fig. 1-A is a schematic diagram of an embodiment of an application scenario provided in the present application.
Fig. 1-B is a schematic diagram of a prior-art dish package.
Fig. 1-C is a schematic diagram of a dish package image generated by selecting dish objects using the image processing method.
Fig. 1-D is a schematic diagram of the process of generating a dish package image according to the first embodiment.
Fig. 1 is a flowchart of an image processing method according to a first embodiment of the present application.
Fig. 2 is a schematic diagram of an image processing apparatus according to a second embodiment of the present application.
Fig. 3 is a flowchart of an image processing method according to a third embodiment of the present application.
Fig. 4 is a schematic diagram of an image processing apparatus according to a fourth embodiment of the present application.
Fig. 5 is a flowchart of an image displaying method according to a fifth embodiment of the present application.
Fig. 6 is a schematic view of an image display apparatus according to a sixth embodiment of the present application.
Fig. 7 is a schematic diagram of an image processing electronic device according to a seventh embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. However, the present application can be implemented in many ways other than those described herein, and those skilled in the art can make similar generalizations without departing from the spirit of the application; the application is therefore not limited to the specific implementations disclosed below.
Some embodiments provided by the application can be applied to interaction scenarios among a client, a server and a user, as shown in Fig. 1-A, which is a schematic diagram of an embodiment of an application scenario provided in the present application. A plurality of selected objects are provided at the client, which sends the server a request to synthesize the images of those selected objects. The server first obtains the images of the selected objects and processes them to obtain a plurality of processed images. It then synthesizes the processed images with a pre-obtained image template to obtain a composite image including all of the processed images, and finally sends the composite image to the client for display. Of course, all of the above steps may instead be performed at the client.
Specifically, the interaction among the client, the server and the user proceeds as follows. First, the user selects a plurality of objects following the instructions on the first interface of the client, and the images of the selected objects are obtained based on that selection. Before the image of a selected object can be obtained, the correspondence between objects and images must be established in advance; once the plurality of selected objects is known, their images can be obtained from the selected objects and this correspondence.
After the user selects the objects on the first interface, the objects selected by the user are displayed on the second interface of the client, where the user can add further objects or delete currently selected ones. Of course, after deletions it is necessary to ensure that at least two objects remain selected, so that the images corresponding to the user's final selection can still be synthesized with the image processing method of this embodiment.
Since the method of this embodiment can be applied to the generation of dish package images, it is explained below using that scenario for ease of understanding. It will of course be appreciated that the image processing method of this embodiment can also be used for image synthesis or image combination in other scenarios.
After the selected objects are determined, the images of the selected objects are sent to the server. Upon receiving the images of the selected objects and the synthesis request from the client, the server processes the images to obtain a plurality of processed images and then synthesizes them with a pre-obtained image template to obtain a composite image including all of the processed images. For example, when the images of the selected objects are all single-dish images with a white background, the edge of the plate holding each dish can be recognized, the background outside that edge removed, and the image portion containing the plate and the dish kept as the processed image.
Meanwhile, the attribute information of the object in each processed image can be obtained from the selected object. For example, when the selected object is the dish "dry pot chicken leg", its attribute information can be obtained: "hot dish" is the attribute information of the dish "dry pot chicken leg". While obtaining the processed image and the attribute information of its object, identification information of the object can also be obtained; in this example the dish name "dry pot chicken leg" serves as the identification information of the object in the processed image.
After the processed image and the attribute information of its object are obtained, they are combined into an image component. For example, after the processed image of the "dry pot chicken leg" dish is obtained, it is combined with the "hot dish" attribute information to form an image component. The server then selects, from a pre-stored image template library, an image template corresponding to the processed images and the attribute information of the selected objects. Each image template in the library is provided with display areas for placing the processed images, and each display area corresponds to attribute information of the selected objects. After the template is selected, the processed images are placed one by one at the corresponding positions of the template according to this correspondence between display areas and attribute information. Once the processed images have been placed in the template, their identification information may be bound to their positions, and a composite image including all of the processed images is obtained. For example, if the attribute information of the selected dish objects is exclusively "hot dish" and there are three processed dish images, a dish image template that holds three hot dishes is selected, and each processed dish image is placed at the corresponding position in the template's display area according to the attribute information in its image component, thereby obtaining a composite dish image, as summarized in the sketch below.
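To aid understanding, the server-side flow just described can be summarized in a short sketch. Every name below (build_package_image, matte_dish, DishImageComponent, pick_template, place_components, dish_image_index, dish_catalog, template_library) is an illustrative assumption rather than a term of this application; the helpers are sketched individually in the detailed description that follows.

```python
from PIL import Image

def build_package_image(selected_dishes, dish_image_index, dish_catalog, template_library):
    """Illustrative end-to-end flow: matte each selected dish image, wrap it in an
    image component together with its attribute and identification information,
    pick a matching template, then fill the template's display areas."""
    components = []
    for dish in selected_dishes:
        processed = matte_dish(Image.open(dish_image_index[dish]))   # keep only the plate and dish
        components.append(DishImageComponent(processed,
                                             dish_catalog[dish]["category"],
                                             dish))
    template = pick_template([c.category for c in components], template_library)
    return place_components(template, components)   # composite image plus name/position bindings
```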
Identification information of the objects in the processed images may also be added while the composite image is formed. In this case, the identification information of the object in each image component is bound to the position of that processed image in the composite image, so that when a user clicks a processed image in the composite image, the identification information of its object is displayed and the related information of that image is identified automatically. For example, the dish name "dry pot chicken leg" may be bound to the position of the "dry pot chicken leg" image in the composite dish image. In this scenario the composite dish image serves as the dish package image.
After the composite image is generated, the server side sends the composite image to the client side, and a user can check the composite image through a third interface of the client side.
The composite image obtained by this image processing method solves the problems that existing image processing methods are time-consuming and labor-intensive and can hardly meet the requirements of forming image combinations. It should be noted that this application scenario is only one embodiment; it is provided to facilitate understanding of the image processing method of the present application and is not intended to limit it.
The application provides an image processing method, an image display method and device, electronic equipment and a computer storage medium. The following are specific examples.
Fig. 1 is a flowchart of an image processing method according to a first embodiment of the present application. The method comprises the following steps.
Step S101: and establishing a corresponding relation between the object and the image.
Establishing the correspondence between objects and images is the first step of this embodiment. The embodiment synthesizes the images of the objects a user selects, so after the selected objects are obtained, their images must also be obtained, which requires the correspondence between objects and images.
When synthesizing a dish package image, the correspondence between dish objects and dish images is established first, where a dish object and a dish image refer respectively to the object and the image of a single dish. This step makes it possible to provide, after the user selects a dish object, the dish image corresponding to that object.
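As an illustration only, the correspondence could be held in a simple in-memory lookup table keyed by dish name (a database table would serve equally well); the dish names and paths below are hypothetical, not part of this application.

```python
# Hypothetical correspondence between dish objects (names) and dish images (file paths).
dish_image_index = {
    "dry pot chicken leg": "images/dry_pot_chicken_leg.png",
    "cucumber salad": "images/cucumber_salad.png",
    "steamed rice": "images/steamed_rice.png",
}

def images_for_selection(selected_dishes):
    """Return the dish image for each dish object the user has selected."""
    return {dish: dish_image_index[dish] for dish in selected_dishes}
```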
Step S102: in response to the plurality of objects being selected, processing the images of the plurality of selected objects to obtain a plurality of processed images based on the correspondence between the objects and the images.
In this step, after the user selects objects, the images of the plurality of selected objects are obtained based on the correspondence between objects and images established in step S101, and those images are processed to obtain a plurality of processed images.
Specifically, if the image processing method is executed at the server, the client provides the plurality of selected objects; that is, the user makes the selection on the client side.
A schematic diagram of the user selecting objects at the client is shown in Fig. 1-C; in this embodiment the user may be a merchant user. First, the merchant user selects, on the first interface of the client, all of the dish objects that will make up the dish package image. All of the dish objects selected by the merchant user are then displayed on the second interface of the client, where the merchant user may submit them. The submitted dish objects are used in the following process: the server receives all of the selected dish objects and automatically synthesizes their images onto one dish package image, which therefore contains every dish object the merchant user has selected. After synthesizing the dish package image, the server returns it to the third interface of the client for display to the user.
Existing dish package generation methods mainly synthesize packages from text, so most existing dish packages take the form of text. Even when a dish package uses images, several dish images cannot be combined onto one dish package image at the same time. As shown in Fig. 1-B, a schematic diagram of a prior-art dish package, a package in this form can only show one dish image at a time; to view the images of the other dishes in the package, the user has to page to the next dish image manually or wait some time until it is displayed automatically on the current page. Obviously, this display form is not conducive to viewing the images of all the dishes in the package at once.
To address this problem, the images of several dish objects can be synthesized into one image, or the dish objects can be placed together and photographed, so that all the dish objects in the package can be viewed at once. The first approach requires manually cutting out the dish images and then combining them, which is time-consuming and labor-intensive; the second approach, photographing many dishes arranged together, is expensive and impractical for a merchant user.
With the image processing approach of this embodiment, a merchant user only needs to select the dish objects and upload the selection to the server, and the server automatically synthesizes a dish package image containing all of the selected dishes, saving time, labor and cost. Fig. 1-C is a schematic diagram of generating a dish package image by selecting dish objects with the image processing method. Specifically, the merchant user first selects, on the first interface of the client, all of the dish objects for composing the dish package image; the selected dish objects are then displayed on the second interface of the client, where the merchant user may submit them.
Of course, before the dish package image is synthesized, the dish images of the plurality of selected dish objects need to be obtained. For example, when the selected dish object is "dry pot chicken leg", the dish image of "dry pot chicken leg" must be obtained; see Fig. 1-D, which includes the "dry pot chicken leg" dish image. After the dish images of the selected dish objects are obtained, they are matted (cut out) to obtain the plurality of processed dish images.
Step S103: and synthesizing the plurality of processed images and the image template obtained in advance to obtain a synthesized image comprising the plurality of processed images.
After the plurality of processed dish images are obtained, the plurality of processed images and the image template obtained in advance are synthesized to obtain a synthesized image including the plurality of processed images.
Fig. 1-D is a schematic diagram of the process of generating the dish package image in this embodiment. As shown in Fig. 1-D, after obtaining the plurality of processed dish images, the server synthesizes them with a pre-obtained dish package image template to obtain a dish package image that includes all of the processed dish images, i.e. all of the selected dish objects.
As explained above, the plurality of dish images are obtained first and then processed to yield the plurality of processed dish images. Specifically, matting here refers to cropping each image in the manner shown in Fig. 1-D. The cropping of each dish image proceeds as follows: the sizes of all the dish images are compared, and every dish image can be cropped according to the largest of the regions containing a dish object. It is assumed that all of the uploaded dish images have the same size and a white background. The region of each dish image containing the dish object may be cropped with reference to the size of the plate, in the manner shown in Fig. 1-D. Through this matting, each processed dish image contains the dish object and as little irrelevant image area as possible, so that in the subsequent composite image the irrelevant area of one dish image interferes as little as possible with the display of the others.
It will be appreciated that in other implementations the selected dish images may be cropped to other sizes, as long as the dish portion remains intact in the processed image.
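For illustration, a minimal matting sketch under the assumptions stated above (uniform white background, plate and dish darker than the background); the threshold and function name are assumptions rather than part of this application.

```python
from PIL import Image, ImageOps

def matte_dish(image, white_threshold=245):
    """Crop a white-background dish image down to the smallest rectangle
    containing the plate and the dish, discarding the background."""
    gray = ImageOps.grayscale(image)
    # Pixels darker than the threshold are treated as belonging to the plate or dish.
    mask = gray.point(lambda p: 255 if p < white_threshold else 0)
    bbox = mask.getbbox()            # bounding box of the non-background pixels
    return image.crop(bbox) if bbox else image
```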
At the same time, an image template also needs to be obtained. The image template is provided with display areas for placing the processed images, and each display area corresponds to attribute information of the selected objects.
As one way of obtaining the image template, the attribute information of the selected objects is obtained first, and the image template is then obtained according to that attribute information. Specifically, the attribute information of a selected object includes the category to which it belongs, so obtaining the image template according to the attribute information of the selected objects can mean obtaining an image template provided with the categories to which the selected objects belong.
The image of a selected object and its attribute information can be acquired at the same time. For example, as shown in Fig. 1-D, the attribute information of a selected dish object may be obtained in the illustrated manner; whether the dish object is a hot dish, a cold dish, a staple food or a beverage is all attribute information of the dish object.
Specifically, the attribute information of a dish object may be obtained as follows. First the selected dish object is obtained, and its attribute information is then obtained according to the selected dish object and the correspondence between dish objects and their attribute information. The attribute information of each dish object includes the category to which it belongs, i.e. whether the dish is a hot dish, a cold dish, a staple food or a beverage.
For example, when the selected dish object is "dry pot chicken leg", the dish name "dry pot chicken leg" is obtained first, and the correspondence between dish names and dish categories is then looked up in the dish library: searching with "dry pot chicken leg" returns "hot dish". The attribute information of the selected dish object is obtained in this way, and the attribute information of all the other selected dish objects (hot dish, cold dish, staple food, beverage and so on) can be obtained similarly.
The attribute information of a dish object includes not only the category (hot dish, cold dish, staple food, beverage and so on) but also the price of the dish. As one way of obtaining the price, the price attached to the dish object selected by the merchant user can be read directly; alternatively, the price can be obtained from the corresponding entry for the dish object in the dish library.
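For illustration, a minimal sketch of such a dish library lookup; the dish names, categories and prices are hypothetical.

```python
# Hypothetical dish library: each dish name maps to its category and price.
dish_catalog = {
    "dry pot chicken leg": {"category": "hot dish", "price": 38.0},
    "cucumber salad": {"category": "cold dish", "price": 12.0},
    "steamed rice": {"category": "staple food", "price": 3.0},
    "iced tea": {"category": "beverage", "price": 8.0},
}

def dish_attributes(dish_name):
    """Look up the attribute information (category and price) of a selected dish object."""
    return dish_catalog[dish_name]
```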
In addition, the image template can also be obtained according to the number of processed images, that is, according to the number of processed images and a pre-obtained correspondence between image counts and image templates. For example, when the selected dish objects are hot dishes 1 to 5, cold dish 1, rice and a beverage, a dish package image template is selected from the pre-stored template library that contains a hot dish area, a cold dish area, a staple food area and a beverage area and can hold these eight selected dish objects: the hot dish area can display the five processed hot dish images, the cold dish area the one processed cold dish image, the staple food area the one processed staple food image, and the beverage area the one processed beverage image.
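For illustration, a minimal sketch of choosing a template from a pre-stored library by the per-category counts of the processed images; the template structure (a "capacity" mapping of category to number of slots) is an assumption.

```python
from collections import Counter

def pick_template(categories, template_library):
    """Return the first template whose display areas have enough slots
    for the number of processed images in each category."""
    needed = Counter(categories)     # e.g. {"hot dish": 5, "cold dish": 1, ...}
    for template in template_library:
        if all(template["capacity"].get(cat, 0) >= n for cat, n in needed.items()):
            return template
    raise LookupError("no stored template matches the selected dish combination")
```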
After the image template is obtained, the plurality of processed images are synthesized with it to obtain the composite image as follows: first, the display area of each processed image in the image template is determined according to the correspondence between the attribute information of the selected objects and the processed images and the correspondence between the display areas and the attribute information of the selected objects; then the composite image including every processed image is generated according to the determined display areas.
Specifically, determining the display area of each processed image in the image template can be done by first obtaining the attribute information of the selected object corresponding to each processed image, according to the correspondence between the attribute information of the selected objects and the processed images, and then determining the display area of each processed image according to that attribute information and the correspondence between display areas and attribute information of the selected objects.
Further, to facilitate identifying the objects in the composite image, an image component is generated for each processed image before the composite image is formed; the image component includes the processed image and the attribute information of the object in it. The composite image can then also be obtained by matching the attribute information in each image component against the attribute information of the selected objects associated with the display areas, determining the display area of each processed image in the image template, and generating the composite image including every processed image according to the determined display areas.
Meanwhile, the image component may further include identification information of the object in the processed image; to further facilitate identifying objects in the composite image, the identification information of the object in each image component is bound to the position information of that processed image in the composite image.
Specifically, once the selected dish object is obtained, the name of the dish can also be obtained, for example from the corresponding entry in the dish library; in this embodiment the dish name serves as the identification information of the dish object in the dish package image. After the attribute information and the dish name are identified, each processed dish image is combined with them to form a dish image component, as shown in Fig. 1-D, which illustrates the forming process: each dish image is matted, the attribute information and dish name of the dish object are obtained at the same time, and each processed dish image is associated with its attribute information and dish name to form a dish image component.
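For illustration, a dish image component could be a small data structure holding the processed image together with the attribute and identification information; the class and field names are assumptions.

```python
from dataclasses import dataclass
from PIL import Image

@dataclass
class DishImageComponent:
    """A processed dish image plus the attribute information (category) and
    identification information (dish name) of the dish object it contains."""
    processed_image: Image.Image
    category: str        # attribute information, e.g. "hot dish"
    dish_name: str       # identification information, e.g. "dry pot chicken leg"
```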
After the plurality of dish image components and the dish package image template are obtained, a composite image including every processed dish image is obtained from each dish image component and the pre-obtained dish package image template, as follows.
First, the attribute information in each dish image component is matched against the attribute information of the selected dish objects associated with the display areas of the dish package image template, which determines the display area of each processed dish image; the dish package image including every processed dish image is then generated according to those display areas. As shown in Fig. 1-D, the acquired dish package image template is partitioned, each partition being a display area labelled "hot dish area", "cold dish area", "staple food area" or "beverage area" and corresponding to attribute information of the selected dish objects; the correspondence between the attribute information of each selected dish object and the display-area partitions can be established in advance.
The dish package image is then generated according to the display area of each processed dish image in the template. For example, referring to Fig. 1-D, since the correspondence between the attribute information of each selected dish object and the display-area partitions has been established in advance, the five processed hot dish images are filled into the hot dish area of the template, the processed cold dish image into the cold dish area, the processed staple food image into the staple food area and the processed beverage image into the beverage area, thereby generating the dish package image.
Meanwhile, the dish image component may further include identification information of the dish object in the processed dish image; to further facilitate identifying dish objects in the composite image, the identification information in each dish image component is bound to the position of the processed dish image in the composite dish package image, as sketched below.
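For illustration, a minimal sketch of placing each processed dish image into the display area of its category and recording the binding between each dish name and its position in the composite; the template fields (background, areas, slot_size) are assumptions.

```python
def place_components(template, components):
    """Paste each processed dish image into the next free slot of the display
    area matching its category, and bind each dish name to its position."""
    canvas = template["background"].copy()                       # PIL image of the template
    slots = {cat: list(boxes) for cat, boxes in template["areas"].items()}
    bindings = {}                                                # dish name -> (x, y) in the composite
    for comp in components:
        x, y = slots[comp.category].pop(0)                       # next free position in the matching area
        resized = comp.processed_image.resize(template["slot_size"])
        canvas.paste(resized, (x, y))
        bindings[comp.dish_name] = (x, y)
    return canvas, bindings
```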
Finally, the dish package image is provided to the client: after it is generated, it is sent to the third interface of the client for display, such as the third interface shown in Fig. 1-C.
As mentioned above, the method of this embodiment involves determining the display area of each processed image in the image template. It should be noted that if several processed images share the same display area in the template, display positions within that area are allocated to them randomly.
Step S104: and displaying the composite image.
After the image is synthesized in step S103, it may be provided to the client, and the client can then display it. First, a request message sent by the client requesting the composite image is obtained; the composite image is then provided to the client in response to that request message.
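As an illustration only, the request/response exchange could look like the following Flask-style sketch; the framework, endpoint, field names and the compose_for helper are all hypothetical and are not described in this application.

```python
import io
from flask import Flask, request, send_file   # assumed web framework

app = Flask(__name__)

@app.route("/package-image", methods=["POST"])
def package_image():
    """Receive the client's request message listing the selected dishes,
    compose the package image and return it as a PNG."""
    selected = request.json["dishes"]
    composite, _bindings = compose_for(selected)   # assumed wrapper around the steps above
    buf = io.BytesIO()
    composite.save(buf, format="PNG")
    buf.seek(0)
    return send_file(buf, mimetype="image/png")
```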
The method comprises the steps of firstly establishing a corresponding relation between an object and an image; then, responding to the selection of a plurality of objects, and processing the images of the selected objects based on the corresponding relation between the objects and the images to obtain a plurality of processed images; then, synthesizing the plurality of processed images and a pre-obtained image template to obtain a synthesized image comprising the plurality of processed images; and displaying the composite image. The synthetic image obtained by the image processing method of the embodiment can meet the requirement that the selected object is synthesized on the synthetic image, so that the problems that the image synthesis method in the prior art is time-consuming and labor-consuming and is difficult to meet the requirement of forming image combination can be solved.
In the first embodiment described above, an image processing method is provided, and accordingly, the present application provides an image processing apparatus. Fig. 2 is a schematic diagram of an image processing apparatus according to a second embodiment of the present application. Since the apparatus embodiments are substantially similar to the method embodiments, they are described in a relatively simple manner, and reference may be made to some of the descriptions of the method embodiments for relevant points. The device embodiments described below are merely illustrative.
A second embodiment of the present application provides an image processing apparatus including:
a correspondence relationship establishing unit 201 for establishing a correspondence relationship between the object and the image;
a processed image obtaining unit 202 configured to, in response to selection of a plurality of objects, process images of the plurality of selected objects based on correspondence between the objects and the images to obtain a plurality of processed images;
a synthesizing unit 203 for synthesizing the plurality of processed images and an image template obtained in advance to obtain a synthesized image including the plurality of processed images;
a display unit 204 for displaying the composite image.
Optionally, a display area for placing the processed images is arranged in the image template; and the display area and the attribute information of the selected object have a corresponding relation.
Optionally, the apparatus further includes: an attribute information obtaining unit for obtaining the attribute information of the selected object; and an image template obtaining unit for obtaining the image template according to the attribute information of the selected object.
Optionally, the attribute information of the selected object includes category information to which each selected object belongs; the image template obtaining unit is specifically configured to: and obtaining an image template provided with the category information to which each selected object belongs.
Optionally, the image template obtaining unit is further configured to: and obtaining the image template according to the number of the plurality of processed images.
Optionally, the image template obtaining unit is further configured to: and obtaining the image template according to the number of the plurality of processed images and the corresponding relation between the number of the pre-obtained images and the image template.
Optionally, the synthesis unit is specifically configured to: determining a display area of each processed image in the image template according to the corresponding relation between the attribute information of the selected object and the processed images and the corresponding relation between the display area and the attribute information of the selected object; and generating the composite image comprising each processed image according to the determined display area of each processed image in the image template.
Optionally, the synthesis unit is specifically configured to: obtaining attribute information of the selected object corresponding to each processed image according to the corresponding relation between the attribute information of the selected object and the processed images of the processed images; and determining the display area of each processed image in the image template according to the attribute information of the selected object corresponding to each processed image and the corresponding relation between the display area and the attribute information of the selected object.
Optionally, the apparatus further comprises an image component generation unit; the image component generation unit is specifically configured to: for each processed image, generate an image component, wherein the image component comprises the processed image and attribute information of an object in the processed image; the synthesis unit is specifically configured to: match the attribute information in each image component with the attribute information of the selected object associated with each display area, determine the display area of each processed image in the image template, and generate the composite image comprising each processed image according to the determined display area of each processed image in the image template.
Optionally, the image component further includes identification information of an object in the processed image; the apparatus further comprises a binding unit; the binding unit is specifically configured to: and binding the identification information of the object in the processed image in each image component with the position information of the processed image in the composite image.
Optionally, if the display areas of the processed images in the image template are the same, randomly allocating display positions to the processed images in the same display area.
Optionally, the processed image obtaining unit is specifically configured to: obtaining a plurality of selected objects provided by a client; obtaining images of the selected objects based on the corresponding relationship between the objects and the images; and processing the images of the selected objects to obtain a plurality of processed images.
Optionally, the apparatus further comprises a providing unit, specifically configured to provide the composite image to the client.
Optionally, the apparatus further comprises a request message obtaining unit, specifically configured to obtain a request message sent by the client requesting the composite image; the providing unit is specifically configured to provide the composite image to the client in response to the request message.
Optionally, the object is a dish object; the image of the object is a dish image.
The method comprises the steps of firstly establishing a corresponding relation between an object and an image; then, responding to the selection of a plurality of objects, and processing the images of the selected objects based on the corresponding relation between the objects and the images to obtain a plurality of processed images; then, synthesizing the plurality of processed images and a pre-obtained image template to obtain a synthesized image comprising the plurality of processed images; and displaying the composite image. The synthetic image obtained by the image processing device of the embodiment can meet the requirement that the selected object is synthesized on the synthetic image, so that the problems that the image synthesis method in the prior art is time-consuming and labor-consuming and is difficult to meet the requirement of forming image combination can be solved.
The first embodiment provides an image processing method, and the third embodiment provides another image processing method. Fig. 3 is a flowchart of the image processing method according to the third embodiment of the present application. Since the third embodiment is partly similar to the image processing method of the first method embodiment, it is described briefly; for the relevant details, reference may be made to the description of the first method embodiment. The image processing method embodiment described below is merely illustrative.
The method comprises the following steps:
Step S301: send a plurality of original images to the server, wherein each original image comprises a dish object.
Step S302: obtain a target image returned by the server, wherein the target image comprises the dish object in each original image.
In this embodiment, a plurality of original images are sent to the server, and the target image returned by the server is obtained. The target image obtained by the image processing method of this embodiment therefore combines a plurality of target subject objects in one image, which addresses the drawback of prior-art image synthesis methods that composing such image combinations is time-consuming and labor-intensive.
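On the client side, this exchange might look like the following sketch, assuming (for illustration only) that the server exposes an HTTP endpoint accepting multipart image uploads; the URL and form-field name are not specified by the patent.

```python
# Client-side sketch; the endpoint URL and field name are assumptions.
import requests

def request_target_image(original_paths, server_url="https://example.com/api/target-image"):
    # Step S301: send a plurality of original images, each containing a dish object.
    files = [("original_images", (path, open(path, "rb"), "image/png"))
             for path in original_paths]
    try:
        response = requests.post(server_url, files=files, timeout=30)
        response.raise_for_status()
    finally:
        for _field, (_name, handle, _ctype) in files:
            handle.close()
    # Step S302: obtain the target image returned by the server.
    with open("target_image.png", "wb") as out:
        out.write(response.content)
    return "target_image.png"
```

Calling request_target_image(["dish1.png", "dish2.png"]) would then save the returned target image containing both dish objects.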
The third embodiment described above provides an image processing method; correspondingly, the present application provides an image processing apparatus. Fig. 4 is a schematic diagram of the image processing apparatus according to a fourth embodiment of the present application. Since the apparatus embodiment is substantially similar to the method embodiment, it is described briefly; for the relevant details, reference may be made to the description of the method embodiment. The apparatus embodiment described below is merely illustrative.
A fourth embodiment of the present application provides an image processing apparatus comprising:
an original image sending unit 401, configured to send a plurality of original images to the server, where each original image includes a dish object.
a target image obtaining unit 402, configured to obtain a target image returned by the server, where the target image includes the dish object in each original image.
In this embodiment, a plurality of original images are sent to the server, and the target image returned by the server is obtained, so that the target image obtained with the image processing apparatus of this embodiment combines a plurality of target subject objects in one image. The apparatus thereby addresses the drawback of prior-art image synthesis approaches that composing such image combinations is time-consuming and labor-intensive.
The first embodiment provides an image processing method, and a fifth embodiment correspondingly provides an image display method. Fig. 5 is a flowchart of the image display method according to the fifth embodiment of the present application. Since the fifth embodiment is partly similar to the display-related part of the image processing method of the first method embodiment, it is described briefly; for the relevant details, reference may be made to the description of the first method embodiment.
The method comprises the following steps:
Step S501: display a first interface for selecting dish information.
Step S502: in response to a trigger operation for selecting dish information, display the selected dish information on a second interface, wherein the selected dish information comprises an original image corresponding to the selected dish information.
Step S503: in response to a trigger operation for confirming the selected dish information, display, on a third interface, a target image comprising a plurality of dish objects, wherein the plurality of dish objects are the dish objects in the original images.
In this embodiment, a first interface for selecting dish information is displayed; in response to a trigger operation for selecting dish information, the selected dish information is displayed on a second interface; and in response to a trigger operation for confirming the selected dish information, a target image comprising a plurality of dish objects is displayed on a third interface. The image display method of this embodiment therefore presents a plurality of dish objects together on a single target image.
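The three-interface flow can be pictured as a small client-side state holder; the sketch below is an assumption about structure only, since the patent does not fix a UI technology, and every function and field name is invented for illustration.

```python
# Sketch of the three-interface flow; structure and names are illustrative only.
class DishSelectionFlow:
    def __init__(self, available_dishes):
        self.available_dishes = available_dishes   # dish information for the first interface
        self.selected = []

    def first_interface(self):
        """First interface: present the dish information available for selection."""
        return {"view": "select", "dishes": self.available_dishes}

    def on_select(self, dish):
        """Trigger operation for selecting dish information -> second interface,
        showing the selected dish information with its original image."""
        self.selected.append(dish)
        return {"view": "selected",
                "dishes": [{"name": d["name"], "original_image": d["image"]}
                           for d in self.selected]}

    def on_confirm(self, fetch_target_image):
        """Trigger operation for confirming the selection -> third interface,
        showing the target image that contains all selected dish objects."""
        target = fetch_target_image([d["image"] for d in self.selected])
        return {"view": "target", "target_image": target}
```

Here fetch_target_image would be a callable such as the client-side upload sketched earlier.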
The fifth embodiment provides an image display method; correspondingly, the present application provides an image display apparatus. Fig. 6 is a schematic diagram of the image display apparatus according to a sixth embodiment of the present application. Since the apparatus embodiment is substantially similar to the method embodiment, it is described briefly; for the relevant details, reference may be made to the description of the method embodiment. The apparatus embodiment described below is merely illustrative.
A sixth embodiment of the present application provides an image display apparatus, including:
a first interface display unit 601, configured to display a first interface for selecting dish information.
a second interface display unit 602, configured to display the selected dish information on a second interface in response to a trigger operation for selecting dish information, where the selected dish information includes an original image corresponding to the selected dish information.
a third interface display unit 603, configured to display, on a third interface, a target image including a plurality of dish objects in response to a trigger operation for confirming the selected dish information, where the plurality of dish objects are the dish objects in the original images.
In this embodiment, a first interface for selecting dish information is displayed; in response to a trigger operation for selecting dish information, the selected dish information is displayed on a second interface; and in response to a trigger operation for confirming the selected dish information, a target image comprising a plurality of dish objects is displayed on a third interface. The image display apparatus of this embodiment therefore presents a plurality of dish objects together on a single target image.
A first embodiment of the present application provides an image processing method, and a seventh embodiment of the present application provides an electronic device corresponding to the method of the first embodiment.
Fig. 7 is a schematic diagram of the electronic device for image processing provided by this embodiment.
The present embodiment also provides an electronic device, including:
a processor 701;
a memory 702 for storing a computer program to be executed by a processor for performing a method of image processing, the method comprising the steps of:
establishing a corresponding relation between the object and the image;
in response to a plurality of objects being selected, processing images of the plurality of selected objects to obtain a plurality of processed images based on correspondence between the objects and the images;
synthesizing the plurality of processed images and a pre-obtained image template to obtain a synthesized image comprising the plurality of processed images;
and displaying the composite image.
In this method, the correspondence between objects and images is first established; then, in response to a plurality of objects being selected, the images of the selected objects are processed based on that correspondence to obtain a plurality of processed images; next, the processed images are synthesized with a pre-obtained image template to obtain a composite image containing them; and finally the composite image is displayed. The composite image obtained in this embodiment therefore combines all of the selected objects in a single image, which addresses the drawback of prior-art image synthesis methods that composing such image combinations is time-consuming and labor-intensive.
A first embodiment of the present application provides an image processing method, and an eighth embodiment of the present application provides a computer storage medium corresponding to the method of the first embodiment.
This embodiment provides a computer storage medium storing a computer program which, when executed by a processor, performs a method of image processing comprising the steps of:
establishing a corresponding relation between the object and the image;
in response to a plurality of objects being selected, processing images of the plurality of selected objects to obtain a plurality of processed images based on correspondence between the objects and the images;
synthesizing the plurality of processed images and a pre-obtained image template to obtain a synthesized image comprising the plurality of processed images;
and displaying the composite image.
In this method, the correspondence between objects and images is first established; then, in response to a plurality of objects being selected, the images of the selected objects are processed based on that correspondence to obtain a plurality of processed images; next, the processed images are synthesized with a pre-obtained image template to obtain a composite image containing them; and finally the composite image is displayed. The composite image obtained in this embodiment therefore combines all of the selected objects in a single image, which addresses the drawback of prior-art image synthesis methods that composing such image combinations is time-consuming and labor-intensive.
Although the present application has been described with reference to preferred embodiments, these embodiments are not intended to limit the application. Those skilled in the art can make variations and modifications without departing from the spirit and scope of the present application; therefore, the scope of protection of the present application should be determined by the claims that follow.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The memory may include a volatile memory in a computer-readable medium, such as a random access memory (RAM), and/or a non-volatile memory, such as a read-only memory (ROM) or a flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media (transitory computer readable media), such as modulated data signals and carrier waves.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein.

Claims (21)

1. An image processing method, comprising:
establishing a corresponding relation between an object and an image, wherein the object is a dish object, and the corresponding relation comprises the corresponding relation between the dish object and the dish image;
in response to a plurality of objects being selected, processing images of the plurality of selected objects to obtain a plurality of processed images based on correspondence between the objects and the images; wherein the processing the images of the selected objects to obtain processed images comprises: carrying out cutout processing on the dish images of the plurality of selected dish objects to obtain a plurality of processed dish images; each processed dish image of the plurality of processed dish images includes a dish object therein;
synthesizing the plurality of processed images and a pre-obtained image template to obtain a synthesized image comprising the plurality of processed images; the composite image is a dish set image formed by a plurality of selected dish objects; the image template is obtained based on the attribute information of the selected objects; the attribute information of the plurality of selected objects comprises category information to which each selected object belongs;
displaying the composite image;
before synthesizing the plurality of processed images and the pre-obtained image template to obtain a synthesized image including the plurality of processed images, the method further includes: and combining the processed dish image, the name information of the dish object and the category information of the dish object to generate a dish image component of each dish.
2. The method according to claim 1, wherein a display area for placing the plurality of processed images is disposed in the image template; and the display area and the attribute information of the selected object have a corresponding relation.
3. The method of claim 2, further comprising:
obtaining attribute information of the selected object;
and obtaining the image template according to the attribute information of the selected object.
4. The method of claim 3,
the obtaining the image template according to the attribute information of the selected object includes:
and obtaining an image template provided with the category information to which each selected object belongs.
5. The method of claim 1, further comprising: and obtaining the image template according to the number of the plurality of processed images.
6. The method of claim 5, wherein obtaining the image template according to the number of the plurality of processed images comprises: and obtaining the image template according to the number of the plurality of processed images and the corresponding relation between the number of the pre-obtained images and the image template.
7. The method of claim 2, wherein the synthesizing the plurality of processed images and the pre-obtained image template to obtain a synthesized image comprising the plurality of processed images comprises:
determining a display area of each processed image in the image template according to the corresponding relation between the attribute information of the selected object and the processed images and the corresponding relation between the display area and the attribute information of the selected object;
and generating the composite image comprising each processed image according to the determined display area of each processed image in the image template.
8. The method of claim 7, wherein the determining the display area of each processed image in the image template according to the corresponding relation between the attribute information of the selected object and the plurality of processed images and the corresponding relation between the display area and the attribute information of the selected object comprises:
obtaining attribute information of the selected object corresponding to each processed image according to the corresponding relation between the attribute information of the selected object and the plurality of processed images;
and determining the display area of each processed image in the image template according to the attribute information of the selected object corresponding to each processed image and the corresponding relation between the display area and the attribute information of the selected object.
9. The method of claim 2,
the synthesizing the plurality of processed images and the pre-obtained image template to obtain a synthesized image including the plurality of processed images includes:
matching the attribute information in each image component with the attribute information of the selected object in the display area, and determining the display area of each processed image in the image template;
and generating the composite image comprising each processed image according to the determined display area of each processed image in the image template.
10. The method of claim 9, wherein the image component further comprises identification information of objects in the processed image;
the method further comprises the following steps:
and binding the identification information of the object in the processed image in each image component with the position information of the processed image in the composite image.
11. The method according to claim 7 or 9, wherein if the display areas of the plurality of processed images in the image template are the same, the display positions of the plurality of processed images are randomly assigned in the same display area.
12. The method of claim 1, wherein in response to a plurality of objects being selected, processing images of the plurality of selected objects based on correspondence between the objects and the images to obtain a plurality of processed images comprises:
obtaining a plurality of selected objects provided by a client;
obtaining images of the selected objects based on the corresponding relationship between the objects and the images;
and processing the images of the selected objects to obtain a plurality of processed images.
13. The method of claim 12, further comprising: providing the composite image to the client.
14. The method of claim 13, further comprising: obtaining a request message sent by the client for requesting to obtain the composite image;
the providing the composite image to the client includes: providing the composite image to the client in response to the request message.
15. An image processing apparatus characterized by comprising:
a corresponding relation establishing unit, configured to establish a corresponding relation between an object and an image, wherein the object is a dish object, and the corresponding relation comprises a corresponding relation between the dish object and a dish image;
a processed image obtaining unit configured to, in response to a plurality of objects being selected, process images of the plurality of selected objects based on correspondence between the objects and the images to obtain a plurality of processed images; wherein the processed image obtaining unit is specifically configured to: carry out cutout processing on the dish images of the plurality of selected dish objects to obtain a plurality of processed dish images; each processed dish image of the plurality of processed dish images includes a dish object therein;
a synthesizing unit configured to synthesize the plurality of processed images and an image template obtained in advance, and obtain a synthesized image including the plurality of processed images; the composite image is a dish set image formed by a plurality of selected dish objects; the image template is obtained based on the attribute information of the selected objects; the attribute information of the plurality of selected objects comprises category information to which each selected object belongs;
a display unit for displaying the composite image;
the device also comprises an image component generating unit; the image component generation unit is specifically configured to: and combining each processed dish image, name information of the dish object and category information to which the dish object belongs to generate a dish image component of each dish before synthesizing the plurality of processed images and the pre-obtained image template to obtain a synthesized image comprising the plurality of processed images.
16. An image processing method, comprising:
sending a plurality of original images to a server, wherein each original image comprises a dish object;
obtaining a target image returned by a server, wherein the target image comprises a dish object in each original image; the target image is a dish package image formed by dish objects in a plurality of original images;
the target image is an image synthesized based on a plurality of processed original images and a pre-obtained image template; the image template is obtained based on attribute information of the dish object, the processed original images are obtained by performing cutout processing on the original images respectively, and each processed image in the processed original images comprises the dish object; the attribute information of the dish object comprises category information to which the dish object belongs;
wherein the target image is an image synthesized based on a dish image component of a plurality of dishes and a pre-obtained image template, and the dish image component of each of the dish image components of the plurality of dishes includes: the processed dish image, name information of the dish object and category information to which the dish object belongs.
17. An image processing apparatus characterized by comprising:
an original image sending unit, configured to send a plurality of original images to a server, wherein each original image comprises a dish object;
a target image obtaining unit, configured to obtain a target image returned by the server, wherein the target image comprises the dish object in each original image; the target image is a dish package image formed by the dish objects in the plurality of original images;
the target image is an image synthesized based on a plurality of processed original images and a pre-obtained image template; the image template is obtained based on attribute information of the dish object, the processed original images are obtained by performing cutout processing on the original images respectively, and each processed image in the processed original images comprises the dish object; the attribute information of the dish object comprises category information to which the dish object belongs;
wherein the target image is an image synthesized based on a dish image component of a plurality of dishes and a pre-obtained image template, and the dish image component of each of the dish image components of the plurality of dishes includes: the processed dish image, name information of the dish object and category information to which the dish object belongs.
18. An image display method, comprising:
displaying a first interface for selecting dish information;
displaying the selected dish information on a second interface in response to a trigger operation for selecting the dish information, wherein the selected dish information comprises an original image corresponding to the selected dish information;
displaying, on a third interface, a target image comprising a plurality of dish objects in response to a trigger operation for confirming the selected dish information, wherein the plurality of dish objects are dish objects in the original images; the target image is a dish package image consisting of the plurality of selected dish objects;
the target image is an image synthesized based on a plurality of processed original images and a pre-obtained image template; the image template is obtained based on attribute information of the plurality of dish objects, the processed plurality of original images are obtained by performing cutout processing on the plurality of original images respectively, and each processed image in the processed plurality of original images comprises a dish object; the attribute information of the dish object comprises category information to which the dish object belongs;
wherein the target image is an image synthesized based on a dish image component of a plurality of dishes and a pre-obtained image template, and the dish image component of each of the dish image components of the plurality of dishes includes: the processed dish image, name information of the dish object and category information to which the dish object belongs.
19. An image display apparatus, comprising:
the first interface display unit is used for displaying a first interface used for selecting the dish information;
the second interface display unit is used for displaying the selected dish information on a second interface in response to a trigger operation for selecting the dish information, and the selected dish information comprises an original image corresponding to the selected dish information;
a third interface display unit, configured to display, on a third interface, a target image including a plurality of dish objects in response to a trigger operation for confirming the selected dish information, where the plurality of dish objects are dish objects in the original images; the target image is a dish package image consisting of the plurality of selected dish objects;
the target image is an image synthesized based on a plurality of processed original images and a pre-obtained image template; the image template is obtained based on attribute information of the plurality of dish objects, the processed plurality of original images are obtained by performing cutout processing on the plurality of original images respectively, and each processed image in the processed plurality of original images comprises a dish object; the attribute information of the dish object comprises category information to which the dish object belongs;
wherein the target image is an image synthesized based on a dish image component of a plurality of dishes and a pre-obtained image template, and the dish image component of each of the dish image components of the plurality of dishes includes: the processed dish image, name information of the dish object and category information to which the dish object belongs.
20. An electronic device, comprising:
a processor;
a memory for storing a computer program for execution by the processor for performing a method of image processing, the method comprising the steps of:
establishing a corresponding relation between an object and an image, wherein the object is a dish object, and the corresponding relation comprises the corresponding relation between the dish object and the dish image;
in response to a plurality of objects being selected, processing images of the plurality of selected objects to obtain a plurality of processed images based on correspondence between the objects and the images; wherein the processing the images of the selected objects to obtain processed images comprises: carrying out cutout processing on the dish images of the plurality of selected dish objects to obtain a plurality of processed dish images; each processed dish image of the plurality of processed dish images includes a dish object therein;
synthesizing the plurality of processed images and a pre-obtained image template to obtain a synthesized image comprising the plurality of processed images; the composite image is a dish set image formed by a plurality of selected dish objects; the image template is obtained based on the attribute information of the selected objects; the attribute information of the plurality of selected objects comprises category information to which each selected object belongs;
displaying the composite image;
before synthesizing the plurality of processed images and the pre-obtained image template to obtain a synthesized image including the plurality of processed images, the method further includes: and combining the processed dish image, the name information of the dish object and the category information of the dish object to generate a dish image component of each dish.
21. A computer storage medium storing a computer program to be executed by a processor to perform a method of image processing, the method comprising:
establishing a corresponding relation between an object and an image, wherein the object is a dish object, and the corresponding relation comprises the corresponding relation between the dish object and the dish image;
in response to a plurality of objects being selected, processing images of the plurality of selected objects to obtain a plurality of processed images based on correspondence between the objects and the images; wherein the processing the images of the selected objects to obtain processed images comprises: carrying out cutout processing on the dish images of the plurality of selected dish objects to obtain a plurality of processed dish images; each processed dish image of the plurality of processed dish images includes a dish object therein;
synthesizing the plurality of processed images and a pre-obtained image template to obtain a synthesized image comprising the plurality of processed images; the composite image is a dish set image formed by a plurality of selected dish objects; the image template is obtained based on the attribute information of the selected objects; the attribute information of the plurality of selected objects comprises category information to which each selected object belongs;
displaying the composite image;
before synthesizing the plurality of processed images and the pre-obtained image template to obtain a synthesized image including the plurality of processed images, the method further includes: and combining the processed dish image, the name information of the dish object and the category information of the dish object to generate a dish image component of each dish.