CN113869833A - Method and device for sending guide information and electronic equipment - Google Patents


Info

Publication number
CN113869833A
Authority
CN
China
Prior art keywords
image
target
delivery
information
distribution
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111188233.5A
Other languages
Chinese (zh)
Inventor
张鹏
沈国斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lazas Network Technology Shanghai Co Ltd
Original Assignee
Lazas Network Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Lazas Network Technology Shanghai Co Ltd filed Critical Lazas Network Technology Shanghai Co Ltd
Priority to CN202111188233.5A priority Critical patent/CN113869833A/en
Publication of CN113869833A publication Critical patent/CN113869833A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/08 Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G06Q 10/083 Shipping
    • G06Q 50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/10 Services
    • G06Q 50/12 Hotels or restaurants

Landscapes

  • Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Engineering & Computer Science (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • General Physics & Mathematics (AREA)
  • Marketing (AREA)
  • Theoretical Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Development Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The present application provides a method for sending guidance information, comprising the following steps: obtaining a first image sent by a delivery end, where the first image indicates the placement position of a target delivery object at a target delivery address, and the delivery end corresponds to the delivery resource that delivers the target delivery object; obtaining a second image sent by a user side, where the second image is a scene image of the placement position, and the user side corresponds to the user who receives the target delivery object; obtaining guidance information according to the first image and the second image, where the guidance information is used to guide the user, in the user side, to determine the target delivery object at the placement position of the target delivery address; and sending the guidance information to the user side. The method obtains a scene image corresponding to the target delivery address from the image sent by the delivery end, then compares it with the image sent by the user to obtain guidance information that helps the user locate the target object, thereby improving the efficiency with which the user finds it.

Description

Method and device for sending guide information and electronic equipment
Technical Field
The present application relates to the field of computers, and in particular to a method for sending guidance information, a method for determining a target delivery object, and a method for sending an image. It also relates to a guidance information sending apparatus, a target delivery object determining apparatus, an image sending apparatus, and an electronic device.
Background
In daily life, delivery service scenarios such as logistics delivery, express delivery, and takeaway delivery are ubiquitous. For various reasons, a deliverer can often only place the delivery object in a predetermined storage area for the user to pick up. However, a large order volume causes delivery objects to accumulate in the storage area, so it can take the user a long time to find the corresponding delivery object.
To address this problem, the prior art places storage racks or smart cabinets in the storage area. Either approach increases cost, and during peak hours the limited storage space of racks and smart cabinets still fails to solve the problem. How to improve the efficiency with which users find their delivery objects has therefore become an urgent problem to be solved.
Disclosure of Invention
Embodiments of the present application provide a method for sending guidance information, a method for determining a target delivery object, and a method for sending an image, together with a guidance information sending apparatus, a target delivery object determining apparatus, an image sending apparatus, and an electronic device, to solve the problems in the prior art.
The present application provides a method for sending guidance information, comprising the following steps: obtaining a first image sent by a delivery end, where the first image represents the placement position of a target delivery object at a target delivery address, and the delivery end corresponds to the delivery resource that delivers the target delivery object; obtaining a second image sent by a user side, where the second image is a scene image of the placement position, and the user side corresponds to the user who receives the target delivery object; obtaining guidance information according to the first image and the second image, where the guidance information is used to guide the user, in the user side, to determine the target delivery object at the placement position of the target delivery address; and sending the guidance information to the user side.
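The four claimed steps (acquire the first image, acquire the second image, derive guidance, push it to the user side) can be sketched as follows. This is a minimal illustration under assumed names, not the patented implementation; `send_guidance`, `match_fn`, and `send_fn` are all hypothetical stand-ins for unspecified components.

```python
# Hypothetical sketch of the claimed server-side flow: compare the
# courier's photo (first image) with the user's photo (second image),
# derive guidance, and push it to the user side.

def send_guidance(first_image, second_image, match_fn, send_fn):
    """match_fn locates the target delivery object from the first image
    inside the second image; send_fn delivers guidance to the user side."""
    position = match_fn(first_image, second_image)   # derive guidance
    guidance = {"highlight_at": position,
                "text": "Your delivery is at the highlighted spot."}
    send_fn(guidance)                                # send to the user side
    return guidance

# Toy usage: a stub matcher that "finds" the object at pixel (120, 80).
sent = []
result = send_guidance("courier_photo", "user_photo",
                       match_fn=lambda a, b: (120, 80),
                       send_fn=sent.append)
```

The matcher and transport are injected so the sketch stays agnostic about how recognition and delivery of the guidance actually happen.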
Optionally, the obtaining the first image sent by the delivery end includes: the first image is determined from a plurality of images which are sent by the delivery end and used for representing the placing position of the target delivery object at the target delivery address, or the first image is obtained from a video file which is sent by the delivery end and used for representing the placing position of the target delivery object at the target delivery address, or one image which is sent by the delivery end and used for representing the placing position of the target delivery object at the target delivery address is determined as the first image.
Optionally, the method further includes: obtaining a scene video file or a plurality of scene images which are sent by the user side and used for representing the target distribution address; the obtaining of the second image sent by the user side includes: the second image is obtained from the scene video file or a plurality of scene images.
Optionally, the method further includes: obtaining, according to the first image, information on the placement position of the target delivery object at the target delivery address; and the obtaining the second image from the scene video file or the plurality of scene images includes: identifying the location information of the target delivery address represented by each frame image of the scene video file or by each of the plurality of scene images; and obtaining, as the second image, an image whose location information matches the placement position information from the plurality of frame images or the plurality of scene images.
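The frame-selection step above can be sketched as follows: keep the first frame whose recognized location matches the placement position extracted from the first image. All names are illustrative; `locate_fn` stands in for an unspecified location-recognition component.

```python
# Hypothetical sketch: pick the second image out of a user-submitted
# video (or burst of scene photos) by matching each frame's recognized
# location against the placement position from the first image.

def pick_second_image(frames, placement_position, locate_fn):
    for frame in frames:
        if locate_fn(frame) == placement_position:
            return frame
    return None  # no frame shows the placement position

frames = ["building lobby", "shelf by unit 3 door", "elevator"]
second = pick_second_image(frames, "shelf by unit 3 door",
                           locate_fn=lambda f: f)  # identity stub
```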
Optionally, the identification information of the order corresponding to the first image is the same as the identification information of the order corresponding to the second image, and the order corresponding to the first image and the order corresponding to the second image correspond to the target distribution object.
Optionally, the method further includes: obtaining identification information of an order corresponding to the first image sent by the distribution end; and obtaining the identification information of the order corresponding to the second image sent by the user side.
Optionally, the obtaining guidance information according to the first image and the second image includes: identifying the target delivery object from the first image; identifying a plurality of delivery objects to be picked up from the second image; determining, from the plurality of delivery objects to be picked up, the delivery object to be picked up with the highest feature matching degree with the target delivery object as the target delivery object identified from the second image; and obtaining the guidance information according to the position information, in the second image, of the delivery object to be picked up with the highest feature matching degree.
Optionally, the obtaining guidance information according to the first image and the second image includes: identifying the target delivery object from the first image; if a delivery object to be picked up is identified from the second image and the matching degree between its features and the features of the target delivery object meets a preset matching degree qualification condition, determining that this delivery object to be picked up is the target delivery object; and obtaining the guidance information according to the position information of that delivery object to be picked up in the second image.
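Both variants above reduce to comparing object features: either take the best-scoring candidate, or accept a single candidate only if it clears a threshold. A library-free sketch using cosine similarity over toy feature vectors (the patent does not specify a feature representation; a real system would presumably use learned visual features):

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def best_match(target, candidates, threshold=None):
    """candidates: list of (position_in_second_image, feature_vector).
    Returns the position of the highest-matching candidate, or None when
    a threshold is given and no candidate qualifies."""
    pos, feat = max(candidates, key=lambda c: cosine(target, c[1]))
    if threshold is not None and cosine(target, feat) < threshold:
        return None
    return pos

target = (1.0, 0.0, 1.0)                    # features of the target object
candidates = [((40, 10), (0.0, 1.0, 0.0)),  # some other courier's bag
              ((75, 22), (0.9, 0.1, 1.0))]  # closest match
hit = best_match(target, candidates, threshold=0.8)
```

Passing `threshold=None` gives the "highest feature matching degree" variant; passing a value gives the "qualification condition" variant.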
Optionally, the method further includes: obtaining a third image sent by another delivery end, where the third image represents other delivery objects at the placement position, and the other delivery end corresponds to the delivery resource that delivers the other delivery objects; and the obtaining guidance information according to the first image and the second image includes: generating a scene image of the placement position as a generated scene image according to the first image and the third image; identifying the target delivery object from the generated scene image; identifying a plurality of delivery objects to be picked up from the second image; determining, from the plurality of delivery objects to be picked up, the delivery object to be picked up with the highest feature matching degree with the target delivery object as the target delivery object identified from the second image; and obtaining the guidance information according to the position information, in the second image, of the delivery object to be picked up with the highest feature matching degree.
Optionally, the method further includes: obtaining a third image sent by another delivery end, where the third image represents other delivery objects at the placement position, and the other delivery end corresponds to the delivery resource that delivers the other delivery objects; and the obtaining guidance information according to the first image and the second image includes: generating a scene image of the placement position as a generated scene image according to the first image and the third image; identifying the target delivery object from the generated scene image; if a delivery object to be picked up is identified from the second image and the matching degree between its features and the features of the target delivery object meets a preset matching degree qualification condition, determining that this delivery object to be picked up is the target delivery object; and obtaining the guidance information according to the position information of that delivery object to be picked up in the second image.
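The composite-scene step in the two variants above merges the current courier's photo with other couriers' photos of the same spot. A toy sketch, representing each "image" as a mapping from slot to object label purely for illustration (the real system would composite actual images):

```python
# Hypothetical sketch of building a generated scene image of the
# placement position from the first image plus third images from other
# couriers. Dicts stand in for images: slot -> object label.

def generate_scene(first_objects, other_objects_list):
    """Merge objects seen in the first image with those reported by
    other delivery ends; on conflict, the first image wins."""
    scene = {}
    for objects in other_objects_list:
        scene.update(objects)
    scene.update(first_objects)  # first image takes precedence
    return scene

scene = generate_scene({"slot_2": "order-1137 (target)"},
                       [{"slot_1": "red bag"}, {"slot_3": "pizza box"}])
```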
Optionally, the method further includes: obtaining a fourth image sent by the user side, wherein the fourth image is a scene image of other positions of the target distribution address, and identification information of an order corresponding to the fourth image is the same as identification information of an order corresponding to the first image; obtaining the position information of the target delivery objects at the placement positions of the target delivery addresses according to the first images; obtaining other position information of the other position represented by the fourth image; obtaining guiding information for guiding the user to determine the placed position in the user side according to the position information of the placed position and the other position information; and sending the guiding information for guiding the user to determine the placed position in the user side to the user side.
Optionally, if a plurality of delivery objects to be picked up share the highest feature matching degree with the target delivery object, each of them is taken as a highest-matching delivery object, and the method further includes: identifying first background information or a first reference object in the first image; obtaining a first relative position relationship between the target delivery object in the first image and the first background information or first reference object; identifying, in the second image, second background information or a second reference object that is the same as the first background information or first reference object; obtaining, for each highest-matching delivery object, a second relative position relationship with the second background information or second reference object; and taking the highest-matching delivery object whose second relative position relationship is the same as the first relative position relationship as the target delivery object identified from the second image.
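The tie-breaking rule above can be sketched as follows: among equally well-matched candidates, keep the one whose position relative to a shared reference object (say, "right of the fire extinguisher") matches the relation observed in the first image. The `relation` function here is a deliberately crude x-coordinate comparison; the patent leaves the actual relation model unspecified.

```python
# Hypothetical sketch of resolving ties via a shared reference object.

def relation(obj_x, ref_x):
    """Crude relative-position relation against the reference object."""
    return "left-of" if obj_x < ref_x else "right-of"

def break_tie(candidate_xs, ref_x, target_relation):
    """Return the first equally-matched candidate whose relation to the
    reference equals the relation seen in the first image, else None."""
    for x in candidate_xs:
        if relation(x, ref_x) == target_relation:
            return x
    return None

# Two identical-looking bags at x=30 and x=90; the reference object is
# at x=60, and the first image showed the target to its right.
chosen = break_tie([30, 90], ref_x=60, target_relation="right-of")
```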
Optionally, the method further includes: and if the first image has a plurality of delivery objects, identifying a target delivery object from the plurality of delivery objects in the first image according to indication information which is sent by the delivery end and used for indicating the target delivery object in the plurality of delivery objects.
Optionally, the method further includes: and if the first image has a plurality of distribution objects, identifying a target distribution object from the plurality of distribution objects in the first image according to merchant description information or target distribution object description information in an order corresponding to the first image.
The application also provides a method for determining a target distribution object, which is applied to a user side, and the method comprises the following steps: sending a second image to a server, wherein the second image is a scene image of a target distribution object at a placed position of a target distribution address, and the user side corresponds to a user receiving the target distribution object; acquiring guide information returned by the server aiming at the second image; the guidance information is presented in the second image, and the guidance information is guidance information for determining the target delivery object at the placement position of the target delivery address.
Optionally, the obtaining of the guidance information returned by the server for the second image includes: and acquiring guiding information which is returned by the server and is acquired according to a first image and the second image, wherein the first image is an image which is sent by a delivery end and used for representing the placement position of the target delivery object at a target delivery address, and the delivery end corresponds to delivery resources for delivering the target delivery object.
Optionally, the identification information of the order corresponding to the first image is the same as the identification information of the order corresponding to the second image, and the order corresponding to the first image and the order corresponding to the second image correspond to the target distribution object.
Optionally, the method further includes: sending a fourth image to a server, where the fourth image is a scene image of another location of the target distribution address, the client corresponds to a user receiving the target distribution object, identification information of an order corresponding to the fourth image is the same as identification information of an order corresponding to the first image, the first image is an image sent by a distribution end and used for indicating a location where the target distribution object is placed at the target distribution address, and the distribution end corresponds to a distribution resource distributing the target distribution object; acquiring guide information returned by the server aiming at the fourth image; presenting the guide information in the fourth image, the guide information being guide information for determining the placed position.
The application also provides a method for determining a target distribution object, which is applied to a user side, and the method comprises the following steps: the method comprises the steps of obtaining a first image sent by a server, wherein the first image is an image which is collected by a distribution end and used for representing the placement position of a target distribution object at a target distribution address, and the distribution end corresponds to distribution resources for distributing the target distribution object; acquiring a second image, wherein the second image is a scene image of a target distribution object at a placement position of a target distribution address, and the user side corresponds to a user receiving the target distribution object; acquiring guide information according to the first image and the second image; the guidance information is presented in the second image, and the guidance information is guidance information for determining the target delivery object at the placement position of the target delivery address.
Optionally, the obtaining guidance information according to the first image and the second image includes: identifying the target delivery object from the first image; identifying a plurality of delivery objects to be picked up from the second image; determining, from the plurality of delivery objects to be picked up, the delivery object to be picked up with the highest feature matching degree with the target delivery object as the target delivery object identified from the second image; and obtaining the guidance information according to the position information, in the second image, of the delivery object to be picked up with the highest feature matching degree.
Optionally, the obtaining guidance information according to the first image and the second image includes: identifying the target delivery object from the first image; if a delivery object to be picked up is identified from the second image and the matching degree between its features and the features of the target delivery object meets a preset matching degree qualification condition, determining that this delivery object to be picked up is the target delivery object; and obtaining the guidance information according to the position information of that delivery object to be picked up in the second image.
Optionally, the obtaining guidance information according to the first image and the second image includes: obtaining a generated scene image of the placement position sent by the server, where the generated scene image is obtained according to the first image and a third image, the third image represents other delivery objects at the placement position, and the other delivery ends correspond to the delivery resources that deliver the other delivery objects; identifying the target delivery object from the generated scene image of the placement position; identifying a plurality of delivery objects to be picked up from the second image; determining, from the plurality of delivery objects to be picked up, the delivery object to be picked up with the highest feature matching degree with the target delivery object as the target delivery object identified from the second image; and obtaining the guidance information according to the position information, in the second image, of the delivery object to be picked up with the highest feature matching degree.
Optionally, the obtaining guidance information according to the first image and the second image includes: obtaining a generated scene image of the placement position sent by the server, where the generated scene image is obtained according to the first image and a third image, the third image represents other delivery objects at the placement position, and the other delivery ends correspond to the delivery resources that deliver the other delivery objects; identifying the target delivery object from the generated scene image; if a delivery object to be picked up is identified from the second image and the matching degree between its features and the features of the target delivery object meets a preset matching degree qualification condition, determining that this delivery object to be picked up is the target delivery object; and obtaining the guidance information according to the position information of that delivery object to be picked up in the second image.
Optionally, the method further includes: obtaining a fourth image, wherein the fourth image is a scene image of other positions of the target delivery address, and identification information of an order corresponding to the fourth image is the same as identification information of an order corresponding to the first image; acquiring guide information according to the first image and the fourth image; presenting the guide information in the fourth image, the guide information being guide information for determining the placed position.
Optionally, the obtaining guidance information according to the first image and the fourth image includes: obtaining the position information of the target delivery objects at the placement positions of the target delivery addresses according to the first images; obtaining other position information of the other position represented by the fourth image; and obtaining guiding information for guiding the user to determine the placed position in the user side according to the position information of the placed position and the other position information.
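Turning the placement position (from the first image) and the user's current position (from the fourth image) into guidance can be as simple as a relative-direction hint. A hypothetical sketch on 2-D coordinates, which the patent does not prescribe:

```python
# Hypothetical sketch: compose a textual hint from the placement
# position and the user's current position (toy 2-D map coordinates).

def direction_hint(placed_xy, current_xy):
    dx = placed_xy[0] - current_xy[0]
    dy = placed_xy[1] - current_xy[1]
    vert = "ahead" if dy > 0 else "back" if dy < 0 else ""
    horiz = "to the right" if dx > 0 else "to the left" if dx < 0 else ""
    parts = [p for p in (vert, horiz) if p]
    return ("Move " + " and ".join(parts)) if parts \
        else "You are at the placement position."

hint = direction_hint((2, 3), (0, 0))
```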
Optionally, the identification information of the order corresponding to the first image is the same as the identification information of the order corresponding to the second image, and the order corresponding to the first image and the order corresponding to the second image correspond to the target distribution object.
The present application also provides an image sending method applied to a delivery end, the method comprising: obtaining a first image, where the first image represents the placement position of a target delivery object at a target delivery address, and the delivery end corresponds to the delivery resource that delivers the target delivery object; and sending the first image to a server, where the first image is used to generate guidance information, and the guidance information is used to guide a user, in a user side, to determine the target delivery object at the placement position of the target delivery address, the user being the user who receives the target delivery object and the user side corresponding to the user.
The present application also provides a guidance information transmitting apparatus, including: a first image obtaining unit, configured to obtain a first image sent by a delivery end, where the first image is an image indicating a placement position of a target delivery object at a target delivery address, and the delivery end corresponds to a delivery resource that delivers the target delivery object; a second image obtaining unit, configured to obtain a second image sent by a user side, where the second image is the scene image of the placed position, and the user side corresponds to a user that receives the target distribution object; a guiding information obtaining unit, configured to obtain guiding information according to the first image and the second image, where the guiding information is used to guide the user to determine the target distribution object at a placement position of the target distribution address in the user side; and the guiding information sending unit is used for sending the guiding information to the user side.
The present application further provides a device for determining a target distribution object, which is applied to a user side, and the device includes: the second image sending unit is used for sending a second image to the server side, wherein the second image is a scene image of a target distribution object at a placed position of a target distribution address, and the user side corresponds to a user receiving the target distribution object; a guiding information obtaining unit, configured to obtain guiding information returned by the server for the second image; a guidance information presentation unit configured to present the guidance information in the second image, the guidance information being guidance information for determining the target delivery object at a placement position of the target delivery address.
The present application further provides a device for determining a target distribution object, which is applied to a user side, and the device includes: the system comprises a first image obtaining unit, a first image obtaining unit and a second image obtaining unit, wherein the first image obtaining unit is used for obtaining a first image sent by a server side, the first image is an image which is collected by a distribution side and used for representing the placement position of a target distribution object at a target distribution address, and the distribution side corresponds to distribution resources for distributing the target distribution object; a second image obtaining unit, configured to obtain a second image, where the second image is a scene image of a target distribution object at a location where the target distribution object is located, and the user side corresponds to a user that receives the target distribution object; a guide information obtaining unit configured to obtain guide information from the first image and the second image; a guidance information presentation unit configured to present the guidance information in the second image, the guidance information being guidance information for determining the target delivery object at a placement position of the target delivery address.
The present application further provides an image sending apparatus applied to a delivery end, the apparatus including: a first image obtaining unit, configured to obtain a first image, where the first image represents the placement position of a target delivery object at a target delivery address, and the delivery end corresponds to the delivery resource that delivers the target delivery object; and a first image sending unit, configured to send the first image to a server, where the first image is used to generate guidance information, and the guidance information is used to guide a user, in a user side, to determine the target delivery object at the placement position of the target delivery address, the user being the user who receives the target delivery object and the user side corresponding to the user.
The present application further provides an electronic device, comprising: a processor; a memory for storing a computer program for execution by the processor to perform the above method.
The present application also provides a storage medium storing a computer program for execution by a processor to perform the above method.
Compared with the prior art, the method has the following advantages:
the present application provides a method for sending guidance information, comprising the following steps: obtaining a first image sent by a delivery end, where the first image indicates the placement position of a target delivery object at a target delivery address, and the delivery end corresponds to the delivery resource that delivers the target delivery object; obtaining a second image sent by a user side, where the second image is a scene image of the placement position, and the user side corresponds to the user who receives the target delivery object; obtaining guidance information according to the first image and the second image, where the guidance information is used to guide the user, in the user side, to determine the target delivery object at the placement position of the target delivery address; and sending the guidance information to the user side. The method obtains a scene image corresponding to the target delivery address from the image sent by the delivery end, then compares it with the image sent by the user to obtain guidance information that helps the user locate the target object, thereby improving the efficiency with which the user finds it.
Drawings
Fig. 1 is a schematic diagram of a first scenario of a method for sending guidance information according to an embodiment of the present application.
Fig. 2 is a schematic diagram of a second scenario of a method for sending guidance information according to an embodiment of the present application.
Fig. 3 is a schematic diagram of a third scenario of a method for sending guidance information according to an embodiment of the present application.
Fig. 4 is a fourth scenario schematic diagram of a guidance information sending method according to an embodiment of the present application.
Fig. 5 is a schematic diagram of a method for sending guidance information according to a first embodiment of the present application.
Fig. 6 is a schematic diagram of a guidance information sending apparatus according to a second embodiment of the present application.
Fig. 7 is a schematic diagram of a method for determining a target delivery object in the third embodiment of the present application.
Fig. 8 is a schematic view of a target delivery object determination device provided in a fourth embodiment of the present application.
Fig. 9 is a schematic diagram of a method for determining a target delivery object according to a fifth embodiment of the present application.
Fig. 10 is a schematic view of a target delivery object determination device provided in a sixth embodiment of the present application.
Fig. 11 is a schematic diagram of an image transmission method provided in a seventh embodiment of the present application.
Fig. 12 is a schematic diagram of an image transmission apparatus provided in an eighth embodiment of the present application.
Fig. 13 is a schematic diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. However, the present application can be implemented in many ways other than those described herein, and those skilled in the art can make similar modifications without departing from the spirit of the present application; the application is therefore not limited to the specific implementations disclosed below.
In order to more clearly show the method for sending guidance information provided in the embodiment of the present application, an application scenario of the method for sending guidance information provided in the embodiment of the present application is first introduced. The method for sending the guide information is generally applied to take-away delivery scenes and can also be applied to logistics delivery scenes or express delivery scenes and the like.
The application scenario of the guidance information sending method provided in the embodiments of the present application is not specifically limited. The method is described in detail below by taking a takeout delivery scenario as an example.
The execution subject of the guidance information sending method provided by the embodiments of the present application may be a server side. The server side is a computing device that provides data services, such as data processing and data storage, for users; it is typically implemented as a server or a server cluster.
To facilitate an understanding of the present application, the general concepts of the present application will first be described. Please refer to fig. 1, which is a scene diagram illustrating a method for sending guidance information according to an embodiment of the present application. Fig. 1 includes: a distribution side 101, a user side 102, and a service side 103.
The distribution end may be understood as an electronic device corresponding to a natural person in real life, such as a mobile phone used by a rider, or a sending module configured in a robot. The user side may be understood as the electronic device of the user corresponding to the order, such as the user's mobile phone. The delivery resource may be understood as the rider or the robot, for example one provided with a camera module. The target delivery address is the address corresponding to the order, such as the address of an office building. The target delivery object may be understood as the takeout corresponding to the order. The placement position may be understood as the position where the takeout is placed in the takeout storage area of the office building. The first image may be understood as an image, taken by the rider, of the takeout placed in that storage area. The second image may be understood as a scene image, taken by the user, of the takeout's placement position at the delivery address.
First, the distribution end sends a first image to the server side; the first image is a scene image of the position at the target delivery address where the distribution end placed the target object. Then, the user side sends a second image to the server side; the second image is a scene image of that position. Finally, the server side obtains, from the first image and the second image, guidance information for guiding the user to determine the target object.
It should be noted that the order information corresponding to the first image is the same as the order information corresponding to the second image, that is, both the orders are orders for the target object.
For a better illustration of the present application, the following detailed description uses a meal package delivery scenario, specifically a scenario in which the user determines meal package D, delivered by rider D, in a meal package storage area.
First, the server side obtains a preset correspondence according to images sent by riders, wherein the preset correspondence is the relationship between each meal package and its corresponding background.
The preset correspondence may be obtained by the following method.
Method one: the server obtains images sent by a plurality of riders and obtains the preset correspondence according to these images, as shown in fig. 2.
For example, first, rider A, rider B and rider C respectively place meal package A, meal package B and meal package C on the left side of the meal package storage area of an office building. Then, rider A, rider B and rider C each photograph the meal package they delivered. Next, rider A, rider B and rider C upload the captured image A 201, image B 202 and image C 203, respectively, to the server. The server then segments each of image A, image B and image C into a meal package portion and a background portion (everything other than the subject meal package is background): for example, in image A, meal package A is the subject while meal packages B and C form the background; similarly, in image B, meal package B is the subject with meal packages A and C as background, and in image C, meal package C is the subject with meal packages A and B as background. Since meal packages A, B and C are all located at a certain position on the left side of the storage area, the server can obtain an overall scene image 204 for that position from images A, B and C, and by extending this method to every position in the storage area, obtain an overall scene image of the whole meal package storage area. That is, for a given meal package, the corresponding background information can be determined, so that the user can find the sought meal package as quickly as possible.
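The segment-and-merge procedure above can be sketched as follows. This is an illustrative simplification, not the patented implementation: the dictionary-based image records, the `build_correspondence` helper and the package labels are all hypothetical, and real segmentation would operate on pixels rather than on labels.

```python
# Hypothetical sketch: merge per-image (subject, background) pairs, as
# produced by segmenting rider images A/B/C, into one preset correspondence.

def build_correspondence(segmented_images):
    """Map each meal package to the set of background objects seen around it."""
    correspondence = {}
    for img in segmented_images:
        background = correspondence.setdefault(img["subject"], set())
        background.update(img["background"])
    return correspondence

# Images A, B and C uploaded by riders A, B and C (cf. 201-203 in fig. 2),
# already segmented into a subject package and background objects.
images = [
    {"subject": "package_A", "background": ["package_B", "package_C"]},
    {"subject": "package_B", "background": ["package_A", "package_C"]},
    {"subject": "package_C", "background": ["package_A", "package_B"]},
]
relation = build_correspondence(images)
```

Merging from several riders' images, as in method one, simply feeds more records through the same loop, so each package's background set grows as the scene fills in.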
Method two: the server obtains images sent by a single rider and obtains the preset correspondence according to these images.
For example, first, rider A places meal package A on the left side of the meal package storage area of the office building. Then, rider A photographs meal package A from multiple angles, obtaining image A, image B, image C and so on, and uploads them to the server. The server then segments each received image into a meal package portion and a background portion (everything other than the subject meal package is background): for example, in image A, meal package A is the subject while meal package B and the office building entrance form the background; similarly, in image B, meal package B is the subject with meal package C and the road as background, and in image C, meal package C is the subject with meal packages A and B and the wall as background. The server can then obtain an overall scene image for this position from images A, B, C and so on, and by extending the method to every position in the storage area, obtain an overall scene image of the whole meal package storage area.
The image taken by the rider may be a picture or a video.
It should be noted that, in order to obtain a richer and more accurate representation of the meal package storage area, the server may further construct an AR (Augmented Reality) image of the meal package delivery area from the images sent by riders.
Next, the server side obtains a first image taken by rider D. The first image is a scene image, taken by rider D, of meal package D in the meal package storage area, as shown in fig. 3.
It should be noted that the first image 301 sent by rider D may be one or more pictures, or a video.
In specific implementation: in response to a trigger operation of rider D on a target application, the terminal corresponding to rider D displays the interface of the target application, which contains the order information corresponding to meal package D; in response to the rider's trigger operation on the shooting icon of that interface, a shooting interface is displayed; in response to the rider's trigger operation on the confirm icon of the shooting interface, the first image is obtained; if the application is detected to contain a plurality of orders, the orders are displayed; and in response to the rider's confirmation trigger operation on the order corresponding to meal package D among the plurality of orders, the first image is sent to the server as the image corresponding to meal package D.
Alternatively: in response to a trigger operation of rider D on the target application, the terminal corresponding to rider D displays the interface of the target application, which contains the order information corresponding to meal package D; in response to rider D's trigger operation on the order corresponding to meal package D in that interface, the order detail page for meal package D is displayed; in response to rider D's trigger operation on the shooting icon of that order detail page, a shooting interface is displayed; and in response to the rider's trigger operation on the confirm icon of the shooting interface, the first image is obtained and sent to the server as the image corresponding to meal package D. Of course, other third images obtained by riders may also be obtained in the same manner.
Since there may be multiple meal packages in the first image taken by rider D, in order to determine that the first image is an image for meal package D, meal package D may be identified as follows: according to the merchant description information or the meal package description information in the order information corresponding to the first image, or according to indication information provided by rider D, for example a selection operation performed on meal package D.
Then, the server obtains the background information corresponding to meal package D according to the first image and the preset correspondence; the background corresponding to this background information is, for example, the background 303 shown in fig. 3.
Because the preset correspondence is the relationship between the characteristic information of each meal package in the storage area and the background information corresponding to that meal package, the server can obtain the background information corresponding to meal package D from the characteristic information of any recognisable object in the first image.
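The lookup just described can be sketched as follows, under the same hypothetical label-based representation used earlier: any already-known object visible alongside the target lets the server pull in that object's stored scene as the target's background. The function name and data shapes are illustrative assumptions.

```python
# Hypothetical sketch: infer the target package's background from any known
# object recognised in the first image, via the preset correspondence.

def background_for_target(first_image_objects, target, correspondence):
    """Collect the target's background from co-visible known objects."""
    background = set()
    for obj in first_image_objects:
        if obj != target and obj in correspondence:
            background.add(obj)                    # the known object itself
            background.update(correspondence[obj])  # and its stored scene
    background.discard(target)  # the target is never its own background
    return background

# Meal package A appears next to meal package D in the first image, and the
# correspondence already records A's surroundings (cf. the example below).
correspondence = {"package_A": {"package_B", "package_C", "office_door"}}
bg = background_for_target(["package_D", "package_A"], "package_D", correspondence)
```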
For example, besides meal package D, meal package A also appears in the first image, so the server can obtain, from meal package A, the scene information corresponding to it, such as meal packages B and C and the office building entrance around it, and thereby determine the background information of meal package D (meal package A, meal package B, meal package C and the office building entrance).
It should be noted that, since the packaging of meal packages differs from one to another, the meal package features presented in the images can be understood as the shape, size, colour, merchant icon and so on of the packaging, as long as obvious distinguishing features are presented in the images. In addition, some meal packages have special requirements; for example, ice cream must be delivered at low temperature, so such packages necessarily carry special protection, such as an insulated box or a foam box containing an ice pack. The meal package features presented in the images can therefore also be understood as additional special features derived from the special properties of the meal package.
It should be noted that, after rider D sends the first image to the server, meal package D also becomes part of the preset correspondence, yielding an updated preset correspondence.
Of course, while rider D uploads the first image, other riders may upload images of other takeout placed at the same position, namely third images; by the method described above, the scene image of meal package D's current position can be obtained from the first image and the third images.
Next, guidance information is generated.
The guidance information in the present application may be obtained from the first image sent by the rider and the second image sent by the user, described above. The second image is a scene image, taken by the user, of the position that contains meal package D.
In specific implementation: in response to a trigger operation of the user on the target application, the interface of the target application is displayed, which contains the order information corresponding to meal package D; in response to the user's trigger operation on the shooting icon of that interface, a shooting interface is displayed; in response to the user's trigger operation on the confirm icon of the shooting interface, the second image is obtained; if the application is detected to contain a plurality of orders, the orders are displayed; and in response to the user's confirmation trigger operation on the order corresponding to meal package D among the plurality of orders, the second image is sent to the server as the image for determining meal package D.
Alternatively: in response to a trigger operation of the user on the target application, the interface of the target application is displayed, which contains the order information corresponding to meal package D; in response to the user's trigger operation on the order corresponding to meal package D in that interface, the order detail page for meal package D is displayed; in response to the user's trigger operation on the shooting icon of that order detail page, a shooting interface is displayed; and in response to the user's trigger operation on the confirm icon of the shooting interface, the second image is obtained and sent to the server as the image for determining meal package D. Of course, the fourth image obtained by the user may also be obtained in the same manner.
The image described above is the second image when the user happens to photograph a position where a meal package exists. In this case, if only one meal package exists in the second image, or several meal packages exist but only one matches the features of meal package D, that package is taken as the meal package D to be received, and the guidance information is obtained from the position of this package within the second image. If several meal packages exist in the second image and all match the features of meal package D, the package with the highest matching degree is taken as the meal package D to be received, and the guidance information is again obtained from the position of this package within the second image.
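The highest-matching-degree selection above can be sketched as follows. The attribute-based score is a deliberately crude stand-in for real image feature matching, and the function names, threshold and feature keys are all hypothetical.

```python
# Hypothetical sketch: pick, among candidate packages detected in the second
# image, the one whose features best match meal package D's order features.

def match_score(features_a, features_b):
    """Fraction of shared attributes (shape, colour, merchant icon...) that agree."""
    keys = features_a.keys() & features_b.keys()
    if not keys:
        return 0.0
    return sum(features_a[k] == features_b[k] for k in keys) / len(keys)

def pick_target(candidates, target_features, threshold=0.5):
    """Return the best-matching candidate id, or None if no candidate
    reaches the (assumed) qualification threshold."""
    best = max(candidates,
               key=lambda c: match_score(c["features"], target_features),
               default=None)
    if best is None or match_score(best["features"], target_features) < threshold:
        return None
    return best["id"]

candidates = [
    {"id": "pkg1", "features": {"color": "red", "size": "small", "icon": "shopX"}},
    {"id": "pkg2", "features": {"color": "white", "size": "large", "icon": "shopY"}},
]
target = {"color": "white", "size": "large", "icon": "shopY"}
```

Returning `None` below the threshold corresponds to the single-candidate branch in the text: a lone package only counts as meal package D if its features actually qualify.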
However, in real life, the image taken by the user may well contain no meal package at all; this is the fourth image, shown as image 302 in fig. 3. It should be noted that the fourth image taken by the user is still directed at meal package D. In this case, the server side obtains, from the fourth image and the previously obtained background information corresponding to meal package D, guidance information for guiding the user to the position of meal package D.
For example, the server judges the similarity between the road displayed in the fourth image and the background of meal package D (meal package A, meal package B, meal package C and the office building entrance); if the similarity does not reach the similarity threshold, meal package D is not in the fourth image. The server then generates, from the relative position relationship between the road displayed in the fourth image and the background of meal package D, an image containing the guidance information and sends it to the user side, shown as image 304 in fig. 3; the guidance information guides the user to the background position corresponding to meal package D. The guidance information may be a guidance icon such as an arrow, or voice information; this is not limited, as long as the user can be guided to determine the meal package.
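The threshold check plus direction hint can be sketched as below. The flat 2-D coordinates, the similarity value (assumed to come from some upstream scene comparison) and the coarse left/right wording are all illustrative assumptions; a real system would render an arrow like the one in image 304 rather than return text.

```python
# Hypothetical sketch: if the user's photo does not match the target
# background, emit a coarse direction hint toward the background position.

def guidance(user_pos, target_pos, similarity, threshold=0.8):
    """Return a guidance string from a scene-similarity score and positions."""
    if similarity >= threshold:
        return "target in view"
    dx = target_pos[0] - user_pos[0]
    dy = target_pos[1] - user_pos[1]
    horiz = "right" if dx > 0 else "left"
    vert = "ahead" if dy > 0 else "behind"
    return f"move {horiz} and {vert}"
```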
When the user arrives at the background position corresponding to meal package D, the second image displayed on the user side can show confirmation prompt information to inform the user that meal package D is in the current image. The prompt may be text, voice, or a highlighted second image, such as a bold frame or a shaking animation replacing the static display; this is not limited.
To make the user's search more efficient, after the user reaches the background position corresponding to meal package D, the present application can identify each meal package in the image currently being taken by the user and finally identify meal package D. Specifically: the characteristic information of meal package D is compared with the characteristic information of each meal package in the current image, and the package with the highest similarity is the one most similar to meal package D. This most similar package is usually highlighted, for example by circling it, so that it can be distinguished from the other packages and easily recognised. The characteristic information of meal package D is determined according to the user's order information.
Preferably, to let the user search even more efficiently, the meal package storage area can be partitioned, for example left and right by takeout temperature: low-temperature packages such as ice cream and frozen ingredients are placed on the left side of the storage area, while room-temperature packages such as milk tea are placed on the right; partitioning by taste or other attributes is of course also possible. The approximate position of a package in the background can then be determined quickly from its attributes. For example, if meal package D is ice cream and its characteristic information in the background information shows frozen-storage packaging, it can be determined directly that meal package D is on the left side of the storage area, further improving the efficiency of determining meal package D.
It should be further noted that the above process of obtaining the guidance information at the server side can also be executed at the user side. For example, the server side sends the background information corresponding to meal package D to the user side, and the user side obtains the guidance information by comparing the captured second or fourth image with the background information or characteristic information of meal package D. Since this step is similar to the one performed by the server, it is not described further here.
It should be further noted that, to ensure the accuracy of the guidance information, after the user has received meal package D, the related information of meal package D in the storage area needs to be deleted and an updated scene image obtained, as shown in fig. 4, where image 401 is the scene before meal package D is collected and image 402 is the scene after meal package D is collected and its related information deleted.
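With the label-based scene representation assumed throughout these sketches, the 401-to-402 update is a simple deletion; the function name and dictionary shape are hypothetical.

```python
# Hypothetical sketch: drop a collected package from the stored scene so
# later guidance no longer references it (cf. images 401 -> 402 in fig. 4).

def remove_picked_up(scene, package_id):
    """Return a new scene with the collected package's information removed."""
    return {pid: info for pid, info in scene.items() if pid != package_id}

scene_401 = {"package_D": {"pos": "left"}, "package_A": {"pos": "left"}}
scene_402 = remove_picked_up(scene_401, "package_D")
```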
For a better understanding of the present application, the following is described again by way of examples.
First embodiment
A method for sending guidance information is provided in a first embodiment of the present application, and is described below with reference to fig. 5. Since the first embodiment is similar to the scenario embodiment, please refer to the scenario description.
The application provides a method for sending guide information, which comprises the following steps:
step S501, obtaining a first image sent by a delivery end, wherein the first image is an image used for representing the placement position of a target delivery object at a target delivery address, and the delivery end corresponds to delivery resources for delivering the target delivery object;
step S502, a second image sent by a user side is obtained, wherein the second image is the scene image of the placed position, and the user side corresponds to a user receiving the target distribution object;
step S503, obtaining guiding information according to the first image and the second image, where the guiding information is used to guide the user to determine the target distribution object at the location of the target distribution address in the user side;
step S504, sending the guiding information to the user side.
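Steps S501 to S504 can be sketched end to end as follows. The `GuidanceServer` class, its trivial scene check, and the storage layout are all hypothetical scaffolding; the real derivation of guidance is the matching process described in the scenario above.

```python
# Hypothetical sketch of the server-side flow S501-S504.

class GuidanceServer:
    def __init__(self):
        self.images_from_delivery_end = {}  # order id -> first image
        self.images_from_user_end = {}      # order id -> second image
        self.sent = {}                      # guidance pushed per order

    def derive_guidance(self, first, second):
        # Placeholder for S503: report whether the placement position
        # recorded in the first image appears in the user's photographed scene.
        return "in view" if first["position"] in second["scene"] else "not in view"

    def push_to_user(self, order_id, guidance):
        self.sent[order_id] = guidance      # stands in for S504's network send

def send_guidance(server, order_id):
    first_image = server.images_from_delivery_end[order_id]       # S501
    second_image = server.images_from_user_end[order_id]          # S502
    guidance = server.derive_guidance(first_image, second_image)  # S503
    server.push_to_user(order_id, guidance)                       # S504
    return guidance

server = GuidanceServer()
server.images_from_delivery_end["order-1"] = {"position": "left-shelf"}
server.images_from_user_end["order-1"] = {"scene": ["gate", "left-shelf"]}
result = send_guidance(server, "order-1")
```

Keying both image stores by the order id mirrors the requirement, stated later, that the first and second images carry the same order identification information.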
Optionally, the obtaining the first image sent by the delivery end includes: the first image is determined from a plurality of images which are sent by the delivery end and used for representing the placing position of the target delivery object at the target delivery address, or the first image is obtained from a video file which is sent by the delivery end and used for representing the placing position of the target delivery object at the target delivery address, or one image which is sent by the delivery end and used for representing the placing position of the target delivery object at the target delivery address is determined as the first image.
Optionally, the method further includes: obtaining a scene video file or a plurality of scene images which are sent by the user side and used for representing the target distribution address; the obtaining of the second image sent by the user side includes: the second image is obtained from the scene video file or a plurality of scene images.
Optionally, the method further includes: obtaining the information of the placement position of the target delivery object at the target delivery address according to the first image; the obtaining the second image from the scene video file or a plurality of scene images comprises: identifying location information of the target delivery address represented by each image of the plurality of images of the scene video file or each image of the plurality of scene images; obtaining an image whose position information matches the placed position information from the plurality of frame images or the plurality of scene images as the second image.
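The frame-selection option above can be sketched as follows; each frame is assumed to already carry the position recognised from it, which in practice would come from the image-recognition step the claim describes.

```python
# Hypothetical sketch: choose, from the user's video frames or scene images,
# the one whose recognised position matches the placement position.

def select_second_image(frames, placed_position):
    """Return the first frame matching the placement position, else None."""
    for frame in frames:
        if frame["position"] == placed_position:
            return frame
    return None

frames = [{"position": "gate"}, {"position": "left-shelf"}]
chosen = select_second_image(frames, "left-shelf")
```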
Optionally, the identification information of the order corresponding to the first image is the same as the identification information of the order corresponding to the second image, and the order corresponding to the first image and the order corresponding to the second image correspond to the target distribution object.
Optionally, the method further includes: obtaining identification information of an order corresponding to the first image sent by the distribution end; and obtaining the identification information of the order corresponding to the second image sent by the user side.
Optionally, the obtaining guidance information according to the first image and the second image includes: identifying the target delivery object from the first image; identifying a plurality of delivery objects to be picked up from the second image; determining a delivery object to be picked up with the highest feature matching degree with the target delivery object from the plurality of delivery objects to be picked up as the target delivery object identified from the second image; and obtaining the guiding information according to the position information of the distribution object to be obtained with the highest feature matching degree in the second image.
Optionally, the obtaining guidance information according to the first image and the second image includes: identifying the target delivery object from the first image; if a delivery object to be picked up is identified from the second image and the matching degree between the characteristics of the delivery object to be picked up and the characteristics of the target delivery object meets a preset matching degree qualified condition, determining that the delivery object to be picked up is the target delivery object; and acquiring the guide information according to the position information of the distribution object to be obtained in the second image.
Optionally, the method further includes: obtaining a third image sent by other delivery ends, wherein the third image is an image used for representing other delivery objects at the placed position, and the other delivery ends correspond to delivery resources for delivering the other delivery objects; the obtaining of the guidance information according to the first image and the second image includes: generating a scene image of the placed position as a generated scene image according to the first image and the third image; identifying the target delivery object from the generated scene image; identifying a plurality of delivery objects to be picked up from the second image; determining a delivery object to be picked up with the highest feature matching degree with the target delivery object from the plurality of delivery objects to be picked up as the target delivery object identified from the second image; and obtaining the guiding information according to the position information of the distribution object to be obtained with the highest feature matching degree in the second image.
Optionally, the method further includes: obtaining a third image sent by other delivery ends, wherein the third image is an image used for representing other delivery objects at the placed position, and the other delivery ends correspond to delivery resources for delivering the other delivery objects; the obtaining of the guidance information according to the first image and the second image includes: generating a scene image of the placed position as a generated scene image according to the first image and the third image; identifying the target delivery object from the generated scene image; if a delivery object to be picked up is identified from the second image and the matching degree between the characteristics of the delivery object to be picked up and the characteristics of the target delivery object meets a preset matching degree qualified condition, determining that the delivery object to be picked up is the target delivery object; and acquiring the guide information according to the position information of the distribution object to be obtained in the second image.
Optionally, the method further includes: obtaining a fourth image sent by the user side, wherein the fourth image is a scene image of other positions of the target distribution address, and identification information of an order corresponding to the fourth image is the same as identification information of an order corresponding to the first image; obtaining the position information of the target delivery objects at the placement positions of the target delivery addresses according to the first images; obtaining other position information of the other position represented by the fourth image; obtaining guiding information for guiding the user to determine the placed position in the user side according to the position information of the placed position and the other position information; and sending the guiding information for guiding the user to determine the placed position in the user side to the user side.
Optionally, if the number of the to-be-picked delivery objects with the highest feature matching degree with the target delivery object is multiple, the to-be-picked delivery object with the highest feature matching degree with the target delivery object is taken as the delivery object with the highest matching degree, and the method further includes: identifying first background information or a first reference object in the first image; obtaining a first relative position relationship between the target distribution object in the first image and first background information or the first reference object in the first image; identifying second background information or a second reference object in the second image, wherein the second background information or the second reference object is the same as the first background information or the first reference object in the first image; respectively obtaining a second relative position relation between each distribution object to be picked with the highest matching degree and the second background information or the second reference object; and taking the delivery object to be picked up with the highest matching degree in a second relative position relation which is the same as the first relative position relation as the target delivery object identified from the second image.
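The relative-position tie-break above can be sketched as follows, reducing the "first/second relative position relationship" to a coarse left/right relation against a shared reference object; the 2-D coordinates and function names are illustrative assumptions.

```python
# Hypothetical sketch: among equally well-matched candidates in the second
# image, keep the one standing in the same relation to the shared reference
# object as the target did in the first image.

def relative_position(obj_pos, ref_pos):
    """Coarse left/right relation between an object and a reference object."""
    return "left" if obj_pos[0] < ref_pos[0] else "right"

def break_tie(candidates, ref_pos_second, target_relation):
    """Return the candidate whose relation matches the first image's relation."""
    for cand in candidates:
        if relative_position(cand["pos"], ref_pos_second) == target_relation:
            return cand["id"]
    return None

# Two identical-looking packages; the first image showed the target to the
# RIGHT of the reference object (e.g. the office building entrance).
candidates = [{"id": "c1", "pos": (0, 0)}, {"id": "c2", "pos": (5, 0)}]
```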
Optionally, the method further includes: and if the first image has a plurality of delivery objects, identifying a target delivery object from the plurality of delivery objects in the first image according to indication information which is sent by the delivery end and used for indicating the target delivery object in the plurality of delivery objects.
Optionally, the method further includes: and if the first image has a plurality of distribution objects, identifying a target distribution object from the plurality of distribution objects in the first image according to merchant description information or target distribution object description information in an order corresponding to the first image.
Second embodiment
Corresponding to the guidance information sending method provided in the first embodiment of the present application, a second embodiment of the present application further provides a guidance information sending apparatus. Since the apparatus embodiment is substantially similar to the first embodiment of the present application, it is described relatively briefly; for relevant points, please refer to the description of the first embodiment of the present application. The apparatus embodiment described below, as shown in fig. 6, is merely illustrative.
A second embodiment of the present application provides a guidance information transmitting apparatus, including: a first image obtaining unit 601, configured to obtain a first image sent by a delivery end, where the first image is an image indicating a placement position of a target delivery object at a target delivery address, and the delivery end corresponds to a delivery resource that delivers the target delivery object; a second image obtaining unit 602, configured to obtain a second image sent by a user side, where the second image is the scene image of the placed position, and the user side corresponds to a user that receives the target distribution object; a guiding information obtaining unit 603, configured to obtain guiding information according to the first image and the second image, where the guiding information is used to guide the user to determine the target distribution object at the placement position of the target distribution address in the user side; a guiding information sending unit 604, configured to send the guiding information to the user side.
Optionally, the obtaining the first image sent by the delivery end includes: the first image is determined from a plurality of images which are sent by the delivery end and used for representing the placing position of the target delivery object at the target delivery address, or the first image is obtained from a video file which is sent by the delivery end and used for representing the placing position of the target delivery object at the target delivery address, or one image which is sent by the delivery end and used for representing the placing position of the target delivery object at the target delivery address is determined as the first image.
Optionally, the apparatus is further configured to obtain a scene video file or a plurality of scene images sent by the user side and used for representing the target delivery address; the obtaining of the second image sent by the user side includes: the second image is obtained from the scene video file or a plurality of scene images.
Optionally, the apparatus is further configured to obtain, according to the first image, placed position information of the target delivery object at the target delivery address; the obtaining the second image from the scene video file or the plurality of scene images includes: identifying position information of the target delivery address represented by each frame image of the scene video file or each of the plurality of scene images; and obtaining, from the frame images or the plurality of scene images, an image whose position information matches the placed position information as the second image.
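The frame-selection step above can be sketched as follows, assuming each frame carries hypothetical location metadata; "matches" is interpreted here as nearest-within-a-radius, which the disclosure leaves open:

```python
def select_second_image(frames, placed_pos, max_dist=5.0):
    """frames: list of (frame_id, (x, y)) location-tagged frames or scene images.
    placed_pos: (x, y) placed position extracted from the first image.
    Returns the frame_id closest to placed_pos within max_dist metres,
    or None if no frame is close enough."""
    best_id, best_dist = None, float("inf")
    for fid, (x, y) in frames:
        d = ((x - placed_pos[0]) ** 2 + (y - placed_pos[1]) ** 2) ** 0.5
        if d < best_dist:
            best_id, best_dist = fid, d
    return best_id if best_dist <= max_dist else None
```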
Optionally, the identification information of the order corresponding to the first image is the same as the identification information of the order corresponding to the second image, and the order corresponding to the first image and the order corresponding to the second image correspond to the target distribution object.
Optionally, the apparatus is further configured to obtain identification information of an order corresponding to the first image sent by the delivery end; and obtaining the identification information of the order corresponding to the second image sent by the user side.
Optionally, the obtaining guiding information according to the first image and the second image includes: identifying the target delivery object from the first image; identifying a plurality of delivery objects to be picked up from the second image; determining, from the plurality of delivery objects to be picked up, the delivery object to be picked up with the highest feature matching degree with the target delivery object as the target delivery object identified from the second image; and obtaining the guiding information according to position information, in the second image, of the delivery object to be picked up with the highest feature matching degree.
Optionally, the obtaining guiding information according to the first image and the second image includes: identifying the target delivery object from the first image; if one delivery object to be picked up is identified from the second image and the matching degree between features of the delivery object to be picked up and features of the target delivery object meets a preset matching degree qualification condition, determining that the delivery object to be picked up is the target delivery object; and obtaining the guiding information according to position information of the delivery object to be picked up in the second image.
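The two matching strategies above (best-of-many, and a single candidate checked against a qualification threshold) can be sketched as follows. The cosine-similarity metric, the feature vectors, and the 0.9 threshold are illustrative assumptions — the disclosure does not prescribe a particular feature extractor or matching measure:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def best_match(target_feat, candidates):
    """candidates: {candidate_id: feature_vector}.
    Returns (candidate_id, score) of the highest-matching candidate."""
    return max(((cid, cosine(target_feat, f)) for cid, f in candidates.items()),
               key=lambda p: p[1])

def qualifies(target_feat, candidate_feat, threshold=0.9):
    """Single-candidate check against a preset qualification condition."""
    return cosine(target_feat, candidate_feat) >= threshold
```

Real features would typically be embeddings from a detection or re-identification model; the vectors here stand in for those.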
Optionally, the apparatus is further configured to obtain a third image sent by another delivery end, where the third image is an image used to represent that another delivery object is at the placed position, and the other delivery end corresponds to a delivery resource that delivers the other delivery object; the obtaining guiding information according to the first image and the second image includes: generating a scene image of the placed position as a generated scene image according to the first image and the third image; identifying the target delivery object from the generated scene image; identifying a plurality of delivery objects to be picked up from the second image; determining, from the plurality of delivery objects to be picked up, the delivery object to be picked up with the highest feature matching degree with the target delivery object as the target delivery object identified from the second image; and obtaining the guiding information according to position information, in the second image, of the delivery object to be picked up with the highest feature matching degree.
Optionally, the apparatus is further configured to obtain a third image sent by another delivery end, where the third image is an image used to represent that another delivery object is at the placed position, and the other delivery end corresponds to a delivery resource that delivers the other delivery object; the obtaining guiding information according to the first image and the second image includes: generating a scene image of the placed position as a generated scene image according to the first image and the third image; identifying the target delivery object from the generated scene image; if one delivery object to be picked up is identified from the second image and the matching degree between features of the delivery object to be picked up and features of the target delivery object meets a preset matching degree qualification condition, determining that the delivery object to be picked up is the target delivery object; and obtaining the guiding information according to position information of the delivery object to be picked up in the second image.
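The scene-image generation described above merges the target delivery object from the first image with the other delivery objects from the third image. A real system would composite pixels; this hedged sketch instead represents the generated scene as labelled bounding boxes in a shared coordinate frame of the placed position, which is enough to run matching against. All labels and box values are hypothetical:

```python
def generate_scene(first_boxes, third_boxes):
    """Merge object annotations from the first and third images.

    Each argument: {label: (x, y, w, h)} bounding boxes in a shared
    coordinate frame of the placed position. If both images annotate the
    same label, the first image's box (which locates the target delivery
    object) takes precedence."""
    scene = dict(third_boxes)
    scene.update(first_boxes)  # first-image annotations win on conflict
    return scene
```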
Optionally, the apparatus is further configured to obtain a fourth image sent by the user side, where the fourth image is a scene image of another position of the target delivery address, and identification information of an order corresponding to the fourth image is the same as identification information of an order corresponding to the first image; obtain, according to the first image, placed position information of the target delivery object at the target delivery address; obtain other position information of the other position represented by the fourth image; obtain, according to the placed position information and the other position information, guiding information for guiding the user to determine, in the user side, the placed position; and send, to the user side, the guiding information for guiding the user to determine the placed position.
Optionally, if multiple delivery objects to be picked up share the highest feature matching degree with the target delivery object, each of them is taken as a highest-matching delivery object, and the apparatus is further configured to: identify first background information or a first reference object in the first image; obtain a first relative position relationship between the target delivery object and the first background information or the first reference object in the first image; identify second background information or a second reference object in the second image, where the second background information or the second reference object is the same as the first background information or the first reference object in the first image; obtain, for each highest-matching delivery object, a second relative position relationship between that delivery object and the second background information or the second reference object; and take the highest-matching delivery object whose second relative position relationship is the same as the first relative position relationship as the target delivery object identified from the second image.
Optionally, the apparatus is further configured to, if there are multiple delivery objects in the first image, identify a target delivery object from the multiple delivery objects in the first image according to indication information sent by the delivery end and used for indicating the target delivery object in the multiple delivery objects.
Optionally, the apparatus is further configured to, if the first image has a plurality of delivery objects, identify a target delivery object from the plurality of delivery objects in the first image according to merchant description information or target delivery object description information in an order corresponding to the first image.
Third embodiment
A method for determining a target delivery object according to a third embodiment of the present application is described below with reference to fig. 7. Since the third embodiment is similar to the application scenario embodiment described above, please refer to the scenario description for relevant points.
The application provides a method for determining a target distribution object, which is applied to a user side, and the method comprises the following steps:
step S701, sending a second image to a server, where the second image is a scene image of the placed position of a target delivery object at a target delivery address, and the user side corresponds to a user receiving the target delivery object;
step S702, obtaining the guide information returned by the server aiming at the second image;
step S703, displaying the guidance information in the second image, where the guidance information is guidance information for determining the target delivery object at the placement position of the target delivery address.
Optionally, the obtaining of the guidance information returned by the server for the second image includes: and acquiring guiding information which is returned by the server and is acquired according to a first image and the second image, wherein the first image is an image which is sent by a delivery end and used for representing the placement position of the target delivery object at a target delivery address, and the delivery end corresponds to delivery resources for delivering the target delivery object.
Optionally, the identification information of the order corresponding to the first image is the same as the identification information of the order corresponding to the second image, and the order corresponding to the first image and the order corresponding to the second image correspond to the target distribution object.
Optionally, the method further includes: sending a fourth image to a server, where the fourth image is a scene image of another position of the target delivery address, the user side corresponds to a user receiving the target delivery object, identification information of an order corresponding to the fourth image is the same as identification information of an order corresponding to the first image, the first image is an image sent by a delivery end and used to indicate the placed position of the target delivery object at the target delivery address, and the delivery end corresponds to a delivery resource delivering the target delivery object; obtaining guiding information returned by the server for the fourth image; and presenting the guiding information in the fourth image, the guiding information being guiding information for determining the placed position.
Fourth embodiment
Corresponding to the method for determining a target delivery object provided in the third embodiment of the present application, a fourth embodiment of the present application further provides an apparatus for determining a target delivery object. Since the apparatus embodiment is substantially similar to the third embodiment of the present application, it is described relatively briefly; for relevant points, please refer to the description provided for the third embodiment of the present application. The apparatus embodiment described below, as shown in fig. 8, is merely illustrative.
The present application provides an apparatus for determining a target delivery object, applied to a user side, the apparatus including: a second image sending unit 801, configured to send a second image to a server, where the second image is a scene image of the placed position of a target delivery object at a target delivery address, and the user side corresponds to a user receiving the target delivery object; a guiding information obtaining unit 802, configured to obtain guiding information returned by the server for the second image; and a guiding information presenting unit 803, configured to present the guiding information in the second image, the guiding information being guiding information for determining the target delivery object at the placed position of the target delivery address.
Optionally, the obtaining of the guidance information returned by the server for the second image includes: and acquiring guiding information which is returned by the server and is acquired according to a first image and the second image, wherein the first image is an image which is sent by a delivery end and used for representing the placement position of the target delivery object at a target delivery address, and the delivery end corresponds to delivery resources for delivering the target delivery object.
Optionally, the identification information of the order corresponding to the first image is the same as the identification information of the order corresponding to the second image, and the order corresponding to the first image and the order corresponding to the second image correspond to the target distribution object.
Optionally, the apparatus is further configured to send a fourth image to a server, where the fourth image is a scene image of another position of the target delivery address, the user side corresponds to a user receiving the target delivery object, identification information of an order corresponding to the fourth image is the same as identification information of an order corresponding to the first image, the first image is an image sent by a delivery end and used to indicate the placed position of the target delivery object at the target delivery address, and the delivery end corresponds to a delivery resource delivering the target delivery object; obtain guiding information returned by the server for the fourth image; and present the guiding information in the fourth image, the guiding information being guiding information for determining the placed position.
Fifth embodiment
A method for determining a target delivery object according to a fifth embodiment of the present application is described below with reference to fig. 9. Since the fifth embodiment is similar to the application scenario embodiment, please refer to the scenario description above for relevant points.
The application provides a method for determining a target distribution object, which is applied to a user side, and the method comprises the following steps:
step S901, obtaining a first image sent by a server, where the first image is an image collected by a delivery end and used for indicating a placement position of a target delivery object at a target delivery address, and the delivery end corresponds to a delivery resource for delivering the target delivery object;
step S902, obtaining a second image, where the second image is a scene image of the placed position of a target delivery object at a target delivery address, and the user side corresponds to a user receiving the target delivery object;
step S903, acquiring guide information according to the first image and the second image;
step S904, the guidance information is displayed in the second image, and the guidance information is guidance information for determining the target delivery object at the placement position of the target delivery address.
Optionally, the obtaining guiding information according to the first image and the second image includes: identifying the target delivery object from the first image; identifying a plurality of delivery objects to be picked up from the second image; determining, from the plurality of delivery objects to be picked up, the delivery object to be picked up with the highest feature matching degree with the target delivery object as the target delivery object identified from the second image; and obtaining the guiding information according to position information, in the second image, of the delivery object to be picked up with the highest feature matching degree.
Optionally, the obtaining guiding information according to the first image and the second image includes: identifying the target delivery object from the first image; if one delivery object to be picked up is identified from the second image and the matching degree between features of the delivery object to be picked up and features of the target delivery object meets a preset matching degree qualification condition, determining that the delivery object to be picked up is the target delivery object; and obtaining the guiding information according to position information of the delivery object to be picked up in the second image.
Optionally, the obtaining guiding information according to the first image and the second image includes: obtaining a generated scene image of the placed position sent by a server, where the generated scene image of the placed position is obtained according to the first image and a third image, the third image is an image, sent by another delivery end, used to represent that another delivery object is at the placed position, and the other delivery end corresponds to a delivery resource that delivers the other delivery object; identifying the target delivery object from the generated scene image of the placed position; identifying a plurality of delivery objects to be picked up from the second image; determining, from the plurality of delivery objects to be picked up, the delivery object to be picked up with the highest feature matching degree with the target delivery object as the target delivery object identified from the second image; and obtaining the guiding information according to position information, in the second image, of the delivery object to be picked up with the highest feature matching degree.
Optionally, the obtaining guiding information according to the first image and the second image includes: obtaining a generated scene image of the placed position sent by a server, where the generated scene image of the placed position is obtained according to the first image and a third image, the third image is an image, sent by another delivery end, used to represent that another delivery object is at the placed position, and the other delivery end corresponds to a delivery resource that delivers the other delivery object; identifying the target delivery object from the generated scene image; if one delivery object to be picked up is identified from the second image and the matching degree between features of the delivery object to be picked up and features of the target delivery object meets a preset matching degree qualification condition, determining that the delivery object to be picked up is the target delivery object; and obtaining the guiding information according to position information of the delivery object to be picked up in the second image.
Optionally, the method further includes: obtaining a fourth image, wherein the fourth image is a scene image of other positions of the target delivery address, and identification information of an order corresponding to the fourth image is the same as identification information of an order corresponding to the first image; acquiring guide information according to the first image and the fourth image; presenting the guide information in the fourth image, the guide information being guide information for determining the placed position.
Optionally, the obtaining guidance information according to the first image and the fourth image includes: obtaining, according to the first image, placed position information of the target delivery object at the target delivery address; obtaining other position information of the other position represented by the fourth image; and obtaining, according to the placed position information and the other position information, guiding information for guiding the user to determine, in the user side, the placed position.
Optionally, the identification information of the order corresponding to the first image is the same as the identification information of the order corresponding to the second image, and the order corresponding to the first image and the order corresponding to the second image correspond to the target distribution object.
Sixth embodiment
Corresponding to the method for determining a target delivery object provided in the fifth embodiment of the present application, a sixth embodiment of the present application further provides an apparatus for determining a target delivery object. Since the apparatus embodiment is substantially similar to the fifth embodiment of the present application, it is described relatively briefly; for relevant points, please refer to the description provided for the fifth embodiment of the present application. The apparatus embodiment described below, as shown in fig. 10, is merely illustrative.
The present application further provides an apparatus for determining a target delivery object, applied to a user side, the apparatus including: a first image obtaining unit 1001, configured to obtain a first image sent by a server, where the first image is an image collected by a delivery end and used to represent the placed position of a target delivery object at a target delivery address, and the delivery end corresponds to a delivery resource that delivers the target delivery object; a second image obtaining unit 1002, configured to obtain a second image, where the second image is a scene image of the placed position of the target delivery object at the target delivery address, and the user side corresponds to a user receiving the target delivery object; a guiding information obtaining unit 1003, configured to obtain guiding information according to the first image and the second image; and a guiding information presenting unit 1004, configured to present the guiding information in the second image, the guiding information being guiding information for determining the target delivery object at the placed position of the target delivery address.
Optionally, the obtaining guiding information according to the first image and the second image includes: identifying the target delivery object from the first image; identifying a plurality of delivery objects to be picked up from the second image; determining, from the plurality of delivery objects to be picked up, the delivery object to be picked up with the highest feature matching degree with the target delivery object as the target delivery object identified from the second image; and obtaining the guiding information according to position information, in the second image, of the delivery object to be picked up with the highest feature matching degree.
Optionally, the obtaining guiding information according to the first image and the second image includes: identifying the target delivery object from the first image; if one delivery object to be picked up is identified from the second image and the matching degree between features of the delivery object to be picked up and features of the target delivery object meets a preset matching degree qualification condition, determining that the delivery object to be picked up is the target delivery object; and obtaining the guiding information according to position information of the delivery object to be picked up in the second image.
Optionally, the obtaining guiding information according to the first image and the second image includes: obtaining a generated scene image of the placed position sent by a server, where the generated scene image of the placed position is obtained according to the first image and a third image, the third image is an image, sent by another delivery end, used to represent that another delivery object is at the placed position, and the other delivery end corresponds to a delivery resource that delivers the other delivery object; identifying the target delivery object from the generated scene image of the placed position; identifying a plurality of delivery objects to be picked up from the second image; determining, from the plurality of delivery objects to be picked up, the delivery object to be picked up with the highest feature matching degree with the target delivery object as the target delivery object identified from the second image; and obtaining the guiding information according to position information, in the second image, of the delivery object to be picked up with the highest feature matching degree.
Optionally, the obtaining guiding information according to the first image and the second image includes: obtaining a generated scene image of the placed position sent by a server, where the generated scene image of the placed position is obtained according to the first image and a third image, the third image is an image, sent by another delivery end, used to represent that another delivery object is at the placed position, and the other delivery end corresponds to a delivery resource that delivers the other delivery object; identifying the target delivery object from the generated scene image; if one delivery object to be picked up is identified from the second image and the matching degree between features of the delivery object to be picked up and features of the target delivery object meets a preset matching degree qualification condition, determining that the delivery object to be picked up is the target delivery object; and obtaining the guiding information according to position information of the delivery object to be picked up in the second image.
Optionally, the apparatus is further configured to obtain a fourth image, where the fourth image is a scene image of another location of the target delivery address, and identification information of an order corresponding to the fourth image is the same as identification information of an order corresponding to the first image; acquiring guide information according to the first image and the fourth image; presenting the guide information in the fourth image, the guide information being guide information for determining the placed position.
Optionally, the obtaining guidance information according to the first image and the fourth image includes: obtaining, according to the first image, placed position information of the target delivery object at the target delivery address; obtaining other position information of the other position represented by the fourth image; and obtaining, according to the placed position information and the other position information, guiding information for guiding the user to determine, in the user side, the placed position.
Optionally, the identification information of the order corresponding to the first image is the same as the identification information of the order corresponding to the second image, and the order corresponding to the first image and the order corresponding to the second image correspond to the target distribution object.
Seventh embodiment
A seventh embodiment of the present application provides an image sending method, which is described below with reference to fig. 11. Since the seventh embodiment is similar to the application scenario embodiment, please refer to the scenario description above for related points.
The application also provides an image sending method, which is applied to a distribution end, and the method comprises the following steps:
step S1101, obtaining a first image, where the first image is an image indicating the placed position of a target delivery object at a target delivery address, and the delivery end corresponds to a delivery resource that delivers the target delivery object;
step S1102, sending the first image to a server, where the first image is used to generate guiding information, and the guiding information is used to guide a user to determine, in a user side, the target delivery object at the placed position of the target delivery address, where the user is a user who receives the target delivery object, and the user side corresponds to the user.
Eighth embodiment
Corresponding to the image sending method provided in the seventh embodiment of the present application, an eighth embodiment of the present application further provides an image sending apparatus. Since the apparatus embodiment is substantially similar to the seventh embodiment of the present application, it is described relatively briefly; for relevant points, please refer to the description provided for the seventh embodiment of the present application. The apparatus embodiment described below, as shown in fig. 12, is merely illustrative.
An eighth embodiment of the present application further provides an image sending apparatus applied to a delivery end, the apparatus including: a first image obtaining unit 1201, configured to obtain a first image, where the first image is an image indicating the placed position of a target delivery object at a target delivery address, and the delivery end corresponds to a delivery resource that delivers the target delivery object; and a first image sending unit 1202, configured to send the first image to a server, where the first image is used to generate guiding information, and the guiding information is used to guide a user to determine, in a user side, the target delivery object at the placed position of the target delivery address, where the user is a user who receives the target delivery object, and the user side corresponds to the user.
Ninth embodiment
Corresponding to the method embodiments provided above, a ninth embodiment of the present application further provides an electronic device. Since the ninth embodiment is substantially similar to the method embodiments above, it is described relatively briefly; for relevant details, refer to the descriptions of those method embodiments. The ninth embodiment described below is merely illustrative.
Fig. 13 is a schematic diagram of an electronic device provided in an embodiment of the present application.
The electronic device includes: at least one processor 1301, at least one communication interface 1302, at least one memory 1303, and at least one communication bus 1304. Optionally, the communication interface 1302 may be an interface of a communication module, such as an interface of a GSM module. The processor 1301 may be a CPU, an Application-Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement an embodiment of the present invention. The memory 1303 may include high-speed RAM and may also include non-volatile memory, such as at least one disk memory.
The memory 1303 stores a program, and the processor 1301 invokes the program stored in the memory 1303 to execute the method for sending the guidance information according to the embodiment of the present invention.
It should be noted that, for the detailed description of the electronic device provided in the ninth embodiment of the present application, reference may be made to the related description of the foregoing method embodiment provided in the present application, and details are not repeated here.
Tenth embodiment
Corresponding to the method embodiments provided above, a tenth embodiment of the present application further provides a storage medium. Since the tenth embodiment is substantially similar to the method embodiments above, it is described relatively briefly; for relevant details, refer to the descriptions of those method embodiments. The tenth embodiment described below is merely illustrative.
The storage medium stores a computer program that is executed by a processor to perform the methods provided in the above-described embodiments of the present application.
It should be noted that, for a detailed description of the storage medium provided in the tenth embodiment of the present application, reference may be made to the related descriptions of the foregoing method embodiments, and details are not repeated here.
Although the present application has been described with reference to preferred embodiments, they are not intended to limit it. Those skilled in the art can make variations and modifications without departing from the spirit and scope of the present application; therefore, the scope of protection of the present application should be determined by the following claims.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape and magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer readable media do not include transitory computer readable media (transitory media), such as modulated data signals and carrier waves.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein.

Claims (10)

1. A method for transmitting guidance information, comprising:
obtaining a first image sent by a delivery end, wherein the first image represents the placement position of a target delivery object at a target delivery address, and the delivery end corresponds to a delivery resource for delivering the target delivery object;
obtaining a second image sent by a user side, wherein the second image is a scene image of the placement position, and the user side corresponds to a user receiving the target delivery object;
obtaining guidance information according to the first image and the second image, wherein the guidance information is used to guide the user, in the user side, to determine the target delivery object at the placement position of the target delivery address; and
sending the guidance information to the user side.
2. The method of claim 1, wherein obtaining the first image sent by the delivery end comprises: determining the first image from a plurality of images sent by the delivery end that represent the placement position of the target delivery object at the target delivery address; or obtaining the first image from a video file sent by the delivery end that represents the placement position of the target delivery object at the target delivery address; or determining, as the first image, a single image sent by the delivery end that represents the placement position of the target delivery object at the target delivery address.
3. The method of claim 1, further comprising: obtaining a scene video file or a plurality of scene images sent by the user side that represent the target delivery address;
wherein obtaining the second image sent by the user side comprises: obtaining the second image from the scene video file or the plurality of scene images.
4. The method of claim 3, further comprising: obtaining placement position information of the target delivery object at the target delivery address according to the first image;
wherein obtaining the second image from the scene video file or the plurality of scene images comprises:
identifying position information of the target delivery address represented by each frame of the scene video file or by each of the plurality of scene images; and
obtaining, as the second image, an image whose position information matches the placement position information from the frames or the scene images.
5. The method according to claim 1, wherein the identification information of the order corresponding to the first image is the same as the identification information of the order corresponding to the second image, and both orders correspond to the target delivery object.
6. The method of claim 5, further comprising:
obtaining identification information of the order corresponding to the first image sent by the delivery end; and
obtaining identification information of the order corresponding to the second image sent by the user side.
7. The method of claim 1, wherein obtaining guidance information according to the first image and the second image comprises:
identifying the target delivery object from the first image;
identifying a plurality of delivery objects to be picked up from the second image;
determining, from the plurality of delivery objects to be picked up, the delivery object with the highest degree of feature matching to the target delivery object as the target delivery object identified in the second image; and
obtaining the guidance information according to the position information, in the second image, of the delivery object with the highest degree of feature matching.
8. A method for determining a target delivery object, applied to a user side, the method comprising:
sending a second image to a server, wherein the second image is a scene image of the placement position of a target delivery object at a target delivery address, and the user side corresponds to a user receiving the target delivery object;
obtaining guidance information returned by the server for the second image; and
presenting the guidance information in the second image, wherein the guidance information is used to determine the target delivery object at the placement position of the target delivery address.
9. A method for determining a target delivery object, applied to a user side, the method comprising:
obtaining a first image sent by a server, wherein the first image is collected by a delivery end and represents the placement position of a target delivery object at a target delivery address, and the delivery end corresponds to a delivery resource for delivering the target delivery object;
obtaining a second image, wherein the second image is a scene image of the placement position of the target delivery object at the target delivery address, and the user side corresponds to a user receiving the target delivery object;
obtaining guidance information according to the first image and the second image; and
presenting the guidance information in the second image, wherein the guidance information is used to determine the target delivery object at the placement position of the target delivery address.
10. An image sending method, applied to a delivery end, the method comprising:
obtaining a first image, wherein the first image represents the placement position of a target delivery object at a target delivery address, and the delivery end corresponds to a delivery resource for delivering the target delivery object; and
sending the first image to a server, wherein the first image is used to generate guidance information, and the guidance information is used to guide a user, in a user side, to determine the target delivery object at the placement position of the target delivery address, the user being a user who receives the target delivery object and the user side corresponding to the user.
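Claim 7 selects, among the delivery objects detected in the second image, the one whose features best match the target delivery object from the first image. The following is a minimal sketch of that selection step, assuming features have already been extracted as fixed-length vectors; the use of cosine similarity as the "feature matching degree", and all names here, are illustrative choices not mandated by the claims:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def pick_target_object(target_features, candidates):
    """Return (index, score) of the candidate delivery object whose
    features best match the target (claim 7's 'highest degree of
    feature matching'). `candidates` holds one feature vector per
    delivery object detected in the second image."""
    best_idx, best_score = -1, -1.0
    for idx, feats in enumerate(candidates):
        score = cosine_similarity(target_features, feats)
        if score > best_score:
            best_idx, best_score = idx, score
    return best_idx, best_score

# Example: three objects detected in the user's scene image
target = [0.9, 0.1, 0.4]
detected = [[0.1, 0.9, 0.2], [0.88, 0.12, 0.41], [0.5, 0.5, 0.5]]
idx, score = pick_target_object(target, detected)
```

In practice the feature vectors could come from any detector/descriptor pipeline, and the winning candidate's position in the second image would then drive the guidance information of claim 7's final step.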
CN202111188233.5A 2021-10-12 2021-10-12 Method and device for sending guide information and electronic equipment Pending CN113869833A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111188233.5A CN113869833A (en) 2021-10-12 2021-10-12 Method and device for sending guide information and electronic equipment

Publications (1)

Publication Number Publication Date
CN113869833A 2021-12-31

Family

ID=78999183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111188233.5A Pending CN113869833A (en) 2021-10-12 2021-10-12 Method and device for sending guide information and electronic equipment

Country Status (1)

Country Link
CN (1) CN113869833A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117217864A (en) * 2023-09-11 2023-12-12 广东海洋大学 Intelligent machine control method and related equipment
CN117217864B (en) * 2023-09-11 2024-05-10 广东海洋大学 Intelligent machine control method and related equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination