CN111429194B - User track determination system, method, device and server - Google Patents


Info

Publication number
CN111429194B
CN111429194B (application CN201910019396.7A)
Authority
CN
China
Prior art keywords
user
shooting
spatial position
determining
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910019396.7A
Other languages
Chinese (zh)
Other versions
CN111429194A (en)
Inventor
沈飞
姜文晖
汪玲
赵小伟
刘扬
文杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN201910019396.7A
Publication of CN111429194A
Application granted
Publication of CN111429194B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00: Commerce
    • G06Q30/06: Buying, selling or leasing transactions
    • G06Q30/0601: Electronic shopping [e-shopping]
    • G06Q30/0633: Lists, e.g. purchase orders, compilation or processing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Finance (AREA)
  • Signal Processing (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the invention provide a user track determination system, method, device and server. The system comprises a shooting device group and a server. The shooting device group is used for shooting the behavior of a user and sending the shot video stream to the server; it comprises a plurality of shooting devices, and the shooting range of each shooting device at least partially overlaps with that of at least one other shooting device. The server is used for determining the movement track of the user according to the obtained video stream shot by the shooting device group. The system, method, device and server provided by the embodiments of the invention determine the user's movement track, make it convenient to learn the user's behavior, require no active cooperation from the user, and improve the user experience.

Description

User track determination system, method, device and server
Technical Field
The invention relates to the technical field of electronics, and in particular to a user track determination system, method, device and server.
Background
To improve the efficiency of offline checkout queuing and save labor cost, unmanned settlement schemes are increasingly being proposed. The core problem of unmanned settlement is how to efficiently confirm a user's identity during settlement and while goods are taken from the shelves.
In an unmanned retail store, subsequent operations such as settlement can be completed only if the behavior of each user during shopping is known. At present, many schemes identify the user at fixed points based on face recognition in order to determine the user's behavior, but such schemes require the user's active cooperation, and the user experience is poor.
Disclosure of Invention
In view of this, embodiments of the present invention provide a user track determination system, method, device and server, so as to determine the user's movement track and facilitate understanding of user behavior, without requiring the user's awareness, thereby improving the user experience.
In a first aspect, an embodiment of the present invention provides a user trajectory determination system, including: a shooting device group and a server;
the shooting equipment group is used for shooting the behaviors of the user and sending the shot video stream to the server;
the shooting device group comprises a plurality of shooting devices, and the shooting range of each shooting device at least partially overlaps with that of at least one other shooting device;
the server is used for determining the movement track of the user according to the acquired feature identification information of the user and the video stream shot by the shooting equipment group.
In a second aspect, an embodiment of the present invention provides a control system, including: a shooting device group and a server;
the shooting equipment group is used for shooting the behaviors of the user and sending the shot video stream to the server;
the shooting device group comprises a plurality of shooting devices, and the shooting range of each shooting device at least partially overlaps with that of at least one other shooting device;
the server is used for determining the movement track of the user according to the obtained video stream shot by the shooting equipment group and identifying the commodities taken by the user in the movement track; and adding the identification information of the identified commodity into the shopping list of the user.
In a third aspect, an embodiment of the present invention provides a user positioning system, including: a shooting device group and a server;
the shooting equipment group is used for shooting the behaviors of the user and sending the shot images to the server;
the shooting device group comprises a plurality of shooting devices, and the shooting range of each shooting device at least partially overlaps with that of at least one other shooting device;
the server is used for determining the spatial position of the user according to the acquired images shot by the shooting equipment group.
In a fourth aspect, an embodiment of the present invention provides a user trajectory determination method, including:
acquiring video streams obtained by a plurality of shooting devices in a shooting device group shooting a user;
determining the movement track of the user according to the video streams shot by the plurality of shooting devices;
wherein the shooting range of each shooting device in the group at least partially overlaps with that of at least one other shooting device.
In a fifth aspect, an embodiment of the present invention provides a control method, including:
acquiring video streams obtained by a plurality of shooting devices in a shooting device group shooting a user;
determining the movement track of the user according to the video streams shot by the plurality of shooting devices, wherein the shooting range of each shooting device in the group at least partially overlaps with that of at least one other shooting device; identifying the commodities taken by the user along the movement track;
and adding the identification information of the identified commodities to the user's shopping list according to the identity information of the user.
In a sixth aspect, an embodiment of the present invention provides a user positioning method, including:
acquiring images obtained by a plurality of shooting devices in a shooting device group shooting the behavior of a user;
determining the spatial position of the user according to the images shot by the plurality of shooting devices, wherein the shooting range of each shooting device in the group at least partially overlaps with that of at least one other shooting device.
In a seventh aspect, an embodiment of the present invention provides a user trajectory determination apparatus, including:
the first acquisition module is used for acquiring video streams obtained by a plurality of shooting devices in the shooting device group shooting a user;
the track determining module is used for determining the movement track of the user according to the video streams shot by the plurality of shooting devices;
wherein the shooting range of each shooting device in the group at least partially overlaps with that of at least one other shooting device.
In an eighth aspect, an embodiment of the present invention provides a control apparatus, including:
the first acquisition module is used for acquiring video streams obtained by a plurality of shooting devices in the shooting device group shooting a user;
the track determining module is used for determining the movement track of the user according to the video streams shot by the plurality of shooting devices, wherein the shooting range of each shooting device in the group at least partially overlaps with that of at least one other shooting device;
the identification module is used for identifying the commodities taken by the user in the moving track;
and the control module is used for adding the identification information of the identified commodities into the shopping list of the user according to the identity information of the user.
In a ninth aspect, an embodiment of the present invention provides a user positioning apparatus, including:
the second acquisition module is used for acquiring images obtained by a plurality of shooting devices in the shooting device group shooting the behavior of a user;
the positioning module is used for determining the spatial position of the user according to the images shot by the plurality of shooting devices, wherein the shooting range of each shooting device in the group at least partially overlaps with that of at least one other shooting device.
In a tenth aspect, an embodiment of the present invention provides an electronic device, including: a memory and a processor; the memory is configured to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the user trajectory determination method of the fourth aspect.
In an eleventh aspect, an embodiment of the present invention provides an electronic device, including: a memory and a processor; the memory is configured to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the control method of the fifth aspect.
In a twelfth aspect, an embodiment of the present invention provides an electronic device, including: a memory and a processor; the memory is configured to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the user positioning method of the sixth aspect.
In a thirteenth aspect, an embodiment of the present invention provides a computer storage medium for storing a computer program, where the computer program is used to enable a computer to implement the user trajectory determination method according to the fourth aspect when executed.
In a fourteenth aspect, an embodiment of the present invention provides a computer storage medium for storing a computer program, where the computer program is used to enable a computer to implement the control method according to the fifth aspect when executed.
In a fifteenth aspect, an embodiment of the present invention provides a computer storage medium for storing a computer program, where the computer program is used to make a computer implement the user positioning method according to the above-mentioned sixth aspect when executed.
In the embodiments of the invention, the shooting device group comprises a plurality of shooting devices, and the shooting range of each shooting device at least partially overlaps with that of at least one other shooting device. The server can therefore locate the user by the binocular positioning principle and, from the video streams shot by the plurality of devices, determine the movement track formed by the user's successive spatial positions, so that the user's position is tracked. The movement track makes it convenient to accurately determine the user's behavior and provides a basis for subsequent settlement of the shopping process, all without the user's awareness, which improves the user experience.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present invention;
fig. 2 is a schematic diagram of interaction between a shooting device group and a server according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a shooting range of a shooting device group according to an embodiment of the present invention;
fig. 4 is a schematic flowchart of a first embodiment of a user trajectory determination method according to an embodiment of the present invention;
fig. 5 is a schematic flowchart of a second embodiment of a user trajectory determination method according to the present invention;
fig. 6 is a schematic flowchart of a first embodiment of a control method according to the present invention;
fig. 7 is a schematic diagram illustrating a user movement track according to an embodiment of the present invention;
fig. 8 is a schematic flowchart of a first embodiment of a user positioning method according to the present invention;
fig. 9 is a schematic structural diagram of a first embodiment of a user trajectory determination apparatus according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a first electronic device according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of a first control device according to an embodiment of the present invention;
fig. 12 is a schematic structural diagram of a second electronic device according to an embodiment of the present invention;
fig. 13 is a schematic structural diagram of a first user positioning device according to an embodiment of the present invention;
fig. 14 is a schematic structural diagram of a third electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used in the embodiments of the present invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly dictates otherwise; "a plurality of" generally means at least two, but does not exclude the case of at least one.
It should be understood that the term "and/or" as used herein merely describes an association between objects and means that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the objects before and after it are in an "or" relationship.
The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining" or "in response to detecting", depending on the context. Similarly, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined" or "in response to determining" or "when (a stated condition or event) is detected" or "in response to detecting (a stated condition or event)", depending on the context.
It should also be noted that the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a commodity or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such commodity or system. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the commodity or system that comprises the element.
The embodiment of the invention provides a user track determining system, which can construct a shooting equipment network through a plurality of shooting equipment and perform space positioning and track determination of a user in real time. The embodiment of the invention can be applied to any shopping scene, and particularly can be applied to unmanned retail stores.
The user trajectory determination system provided by the embodiments of the invention may include: a shooting device group and a server. The shooting device group is used for shooting the behavior of the user and sending the shot video stream to the server; it comprises a plurality of shooting devices, and the shooting range of each shooting device at least partially overlaps with that of at least one other shooting device. The server is used for determining the movement track of the user according to the obtained video stream shot by the shooting device group.
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present invention. As shown in fig. 1, a store may be provided with shelves on which products are placed, and a user can freely select the products on the shelves. In order to locate the user, to know the behavior of the user in the store, the movement track of the user in the store needs to be determined, and the position of the user needs to be tracked. Assuming that the ceiling of the store is 4 meters from the floor, the group of photographing apparatuses is disposed at a height of about 4 meters from the floor.
The shooting device group can comprise a plurality of shooting devices, and the shooting devices can be used for shooting the behaviors of the users and sending the shot video streams to the server for processing. Each photographing apparatus may communicate with the server in a wireless or wired manner. The server may be located in a store or may be located anywhere else.
The shooting device may include a camera or a video camera, which may be used to shoot video.
Alternatively, the server may be a physical server or a cloud server provided by a cloud computing platform, and the like.
Fig. 2 is a schematic diagram of interaction between a shooting device group and a server according to an embodiment of the present invention. As shown in fig. 2, each photographing device may transmit the photographed video stream to the server, and the server may determine the movement track of the user according to the video streams photographed by the respective photographing devices.
The setting position and the orientation angle of the photographing apparatus and the like may be set according to actual circumstances. In order to ensure that all the commodities and users in the store are monitored, the shooting range of the shooting device group can include the whole storefront. For example, if the storefront is a rectangular store of 100 × 10 meters, the imaging range of the imaging device group should cover the area of 100 × 10 meters. Since the main behavior of the user is to shop for goods while the user is shopping in the store, and the goods are disposed in the shelves, the coverage of at least part of the shooting devices in the shooting device group optionally includes the shelf disposition area in the store.
Optionally, in order to better implement the positioning and track determination for the user, the shooting ranges of different shooting devices may have a certain overlap. Fig. 3 is a schematic view of a shooting range of a shooting device group according to an embodiment of the present invention. The shooting range of the shooting apparatus is indicated by dashed-dotted oblique lines in fig. 3. As shown in fig. 3, in a rectangular store, one shooting device may be arranged at intervals of a predetermined distance in the length direction of the store, and the shooting ranges of two adjacent shooting devices have a certain overlap. It should be noted that fig. 3 is a schematic diagram of a cross section of a store, and fig. 3 is a schematic diagram of a shooting range of a possible shooting device set by way of example only, and should not be taken as a limitation to the present invention.
In an alternative implementation, the setting of the group of shooting devices needs to ensure that: when the user is in any position in the store, the user is located in the shooting range of at least two shooting devices, and the requirement can be simply embodied as follows: any point on a horizontal plane 2 meters high in the store is within the shooting range of at least two shooting devices, so that the spatial position of the user at each moment can be accurately determined.
In this implementation, assume the user's head height is 2 meters (which leaves some margin over the actual situation); the overlap of two adjacent shooting devices should then satisfy the following condition: on a horizontal plane at a height of approximately 2 meters, the shooting ranges of the two devices coincide by 50%. In this case, at least 50% of the shooting ranges of two adjacent shooting devices overlap at the user's head height.
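The 50% overlap condition can be sanity-checked with elementary geometry. The sketch below assumes idealized straight-down cameras with circular footprints and an assumed 45-degree half field of view (the description gives only the 4-meter ceiling and 2-meter head height); it computes the footprint radius at head height and the largest camera spacing along an aisle that keeps every interior point in view of at least two cameras.

```python
import math

def footprint_radius(cam_height_m, plane_height_m, half_fov_deg):
    # Radius of the circular area a straight-down ceiling camera covers
    # on a horizontal plane at plane_height_m.
    return (cam_height_m - plane_height_m) * math.tan(math.radians(half_fov_deg))

def max_spacing_for_double_coverage(radius_m):
    # Along a straight aisle, adjacent footprints of radius r each span
    # 2r; a 50% overlap of that span means the spacing equals r, and
    # every interior point then lies inside at least two footprints.
    return radius_m

# Scenario from the description: 4 m ceiling, 2 m head-height plane.
# The 45-degree half field of view is an assumed value, not from the patent.
r = footprint_radius(4.0, 2.0, 45.0)
print(round(r, 2), round(max_spacing_for_double_coverage(r), 2))
```

With these assumed numbers each camera covers a 2-meter radius at head height, so cameras spaced at most 2 meters apart keep the whole aisle doubly covered.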
Since the user is always within the shooting range of at least two shooting devices, the user can be spatially positioned using the binocular shooting device principle. Specifically, the user's key points in each image may be detected by an algorithm such as OpenPose, and the spatial position of each key point may then be determined according to the binocular shooting device principle.
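The binocular shooting device principle referred to here is ordinary stereo triangulation. As an illustrative sketch only (the description gives no camera parameters, and real overhead cameras would first need calibration and rectification; the intrinsics below are invented), a keypoint matched between two rectified views can be back-projected like this:

```python
def stereo_depth(f_px, baseline_m, x_left_px, x_right_px):
    # Rectified binocular pair: depth Z = f * b / disparity.
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("matched point must have positive disparity")
    return f_px * baseline_m / disparity

def keypoint_position(f_px, cx_px, cy_px, baseline_m,
                      x_left_px, x_right_px, y_px):
    # Back-project a keypoint (e.g. an OpenPose joint matched in both
    # views) to metric (X, Y, Z) in the left camera's frame.
    z = stereo_depth(f_px, baseline_m, x_left_px, x_right_px)
    x = (x_left_px - cx_px) * z / f_px
    y = (y_px - cy_px) * z / f_px
    return x, y, z

# Hypothetical intrinsics: f = 1000 px, principal point (500, 400),
# 0.5 m baseline; a joint seen at x=600 (left) and x=500 (right).
print(keypoint_position(1000.0, 500.0, 400.0, 0.5, 600.0, 500.0, 400.0))
```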
In another alternative implementation, the arrangement of the shooting device group only needs to ensure that the shooting ranges of two adjacent shooting devices have a certain degree of overlap, not necessarily more than 50%, so that position tracking of the user is maintained when the user moves from the shooting range of one device into that of another.
Specifically, in an area within the shooting range of only one shooting device, the user's key points can be computed by an algorithm such as OpenPose and tracked through the video stream; when the user enters an area within the shooting ranges of two devices, the user's key points can be matched between the images shot by the two devices using the binocular shooting device principle, so that position tracking is maintained across the transition area.
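The description leaves the tracking step open-ended. A minimal stand-in, assuming triangulated per-frame floor-plane positions, is greedy nearest-neighbour data association, which naturally preserves a user's identity as they cross from one camera's range into another's:

```python
import math

def step_tracks(tracks, detections, max_dist_m=0.8, next_id=0):
    # tracks: user id -> last known (x, y); detections: this frame's
    # positions (each user appears once after cross-camera matching).
    # Each existing track grabs the nearest unmatched detection within
    # max_dist_m metres; leftover detections open new tracks.
    updated, unmatched = {}, list(detections)
    for uid, (px, py) in tracks.items():
        if not unmatched:
            break
        dist, best = min((math.hypot(x - px, y - py), (x, y))
                         for x, y in unmatched)
        if dist <= max_dist_m:
            updated[uid] = best
            unmatched.remove(best)
    for det in unmatched:
        updated[next_id] = det
        next_id += 1
    return updated, next_id
```

The 0.8-meter gate and the greedy matching are illustrative choices; a production tracker would use motion prediction and globally optimal assignment.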
Those skilled in the art will appreciate that the number, positions, etc. of the shooting devices may be adjusted according to actual needs. For example, when the store is circular, a plurality of shooting devices may surround the center of the store. As another example, parameters such as the number of shooting devices and the degree of overlap of their shooting ranges may be adjusted: more devices or higher overlap position the user more accurately but require more computation, while fewer devices or lower overlap compute faster but may be less accurate.
When the user is tracked, the user can be tracked by using any existing video processing mode, which is not limited in the embodiment of the present invention.
Once the user's position is tracked, the user's movement track in the store can be determined, the user entering the store can be associated with the user leaving it, and functions such as settlement and path playback can be provided for the user.
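Given per-moment positions, the movement track is simply their time-ordered sequence, from which playback and basic analytics follow directly. A small sketch, where the (t, x, y) sampling format is an assumption:

```python
import math

def path_length(samples):
    # samples: iterable of (t, x, y) position fixes for one user.
    # Sort by time and sum straight-line segments, e.g. as a statistic
    # to show alongside path playback.
    pts = sorted(samples)
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (_, x1, y1), (_, x2, y2) in zip(pts, pts[1:]))
```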
In order to better track the user, an identity detection device may also be utilized to detect the identity of the user when the user enters the store. Specifically, the identity detection device may detect feature identification information of the user and send the feature identification information to the server, and the server may track the position of the user according to the obtained feature identification information.
As shown in fig. 1, the identity detection device may be provided at an entrance gate of a store entrance, and may include at least one of: the acquisition device is used for shooting a face image of a user and sending the face image to the server; the two-dimensional code scanning device is used for scanning the two-dimensional code presented by the user and sending a scanning result to the server; the RFID reading device is used for reading the RFID label of the user terminal and sending the reading result to the server; and so on.
Correspondingly, the feature identification information includes at least one of the following: the face image, the scanning result of the two-dimensional code, the reading result of the RFID tag, and the like.
The acquisition device, the two-dimensional code scanning device and the RFID reading device can be directly communicated with a server or communicated with the entrance gate, and the entrance gate sends the identity detection information acquired by the devices to the server.
After the server acquires the identity detection information, the identity information of the user can be determined through the identity detection information, and the identity information can be any information capable of indicating the identity of the user, such as the name, the mobile phone number, the identification number, the user account number, the bank card number and the like of the user.
Specifically, when the user passes through the entrance gate, the user can face the acquisition device so that it captures a face image; the acquisition device sends the captured face information to the server, the server recognizes the face to confirm the user's identity, and then the server controls the entrance gate to open and allow the user into the store.
Or when the user passes through the entrance gate, the two-dimensional code corresponding to the account of the user can be found from the user terminal such as a mobile phone, the two-dimensional code is displayed, the two-dimensional code scanning device scans the two-dimensional code and then sends the scanning result to the server, the server determines the account of the user according to the scanning result of the two-dimensional code, and then the entrance gate is controlled to be opened to allow the user to enter the store.
Or when the user passes through the entrance gate, an article with an RFID tag, such as a mobile phone or a shopping card, can be close to the RFID reading device, the RFID reading device reads a result and then sends the result to the server, and the server can identify the identity of the user and open the gate to allow the user to enter.
In other alternative embodiments, the user may also be allowed to input identity information by himself, for example, an input device may be provided at the entry gate, the user may input his account and password on the input device, the input device sends the account and password of the user to the server, and the server opens the gate when determining that the account and password of the user are correct.
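Whatever the credential (a face-match key, a two-dimensional code scan result, an RFID read, or a typed account), the server-side step is the same: resolve it to a user account and decide whether to open the gate. A hypothetical sketch, in which the registry layout and return values are invented for illustration:

```python
def gate_decision(credential, registry):
    # registry: credential string -> user account, e.g. "qr:..." for a
    # scanned two-dimensional code or "rfid:..." for a tag read.
    account = registry.get(credential)
    return ("open", account) if account else ("deny", None)
```

Usage: the server keeps one registry regardless of which identity detection device produced the credential, so adding a new device type does not change the gate logic.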
The shooting range of the shooting device group may cover the above identity detection device. When the user confirms identity at the entry gate, the server can determine the user's location from the pictures shot by the shooting device group, associate the identity information with that location, and then track the user's position according to the identity information and the video stream shot by the shooting device group, so as to obtain the user's movement track.
Specifically, determining the user's track from the identity information and the video stream shot by the shooting device group may consist in the server determining, at each moment, the spatial position of each key point and the identity information of the user to which each key point belongs.
Wherein the key points may include one or more of: nose, middle of shoulder, right elbow, right hand, left shoulder, left elbow, left hand, right hip, right knee, right foot, left hip, left knee, left foot, etc. Optionally, when the key points of the user are identified, the position of the hand can be identified at least, so that the identification of the commodity taken by the hand is facilitated.
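In an OpenPose-style representation, the key points listed above can be carried as a name-to-position mapping, from which the hand positions needed for product association are selected. A sketch under that assumption (the mapping format is not specified in the description):

```python
KEYPOINTS = (
    "nose", "mid_shoulder", "right_elbow", "right_hand", "left_shoulder",
    "left_elbow", "left_hand", "right_hip", "right_knee", "right_foot",
    "left_hip", "left_knee", "left_foot",
)

def hand_positions(pose):
    # pose: keypoint name -> (x, y, z); returns only the hand joints,
    # the minimum the description says must be identified so that the
    # commodity taken by the hand can be recognized.
    return {k: pose[k] for k in ("left_hand", "right_hand") if k in pose}
```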
The user track determination system described above can determine the user's movement track and track the user's position through it. In the shopping scenario of an unmanned retail store, the commodities taken by the user along the movement track can be identified based on that track, enabling commodity purchase and settlement and thereby achieving unattended checkout. Accordingly, an embodiment of the present invention further provides a control system, which may include: a shooting device group and a server;
the shooting equipment group is used for shooting the behaviors of the user and sending the shot video stream to the server;
the shooting device group comprises a plurality of shooting devices, and the shooting range of each shooting device at least partially coincides with the shooting range of at least one other shooting device;
the server is used for determining the movement track of the user according to the acquired video stream shot by the shooting equipment group; and identifying the commodities taken by the user in the moving track, and adding the identification information of the commodities obtained by identification to the shopping list of the user.
Specifically, the server may identify the commodity taken by the user through the shooting devices in the shooting device group, or in other manners.
For example, a shelf shooting device may be disposed on a shelf on which commodities are placed. The shelf shooting device shoots the user and sends the shot video stream to the server, and the server identifies the commodity's identification information from that video stream. One or more shelves may be arranged in the store, each provided with a shelf shooting device, and the shelf shooting devices communicate with the server respectively. When the user takes a commodity, both the user and the commodity appear in the images shot by the shelf shooting device, so the server can determine from those images the identification information of the commodity taken.
The identification information may be any information capable of identifying the commodity, for example, barcode information of the commodity or SKU (Stock Keeping Unit) information of the commodity.
During shopping, the server can identify the commodity taken by the user by processing images in the video stream. Optionally, every frame of the video stream may be processed, or only a subset of frames may be extracted and processed. For example, 10 frames may be extracted per second; the identification information of the commodity in each frame is determined, and the identification information corresponding to that second is then determined from the processing results of the 10 frames, for example by a weighted vote over the frames.
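The per-second aggregation described above can be sketched as a weighted vote (a hedged illustration; the function name and the treatment of unrecognized frames as `None` are assumptions, not part of the embodiment):

```python
from collections import Counter

def vote_product_id(per_frame_ids, per_frame_weights=None):
    """Aggregate per-frame recognition results (e.g. 10 frames sampled
    from one second of video) into one product ID by weighted vote.
    Frames where no product was recognized are passed as None and ignored."""
    weights = per_frame_weights or [1.0] * len(per_frame_ids)
    tally = Counter()
    for pid, weight in zip(per_frame_ids, weights):
        if pid is not None:
            tally[pid] += weight
    return tally.most_common(1)[0][0] if tally else None
```

For example, `vote_product_id(["A", "A", "B", None])` yields `"A"`; per-frame confidence scores can be supplied as the weights.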
There may be various implementation methods for determining the identification information of the commodity taken by the user according to the image. Optionally, the position information and the identification information of each commodity in the image and the position information of the user hand may be determined, and the identification information of the commodity taken by the user may be determined according to the position information of the commodity and the position information of the user hand in the image.
Alternatively, the position of the hand in the image may be recognized first, and commodities within a certain range around the hand may then be searched for based on that position. If no commodity exists within that range, the user is considered not to be holding any commodity; if a commodity exists, it can be recognized and its identification information determined.
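This hand-first search can be sketched as a nearest-commodity lookup within a radius (an illustration only; the function name, the 80-pixel radius, and the flat list of detections are assumptions):

```python
import math

def product_near_hand(hand_xy, detections, radius=80.0):
    """Return (product_id, position) of the detected commodity closest to
    the hand within `radius` pixels, or None when no commodity is close
    enough, which is interpreted as the user not holding anything.
    `detections` is a list of (product_id, (x, y)) image detections."""
    best, best_dist = None, radius
    for pid, (px, py) in detections:
        dist = math.hypot(px - hand_xy[0], py - hand_xy[1])
        if dist <= best_dist:
            best, best_dist = (pid, (px, py)), dist
    return best
```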
In addition, the control system can also comprise identity detection equipment which is used for detecting the characteristic identification information of the user and sending the characteristic identification information to the server;
the server specifically identifies the user according to the acquired feature identification information of the user, and determines the movement track of the user according to the video stream shot by the shooting equipment group.
The specific implementation of the identity detection device can be referred to above, and is not described herein again.
The identity detection device may be provided at the entrance gate of a store. When a user enters the store, the server first confirms the user's identity information at the entrance gate. The shooting device group then continuously shoots the user and sends the shot video streams to the server; the server globally tracks the user from the video streams, identifies the commodities the user takes, and adds their identification information to the user's shopping list.
For example, all the commodities in the image may be recognized first, the position information of each commodity and the identification information of each commodity are obtained, then the position information of the hand of the user is determined, if the position of a certain commodity coincides with the position of the hand of the user or the distance between the certain commodity and the hand of the user is smaller than a certain value, the commodity is considered to be the commodity taken by the user, and correspondingly, the identification information corresponding to the commodity is the identification information of the commodity taken by the user.
In this way, a commodity can be associated with a hand, determining which commodity the hand is holding; through position tracking, a hand can in turn be associated with a user's identity information. The commodity's identification information can thus be linked to the identity information, making it possible to determine which user took which commodity.
After the identification information of the commodity taken by the user is determined, the identification information of the commodity can be added to a shopping list corresponding to the user. Every time one commodity is taken, the shopping list has one more commodity identification information.
An exit gate may also be provided at the store exit. Correspondingly, the server may be further configured to: the user's shopping list is settled as the user passes through an exit gate. Specifically, the server may perform payment settlement according to identification information of goods in the shopping list.
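Settlement over the shopping list can be sketched as summing unit prices over the accumulated identifiers (hedged: the function name and price table are assumptions, and real settlement would also involve payment processing):

```python
def settle(shopping_list, price_table):
    """Total due for a shopping list: the list holds one identification
    entry (e.g. SKU) per commodity taken, so duplicates are counted."""
    return sum(price_table[sku] for sku in shopping_list)
```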
Optionally, when the user passes through the exit gate, the exit gate may send out-of-store information to the server, indicating that the user is about to exit or has already exited the store. The out-of-store information may include the user's identity information, which the exit gate may determine by, for example, scanning a two-dimensional code presented by the user or performing face recognition. After receiving the out-of-store information, the server can settle the commodities purchased by the user.
Of course, the server may also learn that the user has left the store in other ways, for example by tracking the user's location: if the user is detected passing through the exit gate, the user is determined to have left the store and the purchased commodities can be settled.
To sum up, the user trajectory determination system provided by the embodiment of the present invention may include an identity detection device, a shooting device group, a server, and the like. The server can identify the user's identity information via the identity detection device and associate it with the user's spatial position. The shooting device group comprises a plurality of shooting devices, the shooting range of each at least partially coinciding with that of at least one other device, so the server can track the user's position from the video streams shot by the plurality of shooting devices, determine the user's movement track, and learn the user's behavior during shopping, providing a basis for subsequent settlement. The user perceives nothing during the whole tracking process, which improves shopping efficiency and user experience.
In one or more embodiments above, the user's movement track is determined from the video streams of user behavior shot by the shooting device group, and the user's spatial position can be determined from image frames captured at the same moment in each of those video streams. Thus, as yet another embodiment, the present invention also provides a user positioning system, which may include: a shooting device group and a server;
the shooting equipment group is used for shooting the behaviors of the user and sending the shot images to the server;
the shooting device group comprises a plurality of shooting devices, and the shooting range of each shooting device at least partially coincides with the shooting range of at least one other shooting device;
the server is used for determining the spatial position of the user according to the acquired images shot by the shooting equipment group.
Determining the user's spatial position from the images shot by the shooting device group may specifically be implemented using the binocular positioning principle. It is the same as determining the user's spatial position from same-moment image frames of the video streams in the user trajectory determination system described above, and is not repeated here.
Optionally, the user positioning system may further include:
the identity detection equipment is used for detecting the characteristic identification information of the user and sending the characteristic identification information to the server;
the server specifically identifies the user according to the acquired feature identification information of the user, and determines the spatial position of the user according to the image shot by the shooting equipment group.
The identity detection device comprises at least one of:
the acquisition device is used for shooting a face image of a user and sending the face image to the server;
the two-dimensional code scanning device is used for scanning the two-dimensional code presented by the user and sending a scanning result to the server;
the RFID reading device is used for reading the RFID label of the user terminal and sending the reading result to the server;
correspondingly, the feature identification information includes at least one of the following: the face image, the scanning result of the two-dimensional code and the reading result of the RFID label.
The specific description of the identity detection device may refer to the above description, and is not repeated herein.
The following describes implementation of the method provided in the embodiment of the present invention with reference to the following method embodiment and accompanying drawings. In addition, the sequence of steps in each method embodiment described below is only an example and is not strictly limited. The method in the embodiments of the present invention can be implemented based on the system described above.
Fig. 4 is a schematic flowchart of a first embodiment of a user trajectory determination method according to an embodiment of the present invention. The execution subject of the method in the embodiment of the present invention may be the server in the foregoing embodiment. As shown in fig. 4, the user trajectory determining method in this embodiment may include:
step 401, acquiring a video stream of a user shot by a plurality of shooting devices in a shooting device group.
Step 402, determining the movement track of the user according to the video streams shot by the plurality of shooting devices.
Wherein the shooting range of each shooting device in the shooting device group at least partially coincides with the shooting range of at least one other shooting device.
For an application scenario and a specific implementation process of the method provided by the embodiment of the present invention, reference may be made to the foregoing embodiment, which is not described herein again. How to determine the movement locus of the user from the video streams photographed by the plurality of photographing apparatuses is described in detail below.
Optionally, determining the movement track of the user according to the video streams shot by the multiple shooting devices may include: extracting images at the same moment from each video stream, and determining the spatial position of the key point of the user at the moment according to the extracted images; and determining the movement track of the user according to the spatial positions of the key points of the user at a plurality of moments.
Specifically, before tracking the user's position through the plurality of shooting devices, the shooting devices in the shooting device group may be calibrated in advance. Optionally, extrinsic calibration may be implemented by the checkerboard method or the like: a physical coordinate system of the shooting devices is constructed, and parameters such as the position and angle of each shooting device in the group are determined. After calibration, picture time synchronization may be performed across the shooting devices.
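Once the extrinsics are calibrated, each device maps the shared physical coordinate system to its own pixels through the standard pinhole model (a sketch under assumed notation: K for intrinsics, R and t for the calibrated rotation and translation; this is not the embodiment's specific calibration procedure):

```python
import numpy as np

def project(K, R, t, X_world):
    """Project a 3-D point in the physical coordinate system into pixel
    coordinates of one calibrated shooting device (pinhole model)."""
    X_cam = R @ X_world + t   # world -> camera coordinates via extrinsics
    x = K @ X_cam             # camera -> homogeneous pixel coordinates
    return x[:2] / x[2]       # perspective division
```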
After the shooting device is calibrated and synchronized, the position of the user can be tracked through the video stream shot by the shooting device to determine the moving track.
Extracting images at the same moment from each video stream and determining, from the extracted images, the spatial position of the user's key points at that moment may specifically be:
extracting images at the same moment from each video stream, and determining, from the extracted images, the spatial position of the user's key points at that moment relative to the physical coordinate system of the shooting devices.
For example, the image taken by each photographing apparatus at the time T0 may be found, and assuming that there are N photographing apparatuses, there are N images at the time T0, from which the spatial position of the user at the time T0 may be determined. Similarly, for the time T1, N images taken by N photographing apparatuses at the time T1 can be found, and the spatial position of the user at the time T1 can be determined by the N images.
By analogy, the spatial positions of the users at the time points of the period in the store can be determined, and then the spatial positions of the users at the time points are connected in series to obtain the movement track of the users in the store.
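Chaining the per-instant positions into a track is then a sort by timestamp (a trivial sketch; the data layout is an assumption):

```python
def build_trajectory(positions_by_time):
    """Connect a user's reconstructed spatial positions in time order to
    form the movement track. `positions_by_time` maps a timestamp
    (e.g. T0, T1, ...) to the user's (x, y, z) position at that instant."""
    return [pos for _, pos in sorted(positions_by_time.items())]
```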
Optionally, determining the spatial position of the key point of the user at the time according to the extracted multiple images may include: calculating the epipolar line where the key point is located according to each image in the plurality of images at the moment; and determining the spatial position of the key point of the user at the moment according to the epipolar line where the key point is obtained by calculating each image.
After the key points in each shooting device's images are detected, the matching results for the same user's key points across different shooting devices can be obtained from the extrinsic parameters of the devices by epipolar matching, and the spatial positions of the key points can thereby be reconstructed.
Assuming that there are two photographing devices, the photographed images are referred to as image a and image B, respectively, and a certain key point of a certain user appears in both images at the same time. The positions of the key points in the image A and the image B can be obtained through an OpenPose algorithm and the like, and the spatial positions of the key points can be determined through epipolar line matching.
The principle of epipolar matching is described below. For a point in space, if its position in one image is known, only the straight line on which the point lies can be recovered from that image alone; the specific spatial position of the point on that line cannot be determined.
To see why, suppose point 1 and point 2 lie in space on a straight line along the shooting device's viewing ray (perpendicular to the image plane). Then only point 1, which is closer to the device, is visible in the image; point 2, which is farther away, projects to the same pixel and cannot be distinguished.
From this analysis, an image taken by one shooting device can determine only the straight line on which a point lies, not where on that line the point is. If another shooting device also captures the point, however, its spatial position can be determined.
Optionally, determining the spatial position of the user's key point according to the epipolar line where the user's key point is located, which is obtained by calculating through each image, includes: determining the intersection point of polar lines where the key points of the user are located, wherein the intersection point is obtained through calculation of each image; and determining the spatial position of the key point of the user according to the intersection point.
In particular, assuming that a plurality of epipolar lines where a key point is located can be calculated from a plurality of images, the intersection position of these epipolar lines is the position of the key point.
In general, a key point appears in only two shooting devices whose shooting ranges overlap. From the images shot by those two devices, the epipolar line on which the key point lies can be found in each, and the intersection of the two lines in space is the key point. If the shooting ranges of three devices share a common overlap, the key point may appear in images from all three; similarly, three straight lines on which the key point lies can be obtained, and their intersection is the position of the key point.
Still by way of example of the image a and the image B, as described above, assuming that a point appears in both the image a and the image B at a certain time, a straight line where the point is located may be determined according to the image a, another straight line where the point is located may be determined according to the image B, and an intersection point of the two straight lines is a spatial position of the point.
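Because detections are noisy, the two rays recovered from image A and image B rarely meet exactly; a common remedy (an assumption here, not stated in the embodiment) is to take the midpoint of the shortest segment between the two lines:

```python
import numpy as np

def ray_intersection(p1, d1, p2, d2):
    """Approximate intersection of two 3-D lines: the midpoint of the
    shortest segment between them. p_i is a point on line i (e.g. a
    camera centre) and d_i its direction vector."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Closest points are p1 + s*d1 and p2 + u*d2; solve for s and u.
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b            # zero only for parallel lines
    s = (b * e - c * d) / denom
    u = (a * e - b * d) / denom
    return ((p1 + s * d1) + (p2 + u * d2)) / 2.0
```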
By this method, the spatial positions of key points such as the user's head and hands can be determined. A spatial position can be represented by xyz coordinates.
The straight lines obtained from the two shooting devices for key points of different users generally have no intersection; if they do intersect, whether an intersection is a genuine key point can be decided from its height.
Specifically, determining the spatial position of the user's key point from the intersections may include: if there is exactly one intersection of the epipolar lines, the spatial position of the key point is the spatial position of that intersection; if the epipolar lines have a plurality of intersections, the intersections are searched within the height range corresponding to the key point, and the spatial position of the key point is the spatial position of the intersection falling within that range.
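The height-range rule can be sketched as follows (hedged: the function name and range values are placeholders):

```python
def pick_by_height(intersections, z_min, z_max):
    """Select the key point among candidate epipolar intersections.
    With a single intersection it is taken directly; with several,
    only the one whose height is plausible for this key point is kept
    (e.g. a hand is unlikely to be 3 m above the floor)."""
    if len(intersections) == 1:
        return intersections[0]
    for point in intersections:
        if z_min <= point[2] <= z_max:
            return point
    return None
```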
For example, there are two users in a shooting range common to the shooting device a and the shooting device B. The shooting device a shoots hands (indicated by dots) P1 and P2 of two users, and according to the image shot by the shooting device a, straight lines where the hands P1 and P2 are located can be determined to be LA1 and LA2 respectively. The shooting device B also shoots the hands P1 and P2 of the two users, and the straight lines where P1 and P2 are located can be determined as LB1 and LB2 respectively according to the images shot by the shooting device B.
Then the intersection of LA1 and LB1 is the first user's hand P1, and the intersection of LA2 and LB2 is the second user's hand P2. In general, LA1 and LB2 have no intersection, nor do LA2 and LB1; if they do, whether a point is a user's hand can be decided from the height of the intersection in space. For example, an intersection 3 meters above the floor is obviously not a hand.
According to the embodiment of the invention, a binocular network of ordinary shooting devices can be constructed, and the user's spatial position and movement track can be located in real time through key-point matching. Existing schemes perform spatial positioning and trajectory determination with depth shooting devices, but such hardware has limitations, such as unstable USB data transmission, constraints that USB cabling imposes on store fitting-out, and a limited effective depth range.
The key point detection method used in the embodiment of the invention can also be any other deep learning detection algorithm, such as Mask-RCNN and the like. The trajectory tracking algorithm used in the present invention may be any multi-target tracking algorithm. The calibration method used in the present invention may be any other calibration method.
In summary, the user trajectory determination method provided in the embodiment of the present invention obtains the video streams of a user shot by a plurality of shooting devices in a shooting device group, and determines the user's movement track from those video streams, where the shooting range of each device at least partially coincides with that of at least one other device. A user can thus be spatially positioned and tracked quickly and accurately using ordinary shooting devices, at low cost, without being limited by the effective distance range of a depth shooting device, and the method can be used in any type of store.
Fig. 5 is a flowchart illustrating a second embodiment of a user trajectory determination method according to an embodiment of the present invention. As shown in fig. 5, the method in this embodiment may include:
step 501, acquiring feature identification information detected by identity detection equipment;
step 502, determining identity information of a user according to the feature identification information;
step 503, acquiring video streams of a plurality of shooting devices in the shooting device group shooting a user;
and 504, identifying the user according to the identity information of the user, and determining the movement track of the user according to the video stream shot by the shooting equipment group.
By the method described in the above embodiment, the position information of the key point in the space at each time can be determined, so that the continuous position tracking of the same user can be realized. Furthermore, the user can be tracked by combining the identity information of the user, and the corresponding relation between the identity information of the user and the positions of the key points at all times is determined.
Optionally, the obtaining of the feature identification information detected by the identity detection device when the user enters the store may include: acquiring a face image of a user shot by an acquisition device; and/or acquiring a scanning result obtained by scanning the two-dimensional code presented by the user by the two-dimensional code scanning device; and/or obtaining a reading result obtained by reading the RFID label of the user terminal by the RFID reading device; correspondingly, the feature identification information includes at least one of the following: the face image, the scanning result of the two-dimensional code and the reading result of the RFID label.
Optionally, the method may further include: the method comprises the steps of obtaining a playback request sent by display equipment of a store, wherein the playback request comprises identity information of a user; determining the movement track of the user in the store according to the identity information of the user; and sending the movement track of the user to a display device of a store, so that the display device displays the movement track of the user.
In particular, a display device may be provided within the store through which the user may review his shopping experience. The display device can determine the identity information of the user in various ways, such as face recognition or card swiping, and the like, the server can send the movement track of the user to the display device, and the movement track is displayed or played to the user by the display device, so that the user can know the shopping process of the user in a store conveniently, and convenience is provided for the user.
In addition, in the shopping scenario of an unmanned retail store, the commodities taken by the user along the movement track can be identified based on that track, enabling adding commodities to the cart and settlement, thereby achieving unattended checkout. Accordingly, an embodiment of the present invention further provides a control method, and as shown in fig. 6, the method may include:
step 601: a plurality of shooting devices in a shooting device group shoot video streams of a user.
Step 602: and determining the movement track of the user according to the video streams shot by the plurality of shooting devices.
Wherein the shooting range of each shooting device in the shooting device group at least partially coincides with the shooting range of at least one other shooting device; the commodities taken by the user along the movement track are identified.
For the operations in steps 601 to 602, reference may be made to steps 401 to 402, which are not described herein again.
Step 603: and adding the identification information of the identified commodity into the shopping list of the user according to the identity information of the user.
Optionally, the method may further include: and when the user is detected to leave the store, the shopping list of the user is settled.
In addition to the movement trajectory, the server may transmit identification information of the items in the shopping list of the user and information such as the time and the position of adding each item to the display device, and the information is displayed to the user by the display device. In other implementations, the server may also send the information to the user terminal for display by the user terminal to the user.
Fig. 7 is a schematic diagram for displaying a user's movement track according to an embodiment of the present invention. As shown in fig. 7, the user's movement track is indicated by a dotted line, and each rectangle represents a shelf. The time, location, and commodity information of each pick-up in front of a shelf may also be displayed to the user. For example, the location where the user took a commodity may be marked, and the pick-up time and the commodity's identification information displayed, such as "took article A at 11:30", "took article B at 11:42", and so on.
In conclusion, according to the embodiment of the invention, the user's position can be tracked to determine the movement track while the user shops, and the commodities taken along the track can be determined, improving the efficiency of shopping settlement. The user perceives nothing during the whole process, which effectively improves the user experience.
Fig. 8 is a flowchart illustrating a first embodiment of a user positioning method according to an embodiment of the present invention. As shown in fig. 8, the method in this embodiment may include:
step 801: images obtained by shooting user behaviors by a plurality of shooting devices in the shooting device group are acquired.
Step 802: and determining the spatial position of the user according to the images shot by the plurality of shooting devices.
Wherein the shooting range of each shooting device in the shooting device group at least partially coincides with the shooting range of at least one other shooting device.
Alternatively, determining the spatial position of the user from the images captured by the plurality of capturing devices may include:
and determining the spatial position of the key point of the user according to a plurality of images shot by the plurality of shooting devices at the same moment.
Alternatively, determining the spatial position of the key point of the user according to a plurality of images taken by the plurality of photographing devices at the same time may include:
calculating the epipolar line where the key point is located according to each image in the plurality of images at the moment;
and determining the spatial position of the key point of the user at the moment according to the epipolar line where the key point is obtained by calculating each image.
Optionally, determining the spatial position of the key point of the user at the time according to the epipolar line where the key point is calculated from each image may include:
determining the intersection point of polar lines where the key points of the user are located, wherein the intersection point is obtained through calculation of each image;
and determining the spatial position of the key point of the user according to the intersection point.
Optionally, determining the spatial position of the key point of the user according to the intersection point includes:
if one polar line intersection point exists, the spatial position of the key point of the user is the spatial position of the polar line intersection point;
if the polar lines have a plurality of intersection points, searching the intersection points in the height range according to the height range corresponding to the key points, wherein the spatial position of the key points of the user is the spatial position of the intersection points in the height range.
Optionally, the method may further include:
calibrating each shooting device in the shooting device group, and constructing a physical coordinate system of the shooting devices;
performing picture time synchronization on each shooting device in the shooting device group;
the spatial position of the user may particularly refer to the position coordinates in the physical coordinate system of the capturing device.
Further, the method may further include:
acquiring feature identification information detected by identity detection equipment;
determining the identity information of the user according to the feature identification information;
correspondingly, determining the spatial position of the user according to the images shot by the plurality of shooting devices comprises:
identifying the user according to the identity information of the user, and determining the spatial position of the user according to the images shot by the plurality of shooting devices.
The user trajectory determination apparatus of one or more embodiments of the present invention is described in detail below. Those skilled in the art will appreciate that such an apparatus can be constructed from commercially available hardware components configured according to the steps taught in this scheme.
Fig. 9 is a schematic structural diagram of an embodiment of a user trajectory determination apparatus according to an embodiment of the present invention. As shown in fig. 9, the apparatus may include:
a first obtaining module 11, configured to obtain video streams of a user shot by multiple shooting devices in a shooting device group;
a track determining module 12, configured to determine a moving track of the user according to the video streams captured by the multiple capturing devices;
wherein the shooting range of each shooting device in the shooting device group at least partially overlaps the shooting range of at least one other shooting device.
Optionally, the trajectory determination module 12 may be specifically configured to: extract images at the same moment from each video stream, and determine the spatial position of the user's key point at that moment according to the extracted images; and determine the movement track of the user according to the spatial positions of the user's key point at a plurality of moments.
Optionally, the trajectory determination module 12 may be specifically configured to: extract images at the same moment from each video stream, and, for each of the plurality of images at that moment, calculate the epipolar line of the key point; determine the spatial position of the user's key point at that moment according to the epipolar lines calculated from the respective images; and determine the movement track of the user according to the spatial positions of the user's key point at a plurality of moments.
Optionally, the trajectory determination module 12 may be specifically configured to: extract images at the same moment from each video stream, and, for each of the plurality of images at that moment, calculate the epipolar line of the key point; determine the intersection point of the epipolar lines of the user's key point calculated from the respective images; determine the spatial position of the user's key point according to the intersection point; and determine the movement track of the user according to the spatial positions of the user's key point at a plurality of moments.
Optionally, the trajectory determination module 12 may be specifically configured to: extract images at the same moment from each video stream, and, for each of the plurality of images at that moment, calculate the epipolar line of the key point; determine the intersection point of the epipolar lines of the user's key point calculated from the respective images; if the epipolar lines have a single intersection point, take the spatial position of that intersection point as the spatial position of the user's key point, and if the epipolar lines have a plurality of intersection points, search for the intersection point within the height range corresponding to the key point and take its spatial position as the spatial position of the user's key point; and determine the movement track of the user according to the spatial positions of the user's key point at a plurality of moments.
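Assembling the movement track from the per-moment spatial positions of the key point can be sketched as below; a minimal illustration in which a moment where triangulation fails is represented as None, an assumption for this example:

```python
def movement_track(positions_by_time):
    """Order per-moment key-point positions (time -> (x, y, z) or None)
    into a trajectory, skipping moments where no position was resolved."""
    return [(t, p) for t, p in sorted(positions_by_time.items()) if p is not None]

def track_length(track):
    """Total path length of a trajectory, summing Euclidean steps."""
    total = 0.0
    for (_, a), (_, b) in zip(track, track[1:]):
        total += sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return total
```

The trajectory module's final step corresponds to ordering these positions by time; the path-length helper is only an example of a derived quantity.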
Optionally, the apparatus may further include:
a shooting device calibration module, configured to calibrate the shooting devices in the shooting device group, construct a physical coordinate system of the shooting devices, and perform frame time synchronization on each shooting device in the shooting device group;
correspondingly, when the trajectory determination module extracts images at the same moment from each video stream and determines the spatial position of the user's key point at that moment according to the extracted images, it specifically determines the position coordinates of the user's key point at that moment relative to the physical coordinate system of the shooting devices.
Optionally, the first obtaining module 11 may be further configured to: acquire feature identification information detected by an identity detection device, and determine the identity information of the user according to the feature identification information. Accordingly, the trajectory determination module 12 may be specifically configured to: identify the user according to the identity information of the user, and determine the movement track of the user according to the video streams shot by the shooting device group.
Optionally, the first obtaining module 11 may be further configured to: obtain a playback request sent by a display device of a store, where the playback request includes the identity information of a user; determine the movement track of the user in the store according to the identity information of the user; and send the movement track of the user to the display device of the store, so that the display device displays the movement track of the user.
Optionally, the first obtaining module 11 may be specifically configured to: acquire a face image of the user shot by a collection device, and/or acquire a scanning result obtained by a two-dimensional code scanning device scanning a two-dimensional code presented by the user, and/or acquire a reading result obtained by an RFID reading device reading an RFID tag of a user terminal. Correspondingly, the feature identification information includes at least one of the following: the face image, the scanning result of the two-dimensional code, and the reading result of the RFID tag.
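Resolving the three kinds of feature identification information to a user identity can be sketched along the following lines; the record schema, field names, and face-distance threshold are illustrative assumptions, not the patent's implementation:

```python
def resolve_identity(feature, users, face_threshold=0.6):
    """Map detected feature identification info to a registered user id.
    feature: one of {'qr_payload': ...}, {'rfid_tag': ...},
    or {'face_embedding': [...]}; users: registered records."""
    if 'qr_payload' in feature:   # exact match on the presented QR code
        return next((u['id'] for u in users
                     if u.get('qr') == feature['qr_payload']), None)
    if 'rfid_tag' in feature:     # exact match on the terminal's RFID tag
        return next((u['id'] for u in users
                     if u.get('rfid') == feature['rfid_tag']), None)
    if 'face_embedding' in feature:  # nearest enrolled face under a threshold
        q = feature['face_embedding']
        best_id, best_d = None, float('inf')
        for u in users:
            if 'face' in u:
                d = sum((a - b) ** 2 for a, b in zip(q, u['face'])) ** 0.5
                if d < best_d:
                    best_id, best_d = u['id'], d
        return best_id if best_d <= face_threshold else None
    return None
```

QR and RFID lookups are exact; face matching is illustrated here as nearest-embedding search with a rejection threshold.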
The apparatus shown in fig. 9 can execute the user trajectory determination method provided in the first and second method embodiments above; for parts of this embodiment not described in detail, reference may be made to the related description of the foregoing embodiments. For the implementation process and technical effects of this technical solution, refer to the description in the foregoing embodiments; details are not repeated here.
Fig. 10 is a schematic structural diagram of an embodiment of an electronic device according to an embodiment of the present invention. The electronic device may be any electronic device with a video processing function, such as a server. As shown in fig. 10, the electronic device may include: a processor 21 and a memory 22. Wherein the memory 22 is used for storing a program for supporting an electronic device to execute the user trajectory determination method provided by any one of the preceding embodiments, and the processor 21 is configured to execute the program stored in the memory 22.
The program comprises one or more computer instructions which, when executed by the processor 21, are capable of performing the steps of:
acquiring video streams of a user shot by a plurality of shooting devices in a shooting device group;
determining a movement track of the user according to the video streams shot by the plurality of shooting devices;
wherein the shooting range of each shooting device in the shooting device group at least partially overlaps the shooting range of at least one other shooting device.
Optionally, the processor 21 is further configured to perform all or part of the steps in the embodiments shown in fig. 1 to 6.
The electronic device may further include a communication interface 23 for communicating with other devices or a communication network.
The electronic device may be a physical device or an elastic computing host provided by a cloud computing platform, and the electronic device may be a cloud server, and the processor, the memory, and the like may be basic server resources rented or purchased from the cloud computing platform.
Additionally, embodiments of the present invention provide a computer-readable storage medium storing computer instructions that, when executed by a processor, cause the processor to perform acts comprising:
acquiring video streams of a user shot by a plurality of shooting devices in a shooting device group;
determining a movement track of the user according to the video streams shot by the plurality of shooting devices;
wherein the shooting range of each shooting device in the shooting device group at least partially overlaps the shooting range of at least one other shooting device.
The computer instructions, when executed by a processor, may further cause the processor to perform all or part of the steps involved in the user trajectory determination method in the above embodiments.
Fig. 11 is a schematic structural diagram of a first embodiment of a control device according to an embodiment of the present invention, and as shown in fig. 11, the control device may include:
a first obtaining module 13, configured to obtain video streams of a user shot by multiple shooting devices in a shooting device group;
a track determining module 14, configured to determine a movement track of the user according to the video streams captured by the multiple shooting devices; wherein the shooting range of each shooting device in the shooting device group at least partially overlaps the shooting range of at least one other shooting device.
The identification module 15 is configured to identify the commodity taken by the user in the moving track;
and the control module 16 is configured to add the identification information of the identified commodity to the shopping list of the user according to the identity information of the user.
Optionally, the control module is further configured to identify the commodity taken by the user along the movement track, and to add the identification information of the identified commodity to the shopping list of the user according to the identity information of the user.
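Adding an identified commodity to the shopping list keyed by the user's identity can be sketched as below; the data layout (a dict of item counts per user) is an illustrative assumption:

```python
def add_to_shopping_list(shopping_lists, user_id, item_id, quantity=1):
    """Record an identified commodity against the user's shopping list;
    repeated takes of the same item accumulate the quantity."""
    user_list = shopping_lists.setdefault(user_id, {})
    user_list[item_id] = user_list.get(item_id, 0) + quantity
    return user_list
```

At settlement (for example, when the user passes an exit gate), the accumulated list can be priced and charged.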
The apparatus shown in fig. 11 may perform the control method provided in the foregoing method embodiment; for parts of this embodiment not described in detail, reference may be made to the related description of the foregoing embodiments. For the implementation process and technical effects of this technical solution, refer to the description in the foregoing embodiments; details are not repeated here.
Fig. 12 is a schematic structural diagram of a second electronic device according to an embodiment of the present invention. The electronic device may be any electronic device with a video processing function, such as a server. As shown in fig. 12, the electronic device may include: a processor 24 and a memory 25. Wherein the memory 25 is used for storing a program for supporting an electronic device to execute the control method provided by any one of the foregoing embodiments, and the processor 24 is configured to execute the program stored in the memory 25.
The program comprises one or more computer instructions which, when executed by the processor 24, are capable of performing the steps of:
acquiring video streams of a user shot by a plurality of shooting devices in a shooting device group;
determining a movement track of the user according to the video streams shot by the plurality of shooting devices; wherein the shooting range of each shooting device in the shooting device group at least partially overlaps the shooting range of at least one other shooting device;
identifying the commodities taken by the user in the moving track;
and adding the identification information of the commodity obtained by identification into the shopping list of the user according to the identity information of the user.
The electronic device may further include a communication interface 26 for communicating with other devices or a communication network.
The electronic device may be a physical device or an elastic computing host provided by a cloud computing platform, and the electronic device may be a cloud server, and the processor, the memory, and the like may be basic server resources rented or purchased from the cloud computing platform.
Additionally, embodiments of the present invention provide a computer-readable storage medium storing computer instructions that, when executed by a processor, cause the processor to perform acts comprising:
acquiring video streams of a user shot by a plurality of shooting devices in a shooting device group;
determining a movement track of the user according to the video streams shot by the plurality of shooting devices; wherein the shooting range of each shooting device in the shooting device group at least partially overlaps the shooting range of at least one other shooting device;
identifying the commodities taken by the user in the moving track;
and adding the identification information of the identified commodity into the shopping list of the user according to the identity information of the user.
The computer instructions, when executed by a processor, may further cause the processor to perform all or part of the steps involved in the control method in the above embodiments.
Fig. 13 is a schematic structural diagram of a first embodiment of a user positioning apparatus according to an embodiment of the present invention, and as shown in fig. 13, the apparatus may include:
a second obtaining module 17, configured to obtain images obtained by shooting user behaviors by multiple shooting devices in the shooting device group;
a positioning module 18, configured to determine a spatial position of the user according to the images captured by the multiple shooting devices; wherein the shooting range of each shooting device in the shooting device group at least partially overlaps the shooting range of at least one other shooting device.
The apparatus shown in fig. 13 may perform the user positioning method provided in the foregoing method embodiment, and reference may be made to the related description of the foregoing embodiment for a part of this embodiment that is not described in detail. The implementation process and technical effect of the technical solution refer to the description in the foregoing embodiments, and are not described herein again.
Fig. 14 is a schematic structural diagram of a third electronic device according to an embodiment of the present invention. The electronic device may be any electronic device with a video processing function, such as a server. As shown in fig. 14, the electronic device may include: a processor 27 and a memory 28. The memory 28 is used for storing a program that supports the electronic device in executing the user positioning method provided by any one of the foregoing embodiments, and the processor 27 is configured to execute the program stored in the memory 28.
The program comprises one or more computer instructions which, when executed by the processor 27, are capable of performing the steps of:
acquiring images obtained by shooting user behaviors by a plurality of shooting devices in a shooting device group;
determining the spatial position of the user according to the images shot by the plurality of shooting devices; wherein the shooting range of each shooting device in the shooting device group at least partially overlaps the shooting range of at least one other shooting device.
The electronic device may further include a communication interface 29 for communicating with other devices or a communication network.
The electronic device may be a physical device or an elastic computing host provided by a cloud computing platform, and the electronic device may be a cloud server, and the processor, the memory, and the like may be basic server resources rented or purchased from the cloud computing platform.
Additionally, embodiments of the present invention provide a computer-readable storage medium storing computer instructions that, when executed by a processor, cause the processor to perform acts comprising:
acquiring images obtained by a plurality of shooting devices in a shooting device group shooting user behaviors; and determining the spatial position of the user according to the images shot by the plurality of shooting devices; wherein the shooting range of each shooting device in the shooting device group at least partially overlaps the shooting range of at least one other shooting device.
The computer instructions, when executed by a processor, may further cause the processor to perform all or part of the steps involved in the user positioning method in the above embodiments.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and of course can also be implemented by a combination of hardware and software. Based on this understanding, the above technical solutions, in essence or in the part contributing to the prior art, may be embodied in the form of a computer program product, which may be embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable network connection device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable network connection device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable network connection device to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable network connection device to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, and not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (21)

1. A user trajectory determination system, comprising: a shooting device group and a server;
the shooting equipment group is used for shooting the behaviors of the user and sending the shot video stream to the server;
the shooting device group comprises a plurality of shooting devices, and the shooting range of each shooting device at least partially overlaps the shooting range of at least one other shooting device;
the server is used for acquiring video streams of a user shot by a plurality of shooting devices in the shooting device group; extracting images at the same moment from each video stream, and, for each of the plurality of images at that moment, calculating the epipolar line where the key point is located; determining the intersection point of the epipolar lines of the user's key point calculated from the respective images; if the epipolar lines have a single intersection point, the spatial position of the user's key point is the spatial position of that intersection point; if the epipolar lines have a plurality of intersection points, searching for the intersection point within the height range corresponding to the key point, the spatial position of the user's key point being the spatial position of the intersection point within that height range; and determining the movement track of the user according to the spatial positions of the user's key point at a plurality of moments.
2. The system of claim 1, wherein the set of cameras is disposed on a ceiling of a store, and wherein at least a portion of the cameras' coverage area comprises a shelf deployment area in the store.
3. The system of claim 1, further comprising:
the identity detection equipment is used for detecting the characteristic identification information of the user and sending the characteristic identification information to the server;
the server is specifically configured to identify the user according to the acquired feature identification information of the user, and determine the movement track of the user according to the video streams shot by the shooting device group.
4. The user trajectory determination system of claim 3, wherein the identity detection device comprises at least one of:
the acquisition device is used for shooting a face image of a user and sending the face image to the server;
the two-dimensional code scanning device is used for scanning the two-dimensional code shown by the user and sending a scanning result to the server;
the RFID reading device is used for reading the RFID label of the user terminal and sending the reading result to the server;
correspondingly, the feature identification information includes at least one of the following: the face image, the scanning result of the two-dimensional code and the reading result of the RFID label.
5. A control system, comprising: a shooting device group and a server;
the shooting equipment group is used for shooting the behaviors of the user and sending the shot video stream to the server;
the shooting device group comprises a plurality of shooting devices, and the shooting range of each shooting device at least partially overlaps the shooting range of at least one other shooting device;
the server is used for acquiring video streams of a user shot by a plurality of shooting devices in the shooting device group; extracting images at the same moment from each video stream, and, for each of the plurality of images at that moment, calculating the epipolar line where the key point is located; determining the intersection point of the epipolar lines of the user's key point calculated from the respective images; if the epipolar lines have a single intersection point, the spatial position of the user's key point is the spatial position of that intersection point; if the epipolar lines have a plurality of intersection points, searching for the intersection point within the height range corresponding to the key point, the spatial position of the user's key point being the spatial position of the intersection point within that height range; determining a movement track of the user according to the spatial positions of the user's key point at a plurality of moments, and identifying the commodity taken by the user along the movement track; and adding the identification information of the identified commodity to the shopping list of the user.
6. The control system of claim 5, further comprising: an outlet gate;
correspondingly, the server is further configured to: the shopping list of the user is settled when the user passes through an exit gate.
7. A user location system, comprising: a shooting device group and a server;
the shooting equipment group is used for shooting the behaviors of the user and sending the shot images to the server;
the shooting device group comprises a plurality of shooting devices, and the shooting range of each shooting device at least partially overlaps the shooting range of at least one other shooting device;
the server is used for, according to a plurality of images shot by the plurality of shooting devices at the same moment and for each of the plurality of images at that moment, calculating the epipolar line where the key point is located; determining the intersection point of the epipolar lines of the user's key point calculated from the respective images; if the epipolar lines have a single intersection point, determining the spatial position of the user's key point as the spatial position of that intersection point; and if the epipolar lines have a plurality of intersection points, searching for the intersection point within the height range corresponding to the key point, and determining the spatial position of the user's key point as the spatial position of the intersection point within that height range.
8. A user trajectory determination method, comprising:
acquiring video streams of a user shot by a plurality of shooting devices in a shooting device group;
extracting images at the same moment from each video stream, and determining the spatial position of a key point of a user at the moment according to the extracted images;
determining a movement track of the user according to the spatial positions of the user's key point at a plurality of moments; wherein the shooting range of each shooting device in the shooting device group at least partially overlaps the shooting range of at least one other shooting device;
the extracting images at the same time from each video stream, and determining the spatial position of the key point of the user at the time according to the extracted images comprises:
extracting images at the same moment from each video stream, and, for each of the plurality of images at that moment, calculating the epipolar line where the key point is located;
determining the intersection point of the epipolar lines of the user's key point calculated from the respective images;
if the epipolar lines have a single intersection point, the spatial position of the user's key point is the spatial position of that intersection point;
if the epipolar lines have a plurality of intersection points, searching for the intersection point within the height range corresponding to the key point, the spatial position of the user's key point being the spatial position of the intersection point within that height range.
9. The method of claim 8, further comprising:
calibrating the shooting devices in the shooting device group, and constructing a physical coordinate system of the shooting devices;
performing frame time synchronization on each shooting device in the shooting device group;
the extracting images of the same moment from each video stream, and determining the spatial position of the key point of the user at the moment according to the extracted images comprises:
and extracting images at the same moment from each video stream, and determining the position coordinates of the key points of the user at the moment relative to the physical coordinate system of the shooting equipment according to the extracted images.
10. The method of claim 8, further comprising:
acquiring feature identification information detected by identity detection equipment;
determining the identity information of the user according to the feature identification information;
correspondingly, determining the movement track of the user according to the video streams shot by the plurality of shooting devices comprises:
and identifying the user according to the identity information of the user, and determining the movement track of the user according to the video stream shot by the shooting equipment group.
11. The method of claim 10, further comprising:
the method comprises the steps of obtaining a playback request sent by display equipment of a store, wherein the playback request comprises identity information of a user;
determining the movement track of the user in the store according to the identity information of the user;
and sending the movement track of the user to a display device of a store, so that the display device displays the movement track of the user.
12. The method of claim 10, wherein acquiring the feature identification information detected by the identity detection device when the user enters the store comprises:
acquiring a face image of a user shot by an acquisition device;
and/or acquiring a scanning result obtained by scanning the two-dimensional code presented by the user by the two-dimensional code scanning device;
and/or obtaining a reading result obtained by reading the RFID label of the user terminal by the RFID reading device;
correspondingly, the feature identification information includes at least one of the following: the face image, the scanning result of the two-dimensional code and the reading result of the RFID label.
13. A control method, comprising:
acquiring video streams obtained by a plurality of shooting devices in a shooting device group shooting a user;
extracting images at the same moment from each video stream, and calculating, for each image in the plurality of images at that moment, the epipolar line on which the key point lies;
determining the intersection point of the epipolar lines on which the key points of the user lie, the epipolar lines being calculated from the respective images;
if the epipolar lines have a single intersection point, taking the spatial position of the intersection point as the spatial position of the key point of the user;
if the epipolar lines have a plurality of intersection points, searching for the intersection point within the height range corresponding to the key point, and taking the spatial position of the intersection point within the height range as the spatial position of the key point of the user;
determining the movement track of the user according to the spatial positions of the key points of the user at a plurality of moments;
identifying the commodities taken by the user along the movement track;
adding, according to the identity information of the user, the identification information of the identified commodities to the shopping list of the user;
wherein the shooting range of each shooting device in the shooting device group at least partially overlaps the shooting range of at least one other shooting device.
14. The method of claim 13, further comprising:
settling the shopping list of the user when the user is detected to leave the store.
15. A user positioning method, comprising:
acquiring images obtained by a plurality of shooting devices in a shooting device group shooting the behavior of a user;
calculating, for each image in a plurality of images shot by the plurality of shooting devices at the same moment, the epipolar line on which the key point lies; determining the intersection point of the epipolar lines on which the key points of the user lie, the epipolar lines being calculated from the respective images; if the epipolar lines have a single intersection point, determining the spatial position of the intersection point as the spatial position of the key point of the user; if the epipolar lines have a plurality of intersection points, searching for the intersection point within the height range corresponding to the key point, and determining the spatial position of the intersection point within the height range as the spatial position of the key point of the user; wherein the shooting range of each shooting device in the shooting device group at least partially overlaps the shooting range of at least one other shooting device.
16. A user trajectory determination device, comprising:
a first acquisition module, configured to acquire video streams obtained by a plurality of shooting devices in a shooting device group shooting a user;
a track determination module, configured to extract images at the same moment from each video stream, and calculate, for each image in the plurality of images at that moment, the epipolar line on which the key point lies; determine the intersection point of the epipolar lines on which the key points of the user lie, the epipolar lines being calculated from the respective images; if the epipolar lines have a single intersection point, take the spatial position of the intersection point as the spatial position of the key point of the user; if the epipolar lines have a plurality of intersection points, search for the intersection point within the height range corresponding to the key point, and take the spatial position of the intersection point within the height range as the spatial position of the key point of the user; and determine the movement track of the user according to the spatial positions of the key points of the user at a plurality of moments;
wherein the shooting range of each shooting device in the shooting device group at least partially overlaps the shooting range of at least one other shooting device.
17. A control device, comprising:
a first acquisition module, configured to acquire video streams obtained by a plurality of shooting devices in a shooting device group shooting a user;
a trajectory determination module, configured to extract images at the same moment from each video stream, and calculate, for each image in the plurality of images at that moment, the epipolar line on which the key point lies; determine the intersection point of the epipolar lines on which the key points of the user lie, the epipolar lines being calculated from the respective images; if the epipolar lines have a single intersection point, take the spatial position of the intersection point as the spatial position of the key point of the user; if the epipolar lines have a plurality of intersection points, search for the intersection point within the height range corresponding to the key point, and take the spatial position of the intersection point within the height range as the spatial position of the key point of the user; and determine the movement track of the user according to the spatial positions of the key points of the user at a plurality of moments; wherein the shooting range of each shooting device in the shooting device group at least partially overlaps the shooting range of at least one other shooting device;
an identification module, configured to identify the commodities taken by the user along the movement track;
a control module, configured to add the identification information of the identified commodities to the shopping list of the user according to the identity information of the user.
18. A user positioning device, comprising:
a second acquisition module, configured to acquire images obtained by a plurality of shooting devices in a shooting device group shooting the behavior of a user;
a positioning module, configured to calculate, for each image in a plurality of images shot by the plurality of shooting devices at the same moment, the epipolar line on which the key point lies; determine the intersection point of the epipolar lines on which the key points of the user lie, the epipolar lines being calculated from the respective images; if the epipolar lines have a single intersection point, determine the spatial position of the intersection point as the spatial position of the key point of the user; if the epipolar lines have a plurality of intersection points, search for the intersection point within the height range corresponding to the key point, and determine the spatial position of the intersection point within the height range as the spatial position of the key point of the user; wherein the shooting range of each shooting device in the shooting device group at least partially overlaps the shooting range of at least one other shooting device.
19. An electronic device, comprising: a memory and a processor; the memory is configured to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the user trajectory determination method of any one of claims 8 to 12.
20. An electronic device, comprising: a memory and a processor; the memory is configured to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the control method of claim 13 or 14.
21. An electronic device, comprising: a memory and a processor; the memory is configured to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the user positioning method of claim 15.
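Claims 13 and 15 to 18 all recite the same localization step: back-project the detected key point from each synchronized image, intersect the resulting rays, and, when several candidate intersections exist, keep the one whose height falls inside the range expected for that key point. The geometry can be sketched as follows; this is a minimal illustration under assumed conventions (pinhole camera model, midpoint triangulation, height taken as the world z coordinate), not the patented implementation, and all function names are hypothetical:

```python
import numpy as np

def backproject_ray(K, R, t, pixel):
    """Back-project a 2D keypoint into a 3D ray (camera center + unit direction).

    K is the 3x3 intrinsic matrix; R, t map world points into the camera
    frame (x_cam = R @ x_world + t)."""
    center = -R.T @ t                          # camera center in world coordinates
    direction = R.T @ np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    return center, direction / np.linalg.norm(direction)

def ray_intersection(c1, d1, c2, d2):
    """Approximate intersection of two rays: midpoint of the shortest segment."""
    A = np.stack([d1, -d2], axis=1)            # solve c1 + s*d1 ≈ c2 + u*d2
    (s, u), *_ = np.linalg.lstsq(A, c2 - c1, rcond=None)
    return (c1 + s * d1 + c2 + u * d2) / 2.0

def locate_keypoint(rays, height_range=None):
    """Intersect all ray pairs; if several candidates remain, keep those whose
    height (z coordinate) lies inside the range expected for this key point."""
    candidates = [ray_intersection(*rays[i], *rays[j])
                  for i in range(len(rays)) for j in range(i + 1, len(rays))]
    if height_range is not None and len(candidates) > 1:
        lo, hi = height_range
        candidates = [p for p in candidates if lo <= p[2] <= hi]
    return np.mean(candidates, axis=0) if candidates else None
```

Repeating this per key point across synchronized frames yields the sequence of spatial positions from which the movement track is assembled. The midpoint method is one common choice for intersecting rays that do not meet exactly due to detection noise.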
CN201910019396.7A 2019-01-09 2019-01-09 User track determination system, method, device and server Active CN111429194B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910019396.7A CN111429194B (en) 2019-01-09 2019-01-09 User track determination system, method, device and server

Publications (2)

Publication Number Publication Date
CN111429194A CN111429194A (en) 2020-07-17
CN111429194B true CN111429194B (en) 2023-04-07

Family

ID=71546030

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910019396.7A Active CN111429194B (en) 2019-01-09 2019-01-09 User track determination system, method, device and server

Country Status (1)

Country Link
CN (1) CN111429194B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114937241B (en) * 2022-06-01 2024-03-26 北京凯利时科技有限公司 Transition zone-based passenger flow statistics method and system and computer program product
CN117455595A (en) * 2023-11-07 2024-01-26 浙江云伙计科技有限公司 Visual AI-based unmanned intelligent on-duty method and system
CN117710067A (en) * 2024-02-05 2024-03-15 成都工业职业技术学院 Edge computing method, device, equipment and readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102999939A (en) * 2012-09-21 2013-03-27 魏益群 Coordinate acquisition device, real-time three-dimensional reconstruction system, real-time three-dimensional reconstruction method and three-dimensional interactive equipment
WO2016026455A1 (en) * 2014-08-22 2016-02-25 Nubia Technology Co., Ltd. Method and device for automatically optimizing star trail photography result
CN108921098A (en) * 2018-07-03 2018-11-30 百度在线网络技术(北京)有限公司 Human motion analysis method, apparatus, equipment and storage medium
CN109165559A (en) * 2018-07-26 2019-01-08 高新兴科技集团股份有限公司 A kind of method and apparatus generating track


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Xudong Chen et al. System integration of a vision-guided UAV for autonomous landing on moving platform. IEEE Xplore, 2016, full text. *
Zhang Hengkang; He Yuming; Zhang Genggeng; Zhu Jiebing; Yang Wenjun. A method for measuring three-dimensional motion trajectories with a single camera. Chinese Journal of Solid Mechanics, 2010, (S1), full text. *

Also Published As

Publication number Publication date
CN111429194A (en) 2020-07-17

Similar Documents

Publication Publication Date Title
US11393173B2 (en) Mobile augmented reality system
KR102454854B1 (en) Item detection system and method based on image monitoring
US20090160975A1 (en) Methods and Apparatus for Improved Image Processing to Provide Retroactive Image Focusing and Improved Depth of Field in Retail Imaging Systems
CN111429194B (en) User track determination system, method, device and server
CN110033293B (en) Method, device and system for acquiring user information
CN108921098B (en) Human motion analysis method, device, equipment and storage medium
CN112464697B (en) Visual and gravity sensing based commodity and customer matching method and device
JP2023015989A (en) Item identification and tracking system
CN104246793A (en) Three-dimensional face recognition for mobile devices
CN110335291A (en) Personage's method for tracing and terminal
EP3901841A1 (en) Settlement method, apparatus, and system
CN115249356B (en) Identification method, device, equipment and storage medium
CN110555876A (en) Method and apparatus for determining position
CN111428743B (en) Commodity identification method, commodity processing device and electronic equipment
KR102540744B1 (en) Apparatus and method for generating 3d coordinates of a person and examining the validity of 3d coordinates
JP6077425B2 (en) Video management apparatus and program
JP6176563B2 (en) Product sales equipment
JP2019096062A (en) Object tracking device, object tracking method, and object tracking program
KR102250712B1 (en) Electronic apparatus and control method thereof
RU2679200C1 (en) Data from the video camera displaying method and system
KR102540745B1 Apparatus and method for operating stores based on vision recognition
CN116188538A (en) Behavior track tracking method for multiple cameras
KR101618308B1 (en) Panoramic image acquisition and Object Detection system for Product of Interactive Online Store based Mirror World.
CN112291701B (en) Positioning verification method, positioning verification device, robot, external equipment and storage medium
Ecklbauer A mobile positioning system for android based on visual markers

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant