CN110765984A - Mobile state information display method, device, equipment and storage medium - Google Patents
- Publication number
- CN110765984A (application CN201911090206.7A)
- Authority
- CN
- China
- Prior art keywords
- data
- target user
- camera device
- movement
- movement data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/23—Recognition of whole body movements, e.g. for sport training
- G06V40/25—Recognition of walking or running movements, e.g. gait recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06V20/42—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Abstract
The present disclosure provides a method, an apparatus, a device and a storage medium for displaying movement state information. The method comprises the following steps: acquiring collected data of a first camera device, wherein the collected data comprises a collected face image and identification information of the first camera device; acquiring position description information corresponding to the identification information of the first camera device; determining movement data of the target user identified by the face image based on the position description information of the first camera device; and displaying the movement state information of the target user by using the movement data.
Description
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for displaying movement state information.
Background
When an event is held in a venue such as an exhibition or an exposition, the organizer often displays content with different themes across multiple exhibition areas or halls. When a venue contains a large number of exhibition areas or halls, participants sometimes find it difficult to decide which ones they want to visit, and an effective scheme for guiding participants is lacking.
Disclosure of Invention
In view of the above, the present disclosure provides at least one solution for displaying movement state information.
In a first aspect, the present disclosure provides a method for displaying movement state information, including:
acquiring collected data of a first camera device, wherein the collected data comprises a collected face image and identification information of the first camera device;
acquiring position description information corresponding to the identification information of the first camera device;
determining the movement data of the target user identified by the face image based on the position description information of the first camera device;
and displaying the movement state information of the target user by using the movement data.
By this method, the movement data of the target user identified by the face image collected by the first camera device can be determined based on the position description information of the first camera device, and the movement state information of the target user can be displayed using the movement data, thereby providing route guidance for other users.
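The four steps above can be sketched as a small pipeline. This is an illustrative sketch only: the camera identifiers, location strings, and the stand-in `identify_user` function are invented for the example, not names from the disclosure.

```python
def identify_user(face_image):
    # Stand-in for the face-recognition step: a real system would extract
    # features from the image and match them against known users.
    return face_image

def show_movement_state(capture, location_lookup, history):
    # Step 1: the collected data carries the face image and the camera's ID.
    face_image = capture["face_image"]
    camera_id = capture["camera_id"]
    # Step 2: position description info corresponding to the camera's ID.
    location = location_lookup[camera_id]
    # Step 3: movement data of the target user identified by the face image.
    user_id = identify_user(face_image)
    movement = history.setdefault(user_id, [])
    movement.append(location)
    # Step 4: return what a display module would present.
    return user_id, movement

history = {}
lookup = {"cam-1": "Hall A entrance", "cam-2": "Hall B"}
show_movement_state({"face_image": "alice", "camera_id": "cam-1"}, lookup, history)
user, route = show_movement_state({"face_image": "alice", "camera_id": "cam-2"}, lookup, history)
print(user, route)
```

Each new sighting of the same face extends that user's route, which is what later allows the route to be shown as guidance for other visitors.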
In a possible embodiment, the acquisition data further includes an acquisition time of the face image;
the determining the movement data of the target user identified by the face image based on the position description information of the first camera device comprises:
when it is detected that historical movement data of the target user was obtained within a set time period before the acquisition time, updating the historical movement data based on the position description information of the first camera device to obtain the movement data.
Wherein the movement data comprises the historical movement data and the position description information of the first camera device, and the historical movement data comprises position description information of at least one second camera device;
the second camera device is a camera device which collects the face image of the target user in a set time period before the collection time.
The movement data obtained based on the above embodiment includes not only the description information of the current location of the target user (i.e., the location description information of the first camera device) but also the historical movement data, whereby information reflecting the history and the current movement state can be vividly presented.
In a possible embodiment, the presenting the movement state information of the target user by using the movement data includes:
and switching the displayed movement state information of the target user corresponding to the historical movement data into the movement state information of the target user corresponding to the movement data according to the set switching special effect.
Through the embodiment, the displayed movement state information of the target user can be updated.
In a possible embodiment, the acquisition data further includes an acquisition time of the face image;
the determining the movement data of the target user identified by the face image based on the position description information of the first camera device comprises:
when it is detected that no historical movement data of the target user was obtained within a set time period before the acquisition time, taking the position description information of the first camera device as the movement data.
In a possible embodiment, the presenting the movement state information of the target user by using the movement data includes:
and displaying the movement state information of the target user corresponding to the movement data according to the set display special effect.
In the above embodiment, the movement state information can be displayed according to the set display effect, and the interactivity with the user can be enhanced.
In one possible embodiment, the movement status information includes one or more of the following information:
the position identification of the area passed by the target user, the stay time of the target user in each area, and the number of times of the target user passing through the same area.
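The three kinds of movement state information listed above can all be derived from a chronological trace of the target user's sightings. The following sketch assumes an invented `(timestamp, area_id)` trace format for illustration:

```python
def movement_status(trace):
    # trace: chronological (timestamp, area_id) sightings of the target user.
    order, passes, dwell = [], {}, {}
    prev, entered = None, None
    for t, area in trace:
        if area != prev:
            if prev is not None:
                # Accumulate the stay time in the area just left.
                dwell[prev] = dwell.get(prev, 0) + (t - entered)
            # Count each entry into an area as one pass through it.
            passes[area] = passes.get(area, 0) + 1
            if area not in order:
                order.append(area)  # position identifiers of areas passed
            prev, entered = area, t
    return order, dwell, passes

# The user visits A, moves to B, returns to A, then moves on to C.
order, dwell, passes = movement_status([(0, "A"), (10, "B"), (25, "A"), (30, "C")])
print(order, dwell, passes)
```

Here `order` gives the areas passed, `dwell` the stay time per area, and `passes` how many times the user passed through the same area.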
In one possible embodiment, after determining the movement data of the target user identified by the face image, the method further comprises:
acquiring movement data of a plurality of users including the target user;
determining at least one target movement data by using the movement data of the plurality of users;
and displaying the description information of the at least one target movement data.
Based on the above embodiment, target movement data can be screened out from the movement data of a plurality of users and its description information displayed, so that part of the movement data is highlighted, for example to introduce popular spots.
In a possible embodiment, the determining at least one target movement data by using the movement data of the plurality of users includes:
determining a plurality of types of movement data respectively representing different types of movement routes based on the movement data of the plurality of users;
determining heat (popularity) information of each type of movement data among the plurality of types of movement data;
and determining the at least one target moving data based on the heat information of each moving data.
In one possible embodiment, the heat information includes the amount of movement data characterizing each movement route;
the determining the at least one target movement data based on the heat information of each movement data comprises:
based on the quantity of movement data representing each movement route among the plurality of types of movement data, selecting at least one type of movement data whose quantity is larger than a set threshold value as the at least one target movement data; or arranging the quantities corresponding to the plurality of types of movement data from large to small, and selecting the movement data whose quantities rank in the top N as the at least one target movement data, wherein N is a positive integer.
In the above embodiment, the target movement data determined in this way is movement data whose corresponding movement route is travelled by a large number of people, so that hot routes can be displayed.
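The threshold and top-N selections just described can be sketched as follows, assuming each user's movement data has already been reduced to a route (a tuple of area identifiers) — an assumed representation, not one fixed by the disclosure:

```python
from collections import Counter

def select_target_routes(user_routes, threshold=None, top_n=None):
    # user_routes: user_id -> route (tuple of area ids the user passed through).
    counts = Counter(user_routes.values())
    if threshold is not None:
        # Keep routes travelled by more than `threshold` users.
        return [route for route, n in counts.items() if n > threshold]
    # Otherwise rank routes by user count, large to small, and keep the top N.
    return [route for route, _ in counts.most_common(top_n)]

routes = {"u1": ("A", "B"), "u2": ("A", "B"), "u3": ("A", "B"), "u4": ("C",)}
print(select_target_routes(routes, threshold=2))   # routes with more than 2 users
print(select_target_routes(routes, top_n=1))       # the single hottest route
```

`Counter.most_common` already sorts quantities from large to small, which matches the top-N variant of the claim.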
In one possible implementation, the description information of the target movement data includes a movement route corresponding to the target movement data, the number of users who pass through the movement route, and a time length taken for passing through the movement route.
In a second aspect, the present disclosure provides a movement state information display apparatus, including:
a first acquisition module, a second acquisition module, a determining module and a display module, wherein the first acquisition module is configured to acquire collected data of a first camera device and transmit the collected data to the second acquisition module, and the collected data comprises a collected face image and identification information of the first camera device;
the second acquisition module is used for acquiring the position description information corresponding to the identification information of the first camera device and transmitting the position description information to the determination module;
the determining module is used for determining the mobile data of the target user identified by the face image based on the position description information of the first camera device and transmitting the mobile data to the display module;
and the display module is used for displaying the movement state information of the target user by utilizing the movement data.
In a possible embodiment, the acquisition data further includes an acquisition time of the face image;
the determining module, when determining the movement data of the target user identified by the face image based on the position description information of the first camera device, is configured to:
when it is detected that historical movement data of the target user was obtained within a set time period before the acquisition time, update the historical movement data based on the position description information of the first camera device to obtain the movement data.
In one possible embodiment, the movement data comprises the historical movement data and the position description information of the first camera device, the historical movement data comprising position description information of at least one second camera device;
the second camera device is a camera device which collects the face image of the target user in a set time period before the collection time.
In one possible embodiment, the presentation module, when presenting the movement status information of the target user by using the movement data, is configured to:
and switching the displayed movement state information of the target user corresponding to the historical movement data into the movement state information of the target user corresponding to the movement data according to the set switching special effect.
In a possible embodiment, the acquisition data further includes an acquisition time of the face image;
the determining module, when determining the movement data of the target user identified by the face image based on the position description information of the first camera device, is configured to:
when it is detected that no historical movement data of the target user was obtained within a set time period before the acquisition time, take the position description information of the first camera device as the movement data.
In a possible implementation manner, when the movement data is used to display the movement state information of the target user, the display module is specifically configured to:
display the movement state information of the target user corresponding to the movement data according to the set display special effect.
In one possible embodiment, the movement status information includes one or more of the following information:
the position identification of the area passed by the target user, the stay time of the target user in each area, and the number of times of the target user passing through the same area.
In one possible embodiment, after determining the movement data of the target user identified by the face image, the determining module is further configured to:
acquiring movement data of a plurality of users including the target user;
determining at least one target movement data by using the movement data of the plurality of users;
the display module is further configured to:
and displaying the description information of the at least one target movement data.
In a possible embodiment, the determining module, when determining at least one target movement data using the movement data of the plurality of users, is configured to:
determining a plurality of types of movement data respectively representing different types of movement routes based on the movement data of the plurality of users;
determining heat (popularity) information of each type of movement data among the plurality of types of movement data;
and determining the at least one target moving data based on the heat information of each moving data.
In one possible embodiment, the heat information includes the amount of movement data characterizing each movement route;
the determining module, when determining the at least one target movement data based on the heat information of each type of movement data, is specifically configured to:
based on the quantity of movement data representing each movement route among the plurality of types of movement data, selecting at least one type of movement data whose quantity is larger than a set threshold value as the at least one target movement data; or arranging the quantities corresponding to the plurality of types of movement data from large to small, and selecting the movement data whose quantities rank in the top N as the at least one target movement data, wherein N is a positive integer.
In one possible implementation, the description information of the target movement data includes a movement route corresponding to the target movement data, the number of users who pass through the movement route, and a time length taken for passing through the movement route.
In a third aspect, the present disclosure provides an electronic device comprising: a processor, a memory and a bus, wherein the memory stores machine-readable instructions executable by the processor, the processor and the memory communicate via the bus when the electronic device is running, and the machine-readable instructions, when executed by the processor, perform the steps of the method for presenting mobile status information according to the first aspect or any one of the embodiments.
In a fourth aspect, the present disclosure provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method for presenting moving state information according to the first aspect or any one of the embodiments.
For the description of the effects of the movement state information display apparatus, the electronic device, and the computer-readable storage medium, reference is made to the description of the movement state information display method, which is not repeated herein.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. The drawings here are incorporated in and form a part of the specification; they illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive additional related drawings from them without inventive effort.
Fig. 1 is a schematic flow chart illustrating a method for displaying mobile status information according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram illustrating a mobile status information presentation of a target user according to an embodiment of the present disclosure;
fig. 3 illustrates a method for screening and displaying target mobile data according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram illustrating a description information presentation of target movement data provided by an embodiment of the present disclosure;
fig. 5 is a schematic diagram illustrating an architecture of a mobile status information presentation apparatus according to an embodiment of the present disclosure;
fig. 6 shows a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
First, an application scenario to which this solution applies is introduced. The present disclosure may be applied to an electronic device with data processing capability. The electronic device may be configured with a display device for showing data processing results, or may be externally connected to such a display device; the connection may be wired and/or wireless. The electronic device may also be connected with at least one camera device (again via a wired and/or wireless connection); the camera device collects face images and transmits them to the electronic device for processing, and after processing is completed the electronic device may display the results through its own or an external display device. The electronic device may be, for example, a mobile phone, a tablet computer, a smart television, a computer, and the like, which is not limited in the present application.
It should be noted that the face image included in the collected data in the method provided by the present disclosure is a face image acquired by the first camera device at a certain time; it is not necessarily identical to a face image of the target user acquired by another camera device, or to a face image of the same target user acquired by the first camera device at another time. For example, face images of the target user may be acquired from different shooting angles; during image recognition, processing such as feature extraction and recognition can determine that multiple face images correspond to the same target user.
It should be noted that, in the embodiments of the present disclosure, the first camera device is used for referring to a camera device corresponding to collected data currently being processed by an electronic device (e.g., a mobile phone, a tablet computer, a smart television, a computer, etc.), and the second camera device is used for referring to a camera device corresponding to collected data already processed by the electronic device (e.g., a mobile phone, a tablet computer, a smart television, a computer, etc.). In the embodiments of the present disclosure, all image pickup apparatuses including the first image pickup apparatus and/or the second image pickup apparatus may be collectively referred to as an image pickup apparatus without being particularly specified.
To facilitate understanding of the embodiments of the present disclosure, the method for displaying movement state information disclosed in the embodiments of the present disclosure is first described in detail.
Referring to fig. 1, a schematic flow chart of a method for displaying movement state information provided by an embodiment of the present disclosure includes the following steps:
S101, acquiring collected data of the first camera device, wherein the collected data comprises the collected face image and identification information of the first camera device.
And S102, acquiring position description information corresponding to the identification information of the first camera device.
And S103, determining the movement data of the target user identified by the face image based on the position description information of the first camera device.
And S104, displaying the movement state information of the target user by using the movement data.
In this method, the movement data of the target user identified by the face image collected by the first camera device can be determined based on the position description information of the first camera device, and the movement state information of the target user can be displayed using the movement data, thereby providing route guidance for other users.
Hereinafter, S101 to S104 will be described.
For S101:
the face images acquired by the first camera device can comprise faces of a plurality of users, the users comprise target users, and the face images can also comprise a plurality of face images acquired at different moments.
For S102:
In a possible embodiment, the correspondence between the identification information of each camera device and its position description information may be pre-stored. The position description information of a camera device may be description information of the position area where the camera device is deployed, for example an identifier of that area, such as an exit/entrance identifier or the identifier of a particular exhibition hall; the identifier of the camera device itself may also directly identify the position area where it is deployed, which is not limited in this application. For example, where an exhibition comprises a plurality of exhibition halls or exhibition areas, the identification information of each camera device and the identifier of the corresponding exhibition area or hall may be stored in advance.
After the collected data of the first camera device is acquired, the position description information corresponding to the identification information in the collected data may be searched in the correspondence relationship stored in advance.
The movement state information display method provided by the present disclosure can be applied to an electronic device with processing capability. After acquiring the collected data of the first camera device, the electronic device may directly look up, according to the identification information in the collected data, the position description information corresponding to the identification information of the first camera device. Alternatively, the electronic device may send the identification information of the first camera device to a server; the server searches for the corresponding position description information and, once found, sends it back to the electronic device.
For S103:
In one possible embodiment, the target user may be a user who has newly appeared in the position area where the first camera device is deployed. For example, the electronic device may determine the newly appearing user as follows: after acquiring the face images collected by each camera device, extract and record the image features of the collected face images; after receiving a new face image from any camera device, extract the image features of the newly collected face image and compare them with the recorded image features of users who passed through within a historical set time period; if the comparison is unsuccessful, determine that the user identified by those image features is a target user, i.e., a newly appearing user.
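The unsuccessful-comparison test above amounts to checking that the newly extracted feature is not similar to any recently recorded feature. A minimal sketch, using cosine similarity with a 0.8 cutoff — both the metric and the threshold are illustrative choices, not values specified by the disclosure:

```python
import math

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_new_user(feature, recent_features, threshold=0.8):
    # The comparison "fails" against every recorded feature from the recent
    # time window -> the face belongs to a newly appearing (target) user.
    return all(cosine(feature, f) < threshold for f in recent_features)

print(is_new_user([1.0, 0.0], [[0.0, 1.0]]))  # dissimilar to all: new user
print(is_new_user([1.0, 0.0], [[1.0, 0.0]]))  # matches a record: not new
```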
In another possible embodiment, the target user may also be a preset user, for example a well-known person announced in advance by the exhibition. The electronic device may determine the target user in the image captured by the first camera device as follows: determine preset user features of that user (such as face features, gait features, and the like), extract the user features of the users in the image collected by the first camera device, compare the extracted user features with the preset user features, and determine the user whose comparison succeeds as the target user. The user features may be, for example, face features, gait features, or pedestrian Re-identification (ReID) features.
In another possible embodiment, the target user may also be a user that meets certain attributes; the specific attribute may be a preset user attribute, and the user attribute may include at least one of attributes of gender, height, age, and the like, for example. For example, the electronic device may determine the target user in the image captured by the first camera device by: and extracting the user attribute of the user in the image acquired by the first camera device, judging whether the extracted user attribute accords with the set specific attribute, and if so, determining the user as the target user.
For example, in the case that the target user is a user who meets a certain attribute, the target user may include each user who meets the certain attribute, that is, the target user includes a plurality of users; the first detected user meeting a certain specific attribute may also be determined as the target user, i.e. the target user includes only one user.
In an embodiment, the acquisition data of the first camera device may further include the acquisition time of the face image.
After the position description information corresponding to the identification information of the first camera device is found, the user features of the target user identified by the face image collected by the first camera device can be extracted. Then, based on the extracted user features, the pre-stored historical movement data is searched to determine whether there is historical movement data corresponding to those user features within a set time period before the acquisition time.
The movement data includes the historical movement data and the position description information of the first camera device; the historical movement data includes the position description information of at least one second camera device, where a second camera device is a camera device that captured a face image of the target user within the set time period before the acquisition time.
Based on the acquisition time, the face image, and the position description information of the camera device included in the captured data, movement data indicating the association among the target user corresponding to the face image, the shooting area corresponding to the camera device, and the acquisition time can be obtained. That is, the movement data may include the time when the target user appears in the shooting area of a camera device and the time when the target user leaves that shooting area, where the time when the target user appears in the shooting area is the acquisition time of the first image captured by the camera device in which the target user appears, and the time when the target user leaves the shooting area is the acquisition time of the last image captured by the camera device in which the target user appears.
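A minimal sketch of deriving the appear/leave times per shooting area from per-frame detections (the `(camera_id, time)` record format is an assumption for illustration):

```python
def appear_leave_times(detections):
    # detections: (camera_id, acquisition_time) records in which the
    # target user was recognized, in arbitrary order.
    spans = {}
    for camera_id, t in detections:
        first, last = spans.get(camera_id, (t, t))
        # Earliest detection = time of appearing in the shooting area;
        # latest detection = time of leaving it.
        spans[camera_id] = (min(first, t), max(last, t))
    return spans  # camera_id -> (appear_time, leave_time)
```

A camera that detected the user only once yields equal appear and leave times, which a real system might widen by the frame interval.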
The determination of the movement data of the target user identified by the face image, based on the position description information of the first camera device, can be divided into the following cases depending on whether historical movement data of the target user was obtained within the preset time period before the acquisition time:
in the first case, when it is detected that historical movement data of the target user was obtained within the preset time period before the acquisition time, the historical movement data may be updated based on the position description information of the first camera device to obtain updated movement data.
If historical movement data of the target user was obtained within the preset time period before the acquisition time, this indicates that the target user has appeared in the image capture area of a camera device other than the current first camera device, that is, face images captured by other camera devices before the acquisition time include the target user. In this case, the historical movement data may be updated directly based on the position description information of the first camera device, and the updated movement data can describe the movement of the target user during the period from the preset time period before the acquisition time up to the acquisition time.
In the second case, when it is detected that no historical movement data of the target user was obtained within the set time period before the acquisition time, the position description information of the first camera device is used as the movement data.
If no historical movement data of the target user was obtained within the preset time period before the acquisition time, this indicates that the target user has not appeared in the image capture area of any other camera device, or that the face images captured by other camera devices before the acquisition time do not include the target user.
In one possible implementation, after the movement data of the target user identified by the face image is determined, the movement data may also be stored as the historical movement data of the target user. If another camera device captures a face image containing the target user after the acquisition time, the movement data of the target user is determined anew based on the most recently stored historical movement data and the position description information of that camera device.
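The two cases above, plus the re-storing step, can be sketched in one helper (the `(time, location)` tuple format and the sliding `window` parameter are illustrative assumptions):

```python
def determine_movement_data(history, camera_location, acq_time, window):
    # history: stored (time, location_description) entries for the user.
    # Case one: entries exist within `window` before the acquisition
    # time, so extend them with the first camera's location.
    # Case two: no recent history, so the first camera's location alone
    # becomes the movement data.
    recent = [(t, loc) for t, loc in history if 0 <= acq_time - t <= window]
    recent.append((acq_time, camera_location))
    return recent  # callers store this back as the new history
```

Each call both produces the current movement data and yields the value to persist for the next camera that sees the user.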
The movement data obtained based on the above embodiments includes not only the description information of the current location of the target user (i.e., the position description information of the first camera device) but also the historical movement data, so that information reflecting both the historical and the current movement state can be vividly presented.
For S104:
the movement state information includes one or more of the following information:
the position identification of the area passed by the target user, the stay time of the target user in each area, and the number of times of the target user passing through the same area.
The location identifier of the area through which the target user passes may be an identifier of a location area where the image capturing device that captures the image of the face of the target user is disposed (or the location area may also be referred to as a capturing area of the image capturing device), and may be an identifier of a certain exhibition hall, for example.
For example, for the stay duration of the target user in each area, the difference between the time when the target user appears in the area and the time when the target user leaves the area, as recorded in the movement data, may be used as the stay duration of the target user in that area.
The number of times that the target user leaves an area, as recorded in the movement data of the target user, may be determined as the number of times that the target user passes through that area.
For example, if the movement data records that the target user left area A at 10:00, 11:00, and 12:00, that is, the movement data includes three departures from area A, the number of times the target user passed through area A is 3.
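Both statistics above can be computed from the stored appear/leave pairs in one pass (the pair representation is an illustrative assumption):

```python
def dwell_and_pass_count(visits):
    # visits: (appear_time, leave_time) pairs for one area, taken from
    # the movement data; each pair is one pass through the area.
    total_dwell = sum(leave - appear for appear, leave in visits)
    return total_dwell, len(visits)
```

For the three passes in the example above, the pass count is 3 and the total dwell is the sum of the three per-visit durations.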
In a specific implementation, the device for displaying the movement state information of the target user may be the electronic device itself, if it is configured with a display device for displaying the processed data, or an external display device connected to the electronic device. For example, if a tablet computer is configured with a display device (i.e., the screen of the tablet computer), the tablet computer may display the movement state information of the target user on its own screen by using the movement data.
When the movement data is used to display the movement state information of the target user, if historical movement data of the target user was obtained within the preset time period before the acquisition time, the electronic device may currently be displaying the movement state information corresponding to the historical movement data. After the movement data of the target user is re-determined, the displayed movement state information corresponding to the historical movement data can be switched, with a set switching special effect, to the movement state information corresponding to the newly determined movement data.
In a specific implementation, the movement state information may be drawn and displayed by computer graphics, for example by displaying a movement route of the target user. The movement route may include at least one location point, and each location point may correspond to a location area (for example, an exhibition hall) in which a camera device is deployed (or, equivalently, the shooting area corresponding to that camera device).
For example, as shown in fig. 2, the points A, B, C, D, and E in fig. 2 identify the position description information of different camera devices in a preset location area. When the camera device corresponding to any one of the location points A, B, C, D, and E captures a face image containing the target user, the display state of that location point is changed. In fig. 2, the route connecting the location points A, B, and C is the historical movement route of the target user, drawn according to the movement state information corresponding to the historical movement data, and the camera devices corresponding to A, B, and C are second camera devices. If the camera device corresponding to location point D is the first camera device, then after the electronic device obtains the movement data by processing the captured data, it may display the movement state information of the target user by using the movement data; for example, a connecting line may be drawn between C and D, and the display state of location point D may be changed.
In some embodiments, when the movement state information of the target user is displayed, the identification information of the target user and the facial image of the target user may also be displayed at the same time, where the identification information of the target user may be previously assigned to the target user, and the facial image of the target user may be extracted from the facial image acquired by the first camera device.
For example, the display state of the position point may be changed by lighting the position point or changing the color state of the position point, which is not limited in the present application.
After changing the state information of the location point, the movement state information of the target user corresponding to the location point may also be displayed at the location point, for example, description information (for example, an exhibition hall identifier, or an exit/entrance identifier, etc.) of a location area corresponding to the location point, the number of times that the target user passes through the location area corresponding to the location point, and the staying time of the target user in the location area corresponding to the location point may be displayed.
Continuing with the example of fig. 2, at location points A, B, C, and D, the description information of the location area corresponding to each location point, the number of times the target user passed through that location area (i.e., the number of visits in the figure), and the stay duration of the target user in that location area may be shown, respectively.
When the movement data is used to display the movement state information of the target user, if no historical movement data of the target user was obtained within the preset time period before the acquisition time, the movement state information of the target user corresponding to the movement data may be displayed with a set display special effect.
Continuing with the example of fig. 2, if the imaging device corresponding to location point a is the first imaging device, the display state of location point a may be changed in the device displaying the state information, for example, location point a may be lit, or the color state of location point a may be changed.
The reason why no historical movement data of the target user was obtained within the time period before the acquisition time may be that the target user did not appear in the preset location area before the acquisition time. In this case, the first camera device may be a camera device at an entrance of the preset location area, and when the target user enters the preset location area through the entrance, the first camera device captures the face image of the target user.
In some embodiments of the present disclosure, after the movement data of the target user identified by the face image is determined, target movement data may further be filtered out of the movement data of a plurality of users, and the description information of the target movement data may be displayed. That is, the target movement data may be screened based on the movement data of the plurality of users, and the description information of the target movement data may be presented as a reference for routes that users may select or routes that attract attention.
Specifically, as shown in fig. 3, the method for screening and displaying target mobile data provided by the embodiment of the present disclosure includes the following steps:
S301, movement data of a plurality of users including the target user is obtained.
The plurality of users including the target user are users appearing in a preset position area, and the preset position area comprises position areas where a plurality of camera devices are respectively deployed. In one possible implementation, the movement data corresponding to each user may be obtained through analysis based on the acquired data acquired from the camera devices respectively deployed in each location area. The specific manner of determining the movement data can be seen in the above embodiments.
S302, at least one target movement data is determined by using the movement data of a plurality of users.
Specifically, a plurality of types of movement data respectively representing different movement routes may first be determined based on the movement data of the plurality of users; then the heat information of each type of movement data may be determined; and finally the at least one type of target movement data may be determined based on the heat information of each type of movement data.
Here, the heat information includes the quantity of movement data characterizing each movement route, and the quantity of each type of movement data may represent the number of users who passed through the movement route characterized by that movement data.
In some embodiments of the application, based on the quantity of movement data characterizing each movement route among the plurality of types of movement data, the movement data whose quantity is greater than a set threshold may be selected as the at least one type of target movement data; alternatively, the quantities corresponding to the plurality of types of movement data may be arranged from large to small, and the movement data whose quantities rank in the top N may be selected as the at least one type of target movement data, where N is a positive integer.
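The two selection strategies above (threshold and top-N) can be sketched as follows, assuming for illustration that each user's route is encoded as a string and that identical strings represent the same type of movement route:

```python
from collections import Counter

def target_movement_routes(user_routes, threshold=None, top_n=None):
    # user_routes: one route string per user, e.g. "A-C-B-A".
    heat = Counter(user_routes)  # route -> number of users (its "heat")
    if threshold is not None:
        # Keep routes whose quantity exceeds the set threshold.
        return [route for route, n in heat.items() if n > threshold]
    # Otherwise keep the top-N routes by quantity.
    return [route for route, _ in heat.most_common(top_n)]
```

A real system would likely match routes approximately (e.g. ignoring short detours) rather than by exact string equality.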
S303, displaying the description information of the target movement data by using at least one type of target movement data.
The description information of the target movement data includes the movement route corresponding to the target movement data, the number of users who passed through the movement route, and the time taken to pass through the movement route.
When the description information of the target movement data is presented, for example as shown in fig. 4, the preset location area is an exhibition hall with five entrances A, B, C, D, and E, each entrance being provided with an exhibition point. In fig. 4, the entrances of the exhibition hall are arranged vertically, and the order in which users reach the exhibition points is arranged horizontally. There are two types of target movement data, corresponding to movement route 1 and movement route 2. Taking movement route 1 as an example, the user enters the exhibition hall from A, then reaches the exhibition point at entrance C, then the exhibition point at entrance B, and finally returns to the exhibition point at entrance A. The thickness or color of the different polylines indicates the number of people passing through a route; as shown in fig. 4, since the number of people passing through movement route 1 is greater than the number passing through movement route 2, movement route 1 is drawn thicker than movement route 2. In addition, the attention drawn to a movement route with high heat can be increased, for example by raising the brightness of movement route 1.
In another example, description information of the target movement data may also be presented in the form of table 1 below.
TABLE 1
| Route | Number of visitors | Average stay time |
|---|---|---|
| A-B-C-A-E | 688 | 01:30:24 |
| A-C-B-A | 567 | 00:54:30 |
| B-A-C-D-E | 432 | 01:47:30 |
| A-B-A | 210 | 00:30:00 |
| A-E-D-C-B | 199 | 01:54:30 |
The average stay time in table 1 above may be the average stay time of users in each exhibition hall along the route, or the average total time users stay on the route as a whole.
Based on the above embodiments, the target movement data can be screened out from the movement data of a plurality of users, and the description information of the target movement data can be displayed, so that a selected part of the movement data is displayed with emphasis, for example to introduce hot routes.
It will be understood by those skilled in the art that, in the method of the present invention, the order in which the steps are written does not imply a strict order of execution or constitute any limitation on the implementation; the specific order of execution of the steps should be determined by their functions and possible internal logic.
Based on the same concept, an embodiment of the present disclosure further provides a mobile status information display apparatus, as shown in fig. 5, which is an architecture schematic diagram of the mobile status information display apparatus provided in the embodiment of the present disclosure, and includes a first obtaining module 501, a second obtaining module 502, and a determining module 503, specifically:
the system comprises a first acquisition module 501, a second acquisition module 502 and a display module, wherein the first acquisition module is used for acquiring acquisition data of a first camera device and transmitting the acquisition data to the second acquisition module, and the acquisition data comprises an acquired face image and identification information of the first camera device;
a second obtaining module 502, configured to obtain location description information corresponding to the identification information of the first camera, and transmit the location description information to a determining module 503;
the determining module 503 is configured to determine, based on the location description information of the first camera, the movement data of the target user identified by the face image, and transmit the movement data to the displaying module;
and the display module is used for displaying the movement state information of the target user by utilizing the movement data.
In a possible embodiment, the acquisition data further includes an acquisition time of the face image;
the determining module 503, when determining the movement data of the user identity identified by the face image based on the location description information of the first camera, is configured to:
and under the condition that the historical movement data of the target user is obtained within a set time period before the acquisition time is detected, updating the historical movement data based on the position description information of the first camera device to obtain the movement data.
In one possible embodiment, the movement data comprises the historical movement data and location description information of the first camera, the historical movement data comprising location description information of at least one second camera;
the second camera device is a camera device which collects the face image of the target user in a set time period before the collection time.
In one possible embodiment, the presentation module, when presenting the movement status information of the target user by using the movement data, is configured to:
and switching the displayed movement state information of the target user corresponding to the historical movement data into the movement state information of the target user corresponding to the movement data according to the set switching special effect.
In a possible embodiment, the acquisition data further includes an acquisition time of the face image;
the determining module 503, when determining the movement data of the target user identified by the face image based on the location description information of the first camera, is configured to:
and when the situation that the historical movement data of the target user is not obtained in a set time period before the acquisition time is detected, taking the position description information of the first camera device as the movement data.
In a possible implementation manner, when the movement data is used to display the movement state information of the target user, the display module is specifically configured to:
and displaying the moving state information of the target user corresponding to the moving data according to the set display special effect.
In one possible embodiment, the movement status information includes one or more of the following information:
the position identification of the area passed by the target user, the stay time of the target user in each area, and the number of times of the target user passing through the same area.
In a possible implementation, the determining module 503, after determining the movement data of the target user identified by the face image, is further configured to:
acquiring mobile data of a plurality of users including the target user;
determining at least one target movement data by using the movement data of the plurality of users;
the display module is further configured to:
and displaying the description information of the target movement data by utilizing the at least one type of target movement data.
In a possible implementation manner, the determining module 503, when determining at least one target movement data by using the movement data of the plurality of users, is configured to:
determining a plurality of types of movement data respectively representing different types of movement routes based on the movement data of the plurality of users;
determining heat information of each mobile data in the plurality of mobile data;
and determining the at least one target moving data based on the heat information of each moving data.
In one possible embodiment, the heat information includes the amount of movement data characterizing each movement route;
the determining module 503, when determining the at least one target movement data based on the heat information of each type of movement data, is specifically configured to:
and selecting at least one type of mobile data with the quantity larger than a set threshold value as the at least one type of mobile data based on the quantity of the mobile data representing each type of mobile route in the plurality of types of mobile data, or arranging the quantities respectively corresponding to the plurality of types of mobile data from large to small, and selecting the mobile data with the quantity arranged at the top N correspondingly as the at least one type of mobile data, wherein N is a positive integer.
In one possible implementation, the description information of the target movement data includes a movement route corresponding to the target movement data, the number of users who pass through the movement route, and a time length taken for passing through the movement route.
In some embodiments, the functions of the apparatus provided in the embodiments of the present disclosure, or the modules it includes, may be used to execute the method described in the above method embodiments; for specific implementation, refer to the description of the above method embodiments, which is not repeated here for brevity.
Based on the same technical concept, an embodiment of the present disclosure further provides an electronic device. Referring to fig. 6, a schematic structural diagram of an electronic device provided in the embodiment of the present disclosure includes a processor 601, a memory 602, and a bus 603. The memory 602 is used for storing execution instructions and includes an internal memory 6021 and an external memory 6022. The internal memory 6021, also referred to as main memory, temporarily stores operation data in the processor 601 and data exchanged with the external memory 6022 such as a hard disk; the processor 601 exchanges data with the external memory 6022 through the internal memory 6021. When the electronic device 600 operates, the processor 601 communicates with the memory 602 through the bus 603, so that the processor 601 executes the following instructions:
acquiring collected data of a first camera device, wherein the collected data comprises a collected face image and identification information of the first camera device;
acquiring position description information corresponding to the identification information of the first camera device;
determining the movement data of the target user identified by the face image based on the position description information of the first camera device;
and displaying the moving state information of the target user by using the moving data.
The specific processing procedure executed by the processor 601 may refer to the description in the above method embodiment, and is not further described here.
In addition, the embodiment of the present disclosure further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the moving state information presentation method in the above method embodiment are executed.
The computer program product of the moving state information display method provided in the embodiments of the present disclosure includes a computer-readable storage medium storing a program code, where instructions included in the program code may be used to execute the steps of the moving state information display method described in the above method embodiments, which may be referred to in the above method embodiments specifically, and are not described herein again.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above are only specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present disclosure, and shall be covered by the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Claims (10)
1. A method for displaying mobile state information is characterized by comprising the following steps:
acquiring collected data of a first camera device, wherein the collected data comprises a collected face image and identification information of the first camera device;
acquiring position description information corresponding to the identification information of the first camera device;
determining the movement data of the target user identified by the face image based on the position description information of the first camera device;
and displaying the moving state information of the target user by using the moving data.
2. The method of claim 1, wherein the acquisition data further comprises an acquisition time of the face image;
the determining the movement data of the target user identified by the face image based on the position description information of the first camera device comprises:
and under the condition that the historical movement data of the target user is obtained within a set time period before the acquisition time is detected, updating the historical movement data based on the position description information of the first camera device to obtain the movement data.
3. The method of claim 2, wherein the movement data comprises the historical movement data and location description information for the first camera, the historical movement data comprising location description information for at least one second camera;
the second camera device is a camera device which collects the face image of the target user in a set time period before the collection time.
4. The method according to claim 2 or 3, wherein the presenting the movement status information of the target user by using the movement data comprises:
and switching the displayed movement state information of the target user corresponding to the historical movement data into the movement state information of the target user corresponding to the movement data according to the set switching special effect.
5. The method of any of claims 1 to 4, wherein the acquisition data further comprises an acquisition time of the face image;
the determining the movement data of the target user identified by the face image based on the position description information of the first camera device comprises:
and when the situation that the historical movement data of the target user is not obtained in a set time period before the acquisition time is detected, taking the position description information of the first camera device as the movement data.
6. The method of claim 5, wherein the presenting the movement status information of the target user using the movement data comprises:
and displaying the moving state information of the target user corresponding to the moving data according to the set display special effect.
7. The method according to any of claims 1 to 6, wherein the movement state information comprises one or more of the following:
the position identification of the area passed by the target user, the stay time of the target user in each area, and the number of times of the target user passing through the same area.
8. A movement state information display device, comprising a first acquisition module, a second acquisition module, a determining module, and a display module, wherein:
the first acquisition module is configured to acquire acquisition data of a first camera device and transmit the acquisition data to the second acquisition module, the acquisition data comprising an acquired face image and identification information of the first camera device;
the second acquisition module is configured to acquire position description information corresponding to the identification information of the first camera device and transmit the position description information to the determining module;
the determining module is configured to determine movement data of the target user identified by the face image based on the position description information of the first camera device and transmit the movement data to the display module;
and the display module is configured to display the movement state information of the target user by using the movement data.
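The four modules of claim 8 form a simple pipeline: acquisition, position lookup, movement-data determination, display. A hypothetical sketch of that pipeline as a single class (all names invented; the patent does not prescribe an implementation):

```python
class MovementStateDisplayDevice:
    """Sketch of the claim-8 module pipeline with invented names."""

    def __init__(self, camera_positions: dict[str, str]):
        # second module's lookup table:
        # camera identification info -> position description info
        self.camera_positions = camera_positions
        self.history: dict[str, list[str]] = {}

    def on_capture(self, user_id: str, camera_id: str) -> list[str]:
        # first acquisition module: acquisition data arrives
        # (the face image is assumed already matched to user_id)
        position = self.camera_positions[camera_id]        # second module
        trail = self.history.setdefault(user_id, [])       # determining module
        trail.append(position)
        return self.display(user_id, trail)                # display module

    def display(self, user_id: str, trail: list[str]) -> list[str]:
        # display module: render movement state information
        # (here it is simply returned instead of drawn on screen)
        return trail
```

Passing data module-to-module, as the claim recites, keeps the identification-to-position mapping in one place, so cameras can be relocated by editing the lookup table alone.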
9. An electronic device, comprising: a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate through the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the movement state information display method according to any one of claims 1 to 7.
10. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the movement state information display method according to any one of claims 1 to 7.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911090206.7A CN110765984A (en) | 2019-11-08 | 2019-11-08 | Mobile state information display method, device, equipment and storage medium |
KR1020217015808A KR20210077760A (en) | 2019-11-08 | 2020-07-22 | Method and apparatus for displaying movement state information, electronic device and recording medium |
JP2021528395A JP2022510135A (en) | 2019-11-08 | 2020-07-22 | Movement status information Display methods, devices, electronic devices, and recording media |
PCT/CN2020/103457 WO2021088417A1 (en) | 2019-11-08 | 2020-07-22 | Movement state information display method and apparatus, electronic device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911090206.7A CN110765984A (en) | 2019-11-08 | 2019-11-08 | Mobile state information display method, device, equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110765984A true CN110765984A (en) | 2020-02-07 |
Family
ID=69336951
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911090206.7A Pending CN110765984A (en) | 2019-11-08 | 2019-11-08 | Mobile state information display method, device, equipment and storage medium |
Country Status (4)
Country | Link |
---|---|
JP (1) | JP2022510135A (en) |
KR (1) | KR20210077760A (en) |
CN (1) | CN110765984A (en) |
WO (1) | WO2021088417A1 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111461031A (en) * | 2020-04-03 | 2020-07-28 | 银河水滴科技(北京)有限公司 | Object recognition system and method |
CN111540026A (en) * | 2020-03-24 | 2020-08-14 | 北京三快在线科技有限公司 | Dynamic line drawing method and device, electronic equipment and storage medium |
CN112037246A (en) * | 2020-08-26 | 2020-12-04 | 睿住科技有限公司 | Monitoring system, community movement information measuring method, measuring device and storage medium |
WO2021088417A1 (en) * | 2019-11-08 | 2021-05-14 | 北京市商汤科技开发有限公司 | Movement state information display method and apparatus, electronic device and storage medium |
CN112987916A (en) * | 2021-02-06 | 2021-06-18 | 北京智扬天地展览服务有限公司 | Automobile exhibition stand interaction system and method |
CN114766018A (en) * | 2020-11-12 | 2022-07-19 | 京东方科技集团股份有限公司 | Method, device and storage medium for simultaneously displaying state information of multiple devices |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113918838B (en) * | 2021-11-12 | 2024-04-12 | 合众新能源汽车股份有限公司 | Target crowd identification method, system and readable medium based on stay data |
CN116521119B (en) * | 2023-06-30 | 2023-09-12 | 中卫信软件股份有限公司 | Communication method of hardware equipment and browser in information system |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011145114A (en) * | 2010-01-13 | 2011-07-28 | Alpine Electronics Inc | Running track display for navigation device |
CN107389078A (en) * | 2017-06-23 | 2017-11-24 | 咪咕互动娱乐有限公司 | A kind of route recommendation method, apparatus and computer-readable recording medium |
EP3419283A1 (en) * | 2017-06-21 | 2018-12-26 | Axis AB | System and method for tracking moving objects in a scene |
CN109711249A (en) * | 2018-11-12 | 2019-05-03 | 平安科技(深圳)有限公司 | Personage's motion profile method for drafting, device, computer equipment and storage medium |
CN109960969A (en) * | 2017-12-22 | 2019-07-02 | 杭州海康威视数字技术股份有限公司 | The method, apparatus and system that mobile route generates |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3584334B2 (en) * | 1997-12-05 | 2004-11-04 | オムロン株式会社 | Human detection tracking system and human detection tracking method |
GB2404466B (en) * | 2003-07-31 | 2007-07-18 | Hewlett Packard Development Co | Method and apparatus for providing information about a real-world space |
JP5368723B2 (en) * | 2008-04-09 | 2013-12-18 | キヤノン株式会社 | Imaging apparatus and control method thereof |
US9741191B1 (en) * | 2016-03-31 | 2017-08-22 | Kyocera Document Solutions Inc. | System and method for recording waypoint images along a route |
CN105933650A (en) * | 2016-04-25 | 2016-09-07 | 北京旷视科技有限公司 | Video monitoring system and method |
JP7081081B2 (en) * | 2017-03-03 | 2022-06-07 | 日本電気株式会社 | Information processing equipment, terminal equipment, information processing method, information output method, customer service support method and program |
CN107452027A (en) * | 2017-07-29 | 2017-12-08 | 安徽博威康信息技术有限公司 | A kind of target person method for security protection based on multi-cam monitoring |
CN110533553B (en) * | 2018-05-25 | 2023-04-07 | 阿里巴巴集团控股有限公司 | Service providing method and device |
CN109886078B (en) * | 2018-12-29 | 2022-02-18 | 华为技术有限公司 | Retrieval positioning method and device for target object |
CN110765984A (en) * | 2019-11-08 | 2020-02-07 | 北京市商汤科技开发有限公司 | Mobile state information display method, device, equipment and storage medium |
CN110851646B (en) * | 2019-11-18 | 2020-11-24 | 嵊州市万睿科技有限公司 | Working efficiency statistical method for intelligent park |
2019
- 2019-11-08 CN CN201911090206.7A patent/CN110765984A/en active Pending

2020
- 2020-07-22 JP JP2021528395A patent/JP2022510135A/en active Pending
- 2020-07-22 WO PCT/CN2020/103457 patent/WO2021088417A1/en active Application Filing
- 2020-07-22 KR KR1020217015808A patent/KR20210077760A/en not_active Application Discontinuation
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011145114A (en) * | 2010-01-13 | 2011-07-28 | Alpine Electronics Inc | Running track display for navigation device |
EP3419283A1 (en) * | 2017-06-21 | 2018-12-26 | Axis AB | System and method for tracking moving objects in a scene |
CN107389078A (en) * | 2017-06-23 | 2017-11-24 | 咪咕互动娱乐有限公司 | A kind of route recommendation method, apparatus and computer-readable recording medium |
CN109960969A (en) * | 2017-12-22 | 2019-07-02 | 杭州海康威视数字技术股份有限公司 | The method, apparatus and system that mobile route generates |
CN109711249A (en) * | 2018-11-12 | 2019-05-03 | 平安科技(深圳)有限公司 | Personage's motion profile method for drafting, device, computer equipment and storage medium |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021088417A1 (en) * | 2019-11-08 | 2021-05-14 | 北京市商汤科技开发有限公司 | Movement state information display method and apparatus, electronic device and storage medium |
CN111540026A (en) * | 2020-03-24 | 2020-08-14 | 北京三快在线科技有限公司 | Dynamic line drawing method and device, electronic equipment and storage medium |
CN111461031A (en) * | 2020-04-03 | 2020-07-28 | 银河水滴科技(北京)有限公司 | Object recognition system and method |
CN111461031B (en) * | 2020-04-03 | 2023-10-24 | 银河水滴科技(宁波)有限公司 | Object recognition system and method |
CN112037246A (en) * | 2020-08-26 | 2020-12-04 | 睿住科技有限公司 | Monitoring system, community movement information measuring method, measuring device and storage medium |
CN114766018A (en) * | 2020-11-12 | 2022-07-19 | 京东方科技集团股份有限公司 | Method, device and storage medium for simultaneously displaying state information of multiple devices |
CN112987916A (en) * | 2021-02-06 | 2021-06-18 | 北京智扬天地展览服务有限公司 | Automobile exhibition stand interaction system and method |
Also Published As
Publication number | Publication date |
---|---|
JP2022510135A (en) | 2022-01-26 |
KR20210077760A (en) | 2021-06-25 |
WO2021088417A1 (en) | 2021-05-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110765984A (en) | Mobile state information display method, device, equipment and storage medium | |
CN106650671B (en) | Face recognition method, device and system | |
CN110942055A (en) | State identification method, device and equipment for display area and storage medium | |
JP2013157984A (en) | Method for providing ui and video receiving apparatus using the same | |
KR20190106347A (en) | SYSTEM FOR PROVIDING CUSTOMIZED WELCOME DATA BY RECOGNIZING CUSTOMER USING IoT DEVICES | |
CN111640169A (en) | Historical event presenting method and device, electronic equipment and storage medium | |
CN111126288B (en) | Target object attention calculation method, target object attention calculation device, storage medium and server | |
CN113630721A (en) | Method and device for generating recommended tour route and computer readable storage medium | |
JP5423740B2 (en) | Video providing apparatus, video using apparatus, video providing system, video providing method, and computer program | |
CN113591663A (en) | Exhibition data visualization information analysis system and analysis method | |
CN113869115A (en) | Method and system for processing face image | |
EP3570207A1 (en) | Video cookies | |
CN112699159A (en) | Data display method, device and equipment | |
KR20100005960A (en) | Information system for fashion | |
JP2018151720A (en) | Vacant seat detection program, vacant seat detection device, and vacant seat detection method | |
CN111640190A (en) | AR effect presentation method and apparatus, electronic device and storage medium | |
JP2010074628A (en) | Intercom system | |
CN111260537A (en) | Image privacy protection method and device, storage medium and camera equipment | |
EP3217320A1 (en) | Person verification system and person verification method | |
CN115223085A (en) | Flow adjustment method and device for risk personnel, electronic equipment and storage medium | |
JP4561400B2 (en) | Monitoring device | |
JP2018151840A (en) | System, method and program for collation | |
CN111640186A (en) | AR special effect generation method and device for building, electronic device and storage medium | |
WO2016035632A1 (en) | Data processing device, data processing system, data processing method, and program | |
JP7371806B2 (en) | Information processing device, information processing method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
REG | Reference to a national code | | Ref country code: HK; Ref legal event code: DE; Ref document number: 40014901; Country of ref document: HK |
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20200207 |