CN111401206A - Panorama sharing method, system, device and medium - Google Patents


Info

Publication number
CN111401206A
Authority
CN
China
Prior art keywords
panorama, target object, target, sharing, shared
Prior art date
Legal status
Pending
Application number
CN202010166253.1A
Other languages
Chinese (zh)
Inventors
姚志强
周曦
朱鹏
Current Assignee
Hengrui Chongqing Artificial Intelligence Technology Research Institute Co ltd
Original Assignee
Hengrui Chongqing Artificial Intelligence Technology Research Institute Co ltd
Priority date
Filing date
Publication date
Application filed by Hengrui Chongqing Artificial Intelligence Technology Research Institute Co., Ltd.
Priority to CN202010166253.1A
Publication of CN111401206A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; localisation; normalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/94: Hardware or software architectures specially adapted for image or video understanding
    • G06V10/95: Hardware or software architectures specially adapted for image or video understanding structured as a network, e.g. client-server architectures

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The invention provides a panorama sharing method, system, device and medium, comprising the following steps: acquiring consecutive multi-frame images, and performing quality evaluation on one or more target objects in each frame of image to obtain a quality evaluation result; determining, based on the quality evaluation result, an output target and the panorama corresponding to the output target from the multi-frame images containing the target object, and performing sharing processing to obtain the currently shared panorama. The invention can effectively reduce memory consumption, reduce device cost, and ensure the quality of the output snapshot images.

Description

Panorama sharing method, system, device and medium
Technical Field
The present invention relates to the field of image processing, and in particular, to a panorama sharing method, system, device, and medium.
Background
In front-end intelligent face recognition devices such as cameras and gate terminals, memory consumption is cost-sensitive. During face capture, in order to obtain the best snapshot of each person's face, the best face captured so far is cached until that person leaves the camera's field of view. Each face must also cache its corresponding panorama, and in general one panorama occupies 1920 × 1080 × 3 bytes of memory. Existing methods store a separate panorama in memory for each face, so memory consumption grows linearly with pedestrian traffic, directly increasing the memory requirement and the cost of the device.
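The memory growth described above can be sketched with a back-of-envelope calculation. The 1920 × 1080 × 3-byte panorama size comes from the text; the face counts and sharing ratio below are illustrative assumptions, not figures from the patent:

```python
# Back-of-envelope memory estimate: one uncompressed 1080p panorama cached
# per face (existing approach) vs. several faces sharing one panorama
# (the approach this patent proposes). Figures below are illustrative.

PANORAMA_BYTES = 1920 * 1080 * 3  # ~6.2 MB per uncompressed frame (from the text)

def per_face_memory(num_faces: int) -> int:
    """Existing approach: a separate panorama is cached for every face."""
    return num_faces * PANORAMA_BYTES

def shared_memory(num_faces: int, faces_per_panorama: int) -> int:
    """Shared approach: several output targets reference one panorama."""
    panoramas = -(-num_faces // faces_per_panorama)  # ceiling division
    return panoramas * PANORAMA_BYTES

# e.g. 40 faces in view, 4 faces sharing each panorama:
# 40 cached frames shrink to 10, a 4x memory reduction.
```

With 4 faces per shared panorama, memory for the panoramas drops by roughly that same factor, which is the linear-growth problem the Background describes.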
Disclosure of Invention
In view of the problems in the prior art, the present invention provides a panorama sharing method, system, device and medium, mainly solving the problem that panorama storage occupies too much memory when pedestrian traffic is heavy.
In order to achieve the above and other objects, the present invention adopts the following technical solutions.
A panorama sharing method comprises the following steps:
acquiring continuous multi-frame images, and performing quality evaluation on one or more target objects in each frame of image to acquire a quality evaluation result;
and determining an output target and a panoramic image corresponding to the output target from the multi-frame image containing the target object based on the quality evaluation result, and performing sharing processing to obtain the currently shared panoramic image.
Optionally, multiple quality scores of the same target object in the continuous multi-frame images are obtained, and the target object with the highest quality score is screened out to serve as an output target.
Optionally, after determining an output target and a panorama corresponding to the output target from a multi-frame image including a target object based on the quality evaluation result, the method further includes:
and when two or more than two output targets are positioned in the same panorama, sharing the output targets, and determining that the panorama is the currently shared panorama of the output targets.
Optionally, when two or more output targets are located in the same panorama, sharing the output targets, and after determining that the panorama is a currently shared panorama of the output targets, the method further includes:
and if one or more shared panoramas are stored in the N frames before the current shared panoramas, sharing the one or more stored shared panoramas, so that the current shared panoramas comprise output targets in the one or more stored shared panoramas.
Optionally, a shooting interval is set, a video image in the shooting interval is acquired, and the video image is subjected to framing processing to acquire the continuous multi-frame image.
Optionally, the target object in the continuous multi-frame images is detected, and a plurality of target objects are obtained.
Optionally, a plurality of the target objects detected in the video image are tracked, and positions of the plurality of the target objects in the video image are distinctively marked.
Optionally, each different target object is assigned a unique code for the distinguishing mark.
Optionally, after framing, the same unique code is assigned to the same target object in the continuous multi-frame images.
Optionally, when quality evaluation is performed, acquiring the same target object in the continuous multi-frame images according to the unique code, and performing quality scoring on the same target object in each frame of image;
and comparing the quality scoring results of the same target object to obtain the target object with the highest quality score.
Optionally, the shared panoramic image is stored in a memory after being compressed.
Optionally, an image storage time limit is set, and when the image storage time limit is exceeded, the output target and the corresponding panorama are evicted from the memory.
Optionally, the algorithm for detecting the target object includes the YOLO algorithm, the SSD algorithm, or the Faster R-CNN algorithm.
Optionally, the algorithm for tracking the target object includes the SORT algorithm or the V-IOU algorithm.
Optionally, the target object includes a human face, a pedestrian, or a vehicle.
A panorama sharing system comprising:
the quality evaluation module is used for acquiring continuous multi-frame images, evaluating the quality of one or more target objects in each frame of image and acquiring a quality evaluation result;
and the shared response module is used for determining an output target and a panoramic image corresponding to the output target from the multi-frame image containing the target object based on the quality evaluation result, and performing shared processing to obtain the currently shared panoramic image.
Optionally, the quality evaluation module includes a quality screening unit, configured to obtain multiple quality scores of the same target object in the consecutive multiple frames of images, and screen out a target object with a highest quality score as an output target.
Optionally, the system comprises a shared panorama obtaining module, configured to, after determining an output target and a panorama corresponding to the output target from a multi-frame image including a target object based on the quality evaluation result:
and when two or more than two output targets are positioned in the same panorama, sharing the output targets, and determining that the panorama is the currently shared panorama of the output targets.
Optionally, the system comprises a history panorama sharing module, configured to, when two or more output targets are located in the same panorama, share the output targets, and after determining that the panorama is the currently shared panorama of the output targets:
and if one or more shared panoramas are stored in the N frames before the current shared panoramas, sharing the one or more stored shared panoramas, so that the current shared panoramas comprise output targets in the one or more stored shared panoramas.
Optionally, the system comprises a shooting interval setting module, configured to set a shooting interval, acquire a video image in the shooting interval, and perform framing processing on the video image to acquire the continuous multi-frame image.
Optionally, the system includes a target detection module, configured to detect the target object in the consecutive multi-frame images, and acquire a plurality of target objects.
Optionally, a target tracking module is included, configured to track a plurality of target objects detected in the video image, and perform a distinctive mark on positions of the plurality of target objects in the video image.
Optionally, a marking module is included, configured to assign a unique code to each different target object for the distinctive marking.
Optionally, the system further comprises a mark splitting module, configured to allocate the same unique code to the same target object in the consecutive multi-frame images after the framing processing.
Optionally, when quality evaluation is performed, acquiring the same target object in the continuous multi-frame images according to the unique code, and performing quality scoring on the same target object in each frame of image;
and comparing the quality scoring results of the same target object to obtain the target object with the highest quality score.
Optionally, the system comprises a cache module, configured to store the shared panorama after image compression in a memory.
Optionally, the system comprises a storage time limit setting module, configured to set an image storage time limit and, when the image storage time limit is exceeded, evict the output target and the corresponding panorama from the memory.
Optionally, the algorithm for detecting the target object includes the YOLO algorithm, the SSD algorithm, or the Faster R-CNN algorithm.
Optionally, the algorithm for tracking the target object includes the SORT algorithm or the V-IOU algorithm.
Optionally, the target object includes a human face, a pedestrian, or a vehicle.
An apparatus, comprising:
one or more processors; and
one or more machine readable media having instructions stored thereon that, when executed by the one or more processors, cause the apparatus to perform the panorama sharing method.
One or more machine readable media having instructions stored thereon that, when executed by one or more processors, cause an apparatus to perform the panorama sharing method described herein.
As described above, the panorama sharing method, system, device and medium of the present invention have the following advantageous effects.
The invention can provide the panoramic image of the target object in the snapshot image, and can effectively reduce the memory consumption through the sharing of the panoramic image, thereby reducing the equipment cost; target objects with poor quality can be effectively filtered through quality evaluation, the data volume participating in recognition is reduced, and the recognition precision and efficiency are improved.
Drawings
Fig. 1 is a flowchart of a panorama sharing method according to an embodiment of the present invention.
Fig. 2 is a block diagram of a panorama sharing system according to an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a terminal device in an embodiment of the present invention.
Fig. 4 is a schematic structural diagram of a terminal device in another embodiment of the present invention.
Fig. 5 is a flowchart illustrating a process of obtaining a shared panorama according to an embodiment of the present invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the components related to the present invention are only shown in the drawings rather than drawn according to the number, shape and size of the components in actual implementation, and the type, quantity and proportion of the components in actual implementation may be changed freely, and the layout of the components may be more complicated.
Referring to fig. 1, the present invention provides a panorama sharing method, including steps S01-S02.
In step S01, consecutive multi-frame images are acquired, quality evaluation is performed on one or more target objects in each frame of image, and a quality evaluation result is acquired.
in one embodiment, a video image may be captured by a capturing device, such as an access camera. The shooting interval can be set when the video image is captured, one section of the video image in the interval time is obtained, and then the video image is divided into multiple sections to be processed respectively.
And preprocessing the acquired video image, including framing, denoising, filtering and the like, and acquiring a continuous multi-frame image corresponding to the video image after preprocessing.
In one embodiment, one or more target objects in the consecutive multi-frame images are detected by a detection algorithm. The target object may include a human face, a pedestrian, a vehicle, and the like; a human face is taken as the example here. A face detection algorithm, including but not limited to YOLO, SSD, or Faster R-CNN, may be used to detect the faces in the consecutive multi-frame images. Specifically, after the face detection algorithm identifies a face, the bounding rectangle of the target face in each frame of image is calculated, giving the position of the face bounding box in each frame.
In one embodiment, a face tracking algorithm tracks and marks the target face in each of the consecutive frames according to the positions of the detected face bounding boxes. Face tracking algorithms include, but are not limited to, SORT, V-IOU, and the like. The tracking algorithm assigns a unique code to each face bounding box; the unique code may be an ID number. By assigning unique ID numbers, the same target face carries the same ID across different frames, and different target faces can be distinguished by their IDs.
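The ID assignment performed by the tracker can be sketched as a greatly simplified IoU matcher. This is a hedged stand-in for a full tracker such as SORT or V-IOU, not the patent's actual implementation; all class and function names here are illustrative:

```python
# Minimal IoU-based ID assignment, a simplified stand-in for SORT / V-IOU.
# Boxes are (x1, y1, x2, y2) tuples in pixel coordinates.

from itertools import count

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

class SimpleTracker:
    def __init__(self, iou_threshold=0.3):
        self._ids = count(1)   # generator of fresh unique ID numbers
        self.tracks = {}       # id -> box from the previous frame
        self.iou_threshold = iou_threshold

    def update(self, boxes):
        """Give each detected box the ID of the best-overlapping previous
        track, or a fresh ID if nothing overlaps enough (a new face)."""
        assigned = {}
        free = dict(self.tracks)
        for box in boxes:
            best_id, best_iou = None, self.iou_threshold
            for tid, prev in free.items():
                o = iou(box, prev)
                if o > best_iou:
                    best_id, best_iou = tid, o
            if best_id is None:
                best_id = next(self._ids)
            else:
                free.pop(best_id)
            assigned[best_id] = box
        self.tracks = assigned
        return assigned
```

A face that drifts slightly between frames keeps its ID, while a face appearing in a new region of the image receives a fresh one, matching the "same target face has the same ID" behavior described above.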
In an embodiment, when the target face is marked, a panorama corresponding to each target face in each frame of image can be obtained.
Quality evaluation is then performed on the target face in the consecutive multi-frame images. A face quality evaluation model may be trained with a deep neural network and used to score the quality of the target face in each frame.
Referring to fig. 5, in an embodiment, the target faces in the consecutive frames are distinguished by their assigned IDs. As shown in fig. 5, there are 4 frames of images, each containing four target faces with IDs A, B, C and D. Quality evaluation is performed on target face A in each frame. Assume the quality score of A is 70 points in the first frame, 75 points in the second frame, 60 points in the third frame, and 55 points in the fourth frame. Comparing the quality scores of the same target face, the second-frame instance of A scores highest, so A of the second frame is taken as the output target, and the second frame image is the panorama corresponding to A. In the same way, the output targets and corresponding panoramas for B, C and D can be obtained from their quality scores.
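The per-ID best-frame selection in this example can be sketched as follows. The scores mirror the fig. 5 values for face A; the function name and data layout are illustrative assumptions:

```python
# For each tracked ID, keep the index of the frame with the highest
# quality score; that frame becomes the panorama for that output target.

def select_output_targets(scores_by_id):
    """scores_by_id: {face_id: [(frame_idx, quality_score), ...]}
    Returns {face_id: frame_idx of the best-scoring detection}."""
    return {
        face_id: max(observations, key=lambda obs: obs[1])[0]
        for face_id, observations in scores_by_id.items()
    }

# Face A from fig. 5: 70, 75, 60, 55 points across frames 1..4.
scores = {"A": [(1, 70), (2, 75), (3, 60), (4, 55)]}
# select_output_targets(scores) -> {"A": 2}: the second frame is A's panorama.
```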
In step S02, an output target and a panorama corresponding to the output target are determined from the multi-frame images including the target object based on the quality evaluation result, and sharing processing is performed to obtain a currently shared panorama.
Further, based on the quality evaluation result, the process of determining an output target and a panorama corresponding to the output target from the multi-frame image including the target object is as follows:
and selecting a target object with the highest quality score from the multi-frame images containing the target objects as an output target based on the quality evaluation result, and further determining a panoramic image corresponding to the output target.
As shown in fig. 5: assume that the quality score of a in the first frame image is 70 points; in the second frame image, the quality score of A is 75; and in the third frame of image, the quality score of A is 60 scores, in the fourth frame of image, the quality score of A is 55 scores, through comparison of the quality scores of the same target face, the quality score of the target face A of the second frame is the highest, the A of the second frame is taken as an output target, and the second frame of image is the panorama corresponding to the A. In this way, the output target corresponding to B, C, D and the corresponding panorama can be obtained according to the quality scores of the target faces.
Further, after determining an output target and a panorama corresponding to the output target from a multi-frame image including a target object based on the quality evaluation result, the method further includes:
when two or more output targets are positioned in the same panorama, sharing the output targets, and determining that the panorama is the currently shared panorama of the output targets:
for example: as shown in fig. 5: the first frame, the second frame, the third frame and the fourth frame are continuous video frames, and the face A, B, C, D is a current better face (output target) needing to be cached. The faces C, D are in the same panorama (third frame) so they share a panorama, and similarly faces a and B may share a panorama (second frame).
Further, when two or more output targets are located in the same panorama, sharing the output targets, and determining that the panorama is the currently shared panorama of the output targets, the method further includes:
and if one or more shared panoramas are stored in the N frames before the current shared panoramas, sharing the stored one or more shared panoramas, so that the current shared panoramas comprise output targets in the stored one or more shared panoramas.
As shown in fig. 5:
if a shared panorama (assumed to be the second frame) has been saved for N frames (where N is no greater than 2) prior to the third frame (the current shared panorama), the target object (A, B) in the second frame panorama and the target object (C, D) in the third frame may share the current panorama (the third frame).
In one embodiment, the output target and the corresponding panorama are cached in a memory. Before caching, the panorama may be compressed in advance to reduce the memory it occupies; specifically, the panorama may be compressed by JPEG encoding.
In one embodiment, an image storage time limit may be set; when the caching time of a panorama exceeds the set limit, the panorama and its output target are evicted. Specifically, output targets may be evicted in the order in which they were stored in the cache queue. By setting an image storage time limit, when processing snapshot data under heavy pedestrian traffic, data in the memory can be evicted in real time, reducing memory occupancy and memory pressure.
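The compressed cache with a storage time limit can be sketched as below. zlib stands in for the JPEG encoding named in the text, eviction follows the FIFO order described, and all names are illustrative assumptions rather than the patent's implementation:

```python
# A compressed, time-limited panorama cache: entries are compressed on
# insertion and evicted oldest-first once they exceed the storage limit.
# zlib is a stand-in for the JPEG encoding the text describes.

import time
import zlib
from collections import OrderedDict

class PanoramaCache:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = OrderedDict()  # face_id -> (insert_time, compressed bytes)

    def put(self, face_id, panorama_bytes: bytes):
        """Compress and cache the panorama for one output target."""
        self._store[face_id] = (time.monotonic(), zlib.compress(panorama_bytes))

    def evict_expired(self):
        """Drop entries older than the storage time limit, oldest first,
        mirroring the FIFO queue order described in the text."""
        now = time.monotonic()
        while self._store:
            _, (t, _) = next(iter(self._store.items()))
            if now - t <= self.ttl:
                break
            self._store.popitem(last=False)

    def get(self, face_id):
        entry = self._store.get(face_id)
        return zlib.decompress(entry[1]) if entry else None
```

Calling `evict_expired()` periodically (or on each insertion) gives the real-time eviction under heavy traffic that the paragraph above describes.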
Referring to fig. 2, the present embodiment provides a panorama sharing system for implementing the panorama sharing method described in the foregoing method embodiments. Since the technical principle of the system embodiment is similar to that of the method embodiment, repeated description of the same technical details is omitted.
In an embodiment, the panorama sharing system includes a quality evaluation module 10 and a sharing response module 11, where the quality evaluation module 10 is configured to assist in performing step S01 described in the foregoing method embodiment, and the sharing response module 11 is configured to assist in performing step S02 described in the foregoing method embodiment.
In an embodiment, the quality evaluation module includes a quality screening unit, configured to obtain multiple quality scores of the same target object in consecutive multiple frames of images, and screen out a target object with a highest quality score as an output target.
In an embodiment, the system includes a shared panorama obtaining module, configured to, after determining, based on the quality evaluation result, an output target and a panorama corresponding to the output target from a plurality of frames of images including a target object:
and when two or more output targets are positioned in the same panorama, sharing the output targets, and determining that the panorama is the currently shared panorama of the output targets.
In an embodiment, the system includes a history panorama sharing module, configured to, when two or more output targets are located in the same panorama, share the output targets, and after determining that the panorama is the currently shared panorama of the output targets:
and if one or more shared panoramas are stored in the N frames before the current shared panoramas, sharing the stored one or more shared panoramas, so that the current shared panoramas comprise output targets in the stored one or more shared panoramas.
In one embodiment, the system includes a shooting interval setting module configured to set a shooting interval, acquire a video image within the shooting interval, and perform framing processing on the video image to acquire a plurality of consecutive frames of images.
In one embodiment, the system includes a target detection module configured to detect a target object in a plurality of consecutive frame images, and acquire a plurality of target objects.
In one embodiment, the system includes a target tracking module to track a plurality of target objects detected in the video image and to differentially mark locations of the plurality of target objects in the video image.
In one embodiment, the system includes a tagging module for assigning a unique code to each distinct target object for distinguishing tagging.
In one embodiment, the system comprises a mark splitting module used for distributing the same unique code to the same target object in the continuous multi-frame images after framing.
In one embodiment, when quality evaluation is carried out, the same target object in continuous multi-frame images is obtained according to the unique code, and quality grading is carried out on the same target object in each frame of image;
and comparing the quality scoring results of the same target object to obtain the target object with the highest quality score.
In an embodiment, the system includes a cache module, configured to store the shared panorama after image compression in a memory.
In one embodiment, the system includes a storage time limit setting module, configured to set an image storage time limit and, when the image storage time limit is exceeded, evict the output target and the corresponding panorama from the memory.
In one embodiment, the algorithm for detecting the target object includes the YOLO algorithm, the SSD algorithm, or the Faster R-CNN algorithm.
In one embodiment, the algorithm for tracking the target object includes the SORT algorithm or the V-IOU algorithm.
In one embodiment, the target object includes a human face, a pedestrian, or a vehicle.
In practical applications, the device may be a terminal device or a server. Examples of the terminal device may include a smart phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop, a vehicle-mounted computer, a desktop computer, a set-top box, a smart television, a wearable device, and the like. The embodiments of the present application do not limit the specific device.
The present embodiment also provides a non-volatile readable storage medium, where one or more modules (programs) are stored in the storage medium, and when the one or more modules are applied to a device, the device may execute instructions (instructions) of steps included in the panorama sharing method in fig. 1 according to the present embodiment.
Fig. 3 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present application. As shown, the terminal device may include: an input device 1100, a first processor 1101, an output device 1102, a first memory 1103, and at least one communication bus 1104. The communication bus 1104 is used to implement communication connections between the elements. The first memory 1103 may include a high-speed RAM memory, and may also include a non-volatile storage NVM, such as at least one disk memory, and the first memory 1103 may store various programs for performing various processing functions and implementing the method steps of the present embodiment.
Optionally, the first processor 1101 may be, for example, a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a controller, a microcontroller, a microprocessor, or another electronic element. The processor 1101 is coupled to the input device 1100 and the output device 1102 through a wired or wireless connection.
Optionally, the input device 1100 may include a variety of input devices, such as at least one of a user-oriented user interface, a device-oriented device interface, a software programmable interface, a camera, and a sensor. Optionally, the device interface facing the device may be a wired interface for data transmission between devices, or may be a hardware plug-in interface (e.g., a USB interface, a serial port, etc.) for data transmission between devices; optionally, the user-facing user interface may be, for example, a user-facing control key, a voice input device for receiving voice input, and a touch sensing device (e.g., a touch screen with a touch sensing function, a touch pad, etc.) for receiving user touch input; optionally, the programmable interface of the software may be, for example, an entry for a user to edit or modify a program, such as an input pin interface or an input interface of a chip; the output devices 1102 may include output devices such as a display, audio, and the like.
In this embodiment, the processor of the terminal device includes functions for executing each module of the apparatus described in the above device embodiments; for specific functions and technical effects, reference may be made to the above embodiments, which are not repeated here.
Fig. 4 is a schematic hardware structure diagram of a terminal device according to another embodiment of the present application. Fig. 4 is a specific embodiment of fig. 3 in an implementation process. As shown, the terminal device of the present embodiment may include a second processor 1201 and a second memory 1202.
The second processor 1201 executes the computer program code stored in the second memory 1202 to implement the method described in fig. 1 in the above embodiment.
The second memory 1202 is configured to store various types of data to support operations at the terminal device. Examples of such data include instructions for any application or method operating on the terminal device, such as messages, pictures, videos, and so forth. The second memory 1202 may include a Random Access Memory (RAM) and may also include a non-volatile memory (non-volatile memory), such as at least one disk memory.
Optionally, the second processor 1201 is provided in the processing assembly 1200. The terminal device may further include: a communication component 1203, a power component 1204, a multimedia component 1205, a voice component 1206, an input/output interface 1207, and/or a sensor component 1208. The specific components included in the terminal device are set according to actual requirements, which is not limited in this embodiment.
The processing component 1200 generally controls the overall operation of the terminal device. The processing assembly 1200 may include one or more second processors 1201 to execute instructions to perform all or part of the steps of the method illustrated in fig. 1 described above. Further, the processing component 1200 can include one or more modules that facilitate interaction between the processing component 1200 and other components. For example, the processing component 1200 can include a multimedia module to facilitate interaction between the multimedia component 1205 and the processing component 1200.
The power supply component 1204 provides power to the various components of the terminal device. The power components 1204 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the terminal device.
The multimedia component 1205 includes a display screen that provides an output interface between the terminal device and the user. In some embodiments, the display screen may include a liquid crystal display (LCD) and a touch panel (TP). If the display screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user.
The voice component 1206 is configured to output and/or input voice signals. For example, the voice component 1206 includes a microphone (MIC) configured to receive external voice signals when the terminal device is in an operational mode, such as a voice recognition mode. The received voice signal may further be stored in the second memory 1202 or transmitted via the communication component 1203. In some embodiments, the voice component 1206 further comprises a speaker for outputting voice signals.
The input/output interface 1207 provides an interface between the processing component 1200 and peripheral interface modules, which may be click wheels, buttons, etc. These buttons may include, but are not limited to: a volume button, a start button, and a lock button.
The sensor component 1208 includes one or more sensors for providing various aspects of status assessment for the terminal device. For example, the sensor component 1208 may detect the open/closed state of the terminal device, the relative positioning of components, and the presence or absence of user contact with the terminal device. The sensor component 1208 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, including detecting the distance between the user and the terminal device. In some embodiments, the sensor component 1208 may also include a camera or the like.
The communication component 1203 is configured to facilitate communications between the terminal device and other devices in a wired or wireless manner. The terminal device may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In one embodiment, the terminal device may include a SIM card slot therein for inserting a SIM card therein, so that the terminal device may log onto a GPRS network to establish communication with the server via the internet.
As can be seen from the above, the communication component 1203, the voice component 1206, the input/output interface 1207 and the sensor component 1208 referred to in the embodiment of fig. 4 can be implemented as the input device in the embodiment of fig. 3.
In summary, the panorama sharing method, system, device and medium of the present invention capture faces in monitoring, access-control and similar scenarios, select the best face and filter out poor-quality faces, greatly reducing the number of subsequent face-recognition comparisons and thereby the consumption of computing and storage resources. Through the panorama-sharing mechanism, the number of face panoramas cached during face capture is also greatly reduced, further lowering memory consumption and equipment cost; the effect is especially pronounced when pedestrian flow is heavy. The invention therefore effectively overcomes various defects in the prior art and has high industrial utilization value.
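As a concrete illustration of the best-face selection summarized above, the following Python sketch (all class, method and variable names are hypothetical, not taken from the patent) keeps, for each tracked target, only the highest-quality face seen across consecutive frames and filters out faces below a quality threshold, so that a single image per target reaches the recognition stage:

```python
from dataclasses import dataclass, field

@dataclass
class BestFaceSelector:
    """Keep only the best-quality face per tracked target (illustrative sketch)."""
    min_quality: float = 0.5                  # faces scoring below this are filtered out
    best: dict = field(default_factory=dict)  # track_id -> (score, frame_index)

    def observe(self, track_id: int, score: float, frame_index: int) -> None:
        """Record a detection; retain it only if it beats the current best."""
        if score < self.min_quality:
            return                            # drop poor-quality faces early
        cur = self.best.get(track_id)
        if cur is None or score > cur[0]:
            self.best[track_id] = (score, frame_index)

    def outputs(self) -> dict:
        """track_id -> frame_index of the single best face to send to recognition."""
        return {tid: frame for tid, (_, frame) in self.best.items()}

sel = BestFaceSelector()
# target 7 seen in three frames with varying quality; target 9 is always too blurry
sel.observe(7, 0.62, frame_index=0)
sel.observe(7, 0.91, frame_index=1)
sel.observe(7, 0.74, frame_index=2)
sel.observe(9, 0.30, frame_index=1)
print(sel.outputs())  # {7: 1} — one comparison instead of three; target 9 filtered out
```

Because only one face per target survives, the number of recognition comparisons grows with the number of targets rather than with the number of frames.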
The foregoing embodiments merely illustrate the principles and utilities of the present invention and are not intended to limit it. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes made by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed herein shall be covered by the claims of the present invention.

Claims (32)

1. A panorama sharing method, comprising:
acquiring continuous multi-frame images, and performing quality evaluation on one or more target objects in each frame of image to acquire a quality evaluation result;
and determining an output target and a panoramic image corresponding to the output target from the multi-frame image containing the target object based on the quality evaluation result, and performing sharing processing to obtain the currently shared panoramic image.
2. The panorama sharing method according to claim 1, wherein a plurality of quality scores of the same target object in the consecutive multi-frame images are acquired, and the target object with the highest quality score is selected as an output target.
3. The panorama sharing method according to claim 1, wherein after determining an output target and a panorama corresponding to the output target from a multi-frame image including a target object based on the quality evaluation result, further comprising:
and when two or more than two output targets are positioned in the same panorama, sharing the output targets, and determining that the panorama is the currently shared panorama of the output targets.
4. The panorama sharing method according to claim 3, wherein when two or more output targets are located in the same panorama, sharing the output targets, and after determining that the panorama is a currently shared panorama of the output targets, further comprises:
and if one or more shared panoramas are stored in the N frames before the current shared panoramas, sharing the one or more stored shared panoramas, so that the current shared panoramas comprise output targets in the one or more stored shared panoramas.
5. The panorama sharing method according to claim 1, wherein a shooting interval is set, a video image within the shooting interval is acquired, and the consecutive multi-frame image is acquired by performing framing processing on the video image.
6. The panorama sharing method according to claim 5, wherein the target object in the consecutive multi-frame images is detected, and a plurality of the target objects are acquired.
7. The panorama sharing method of claim 6, wherein a plurality of the target objects detected in the video image are tracked, and positions of the plurality of the target objects in the video image are distinctively marked.
8. The panorama sharing method of claim 7, wherein each different target object is assigned a unique code for the distinguishing mark.
9. The panorama sharing method of claim 8, wherein the same unique code is assigned to the same target object in the consecutive multi-frame images after the framing processing.
10. The panorama sharing method according to claim 9, wherein in quality evaluation, a same target object in the consecutive multi-frame images is obtained according to the unique code, and quality scoring is performed on the same target object in each frame image;
and comparing the quality scoring results of the same target object to obtain the target object with the highest quality score.
11. The panorama sharing method of claim 1, wherein the shared panorama is stored in a memory after image compression.
12. The panorama sharing method of claim 11, wherein an image storage time limit is set, and when the image storage time limit is exceeded, the output target and the corresponding panorama in the memory are discarded.
13. The panorama sharing method of claim 6, wherein the algorithm for detecting the target object comprises: the YOLO algorithm, the SSD algorithm, and the Faster R-CNN algorithm.
14. The panorama sharing method of claim 7, wherein the algorithm for tracking the target object comprises the SORT algorithm and the V-IOU algorithm.
15. The panorama sharing method of claim 1, wherein the target object comprises a human face, a pedestrian, or a vehicle.
16. A panorama sharing system, comprising:
the quality evaluation module is used for acquiring continuous multi-frame images, evaluating the quality of one or more target objects in each frame of image and acquiring a quality evaluation result;
and the shared response module is used for determining an output target and a panoramic image corresponding to the output target from the multi-frame image containing the target object based on the quality evaluation result, and performing shared processing to obtain the currently shared panoramic image.
17. The panorama sharing system of claim 16, wherein the quality evaluation module comprises a quality screening unit configured to obtain a plurality of quality scores of the same target object in the consecutive multi-frame images, and screen out a target object with a highest quality score as an output target.
18. The panorama sharing system according to claim 16, comprising a shared map acquisition module configured to determine, based on the quality evaluation result, an output target and a panorama corresponding to the output target from a plurality of frames of images including a target object, and further comprising:
and when two or more than two output targets are positioned in the same panorama, sharing the output targets, and determining that the panorama is the currently shared panorama of the output targets.
19. The panorama sharing system according to claim 18, comprising a history panorama sharing module configured to perform sharing processing on two or more output targets when the output targets are located in the same panorama, and further comprising, after determining that the panorama is a currently shared panorama of the output targets:
and if one or more shared panoramas are stored in the N frames before the current shared panoramas, sharing the one or more stored shared panoramas, so that the current shared panoramas comprise output targets in the one or more stored shared panoramas.
20. The panorama sharing system of claim 16, comprising a shooting interval setting module configured to set a shooting interval, obtain a video image within the shooting interval, and perform framing processing on the video image to obtain the consecutive multiple frames of images.
21. The panorama sharing system of claim 20, comprising a target detection module configured to detect the target object in the consecutive multi-frame images, and obtain a plurality of the target objects.
22. The panorama sharing system of claim 21, comprising a target tracking module for tracking a plurality of the target objects detected in the video image and differentially marking locations of the plurality of the target objects in the video image.
23. The panorama sharing system of claim 22, comprising a tagging module configured to assign a unique code to each different target object for the distinctive tagging.
24. The panorama sharing system of claim 23, comprising a tag splitting module configured to assign the same unique code to the same target object in the consecutive multi-frame images after framing.
25. The panorama sharing system of claim 24, wherein in performing quality evaluation, a same target object in the consecutive frames of images is obtained according to the unique code, and quality scoring is performed on the same target object in each frame of image;
and comparing the quality scoring results of the same target object to obtain the target object with the highest quality score.
26. The panorama sharing system of claim 16, comprising a cache module configured to store the shared panorama in a memory after image compression.
27. The panorama sharing system of claim 26, comprising a storage time limit setting module for setting an image storage time limit and, when the image storage time limit is exceeded, discarding the output target and the corresponding panorama in the memory.
28. The panorama sharing system of claim 21, wherein the algorithm for detecting the target object comprises: the YOLO algorithm, the SSD algorithm, and the Faster R-CNN algorithm.
29. The panorama sharing system of claim 22, wherein the algorithm for tracking the target object comprises the SORT algorithm and the V-IOU algorithm.
30. The panorama sharing system of claim 16, wherein the target object comprises a human face, a pedestrian, or a vehicle.
31. An apparatus, comprising:
one or more processors; and
one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause the apparatus to perform the method recited by one or more of claims 1-15.
32. One or more machine-readable media having instructions stored thereon, which when executed by one or more processors, cause an apparatus to perform the method recited by one or more of claims 1-15.
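The sharing scheme of claims 3–4 and the storage time limit of claims 11–12 can be sketched as follows in Python (a minimal illustration under assumed data structures; all identifiers are hypothetical): output targets located in the same panorama reference one stored copy, and cached entries older than the time limit are discarded:

```python
class PanoramaCache:
    """Illustrative sketch: output targets in the same panorama share one stored
    copy, and entries past the storage time limit are evicted."""

    def __init__(self, ttl_seconds: float = 60.0):
        self.ttl = ttl_seconds
        self.panoramas = {}        # panorama_id -> (stored_at, set of target_ids)

    def share(self, panorama_id: str, target_ids: list, now: float) -> None:
        """Attach all output targets found in one frame to a single panorama."""
        stored_at, targets = self.panoramas.get(panorama_id, (now, set()))
        targets |= set(target_ids)              # targets share the same cached image
        self.panoramas[panorama_id] = (stored_at, targets)

    def evict_expired(self, now: float) -> None:
        """Discard panoramas (and their attached targets) past the time limit."""
        self.panoramas = {
            pid: entry for pid, entry in self.panoramas.items()
            if now - entry[0] <= self.ttl
        }

cache = PanoramaCache(ttl_seconds=60.0)
cache.share("pano_0001", [7, 12], now=0.0)   # two best faces in one frame: one copy
cache.share("pano_0001", [15], now=5.0)      # a later output target reuses that copy
print(len(cache.panoramas))                  # 1 — three targets, one cached panorama
cache.evict_expired(now=120.0)
print(len(cache.panoramas))                  # 0 — expired entry discarded
```

Storing one compressed panorama per group of co-located targets, rather than one per target, is what keeps the cache small under heavy pedestrian flow.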
CN202010166253.1A 2020-03-11 2020-03-11 Panorama sharing method, system, device and medium Pending CN111401206A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010166253.1A CN111401206A (en) 2020-03-11 2020-03-11 Panorama sharing method, system, device and medium


Publications (1)

Publication Number Publication Date
CN111401206A 2020-07-10

Family

ID=71430676

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010166253.1A Pending CN111401206A (en) 2020-03-11 2020-03-11 Panorama sharing method, system, device and medium

Country Status (1)

Country Link
CN (1) CN111401206A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110013038A1 (en) * 2009-07-15 2011-01-20 Samsung Electronics Co., Ltd. Apparatus and method for generating image including multiple people
CN106355549A (en) * 2016-09-30 2017-01-25 北京小米移动软件有限公司 Photographing method and equipment
CN106599837A (en) * 2016-12-13 2017-04-26 北京智慧眼科技股份有限公司 Face identification method and device based on multi-image input
CN108491822A (en) * 2018-04-02 2018-09-04 杭州高创电子科技有限公司 A kind of Face datection De-weight method based on the limited caching of embedded device
CN109376645A (en) * 2018-10-18 2019-02-22 深圳英飞拓科技股份有限公司 A kind of face image data preferred method, device and terminal device
CN110097001A (en) * 2019-04-30 2019-08-06 恒睿(重庆)人工智能技术研究院有限公司 Generate method, system, equipment and the storage medium of best plurality of human faces image
CN110363799A (en) * 2019-05-27 2019-10-22 浙江工业大学 The human body target tracking method of doing more physical exercises of view-based access control model under man-machine symbiosis environment


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112559887A (en) * 2020-12-25 2021-03-26 北京百度网讯科技有限公司 Method for hooking panoramic image and interest point and method for constructing panoramic image recommendation model
CN112559887B (en) * 2020-12-25 2023-09-05 北京百度网讯科技有限公司 Panorama and interest point hooking method and panorama recommendation model construction method
CN113438417A (en) * 2021-06-22 2021-09-24 上海云从汇临人工智能科技有限公司 Method, system, medium and device for capturing object to be identified by video
CN113715826A (en) * 2021-08-31 2021-11-30 深圳市同进视讯技术有限公司 Vehicle driving assisting method and device based on vehicle-mounted real-time monitoring

Similar Documents

Publication Publication Date Title
CN110610510B (en) Target tracking method and device, electronic equipment and storage medium
CN111401206A (en) Panorama sharing method, system, device and medium
CN108737739B (en) Preview picture acquisition method, preview picture acquisition device and electronic equipment
CN109215037B (en) Target image segmentation method and device and terminal equipment
CN110392232B (en) Real-time data monitoring platform
CN111340848A (en) Object tracking method, system, device and medium for target area
CN114581998A (en) Deployment and control method, system, equipment and medium based on target object association feature fusion
CN112529939A (en) Target track matching method and device, machine readable medium and equipment
CN112149570A (en) Multi-person living body detection method and device, electronic equipment and storage medium
CN110166696B (en) Photographing method, photographing device, terminal equipment and computer-readable storage medium
CN104281258B (en) Transparent display is adjusted using image capture device
CN111291638A (en) Object comparison method, system, equipment and medium
US20190356854A1 (en) Portable electronic device and image capturing method thereof
CN111339943A (en) Object management method, system, platform, equipment and medium
CN111191556A (en) Face recognition method and device and electronic equipment
CN107770487B (en) Feature extraction and optimization method, system and terminal equipment
CN111260697A (en) Target object identification method, system, device and medium
CN112417209A (en) Real-time video annotation method, system, terminal and medium based on browser
CN110222576B (en) Boxing action recognition method and device and electronic equipment
CN108062405B (en) Picture classification method and device, storage medium and electronic equipment
CN110751120A (en) Detection method and device and electronic equipment
CN117132515A (en) Image processing method and electronic equipment
CN111818364B (en) Video fusion method, system, device and medium
CN114943872A (en) Training method and device of target detection model, target detection method and device, medium and equipment
CN115734085A (en) Video processing method, terminal and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200710)