CN112749290A - Photo display processing method and device and video display processing method and device - Google Patents

Photo display processing method and device and video display processing method and device Download PDF

Info

Publication number
CN112749290A
Authority
CN
China
Prior art keywords
user
video
identification information
photo
person
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911045830.5A
Other languages
Chinese (zh)
Inventor
聂兰龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Qianyan Feifeng Information Technology Co ltd
Original Assignee
Qingdao Qianyan Feifeng Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Qianyan Feifeng Information Technology Co ltd filed Critical Qingdao Qianyan Feifeng Information Technology Co ltd
Priority to CN201911045830.5A priority Critical patent/CN112749290A/en
Priority to PCT/CN2020/122485 priority patent/WO2021083004A1/en
Publication of CN112749290A publication Critical patent/CN112749290A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43Querying
    • G06F16/432Query formulation
    • G06F16/434Query formulation using image data, e.g. images, photos, pictures taken by a user
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43Querying
    • G06F16/432Query formulation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43Querying
    • G06F16/435Filtering based on additional data, e.g. user or group profiles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43Querying
    • G06F16/438Presentation of query results

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a photo display processing method and device and a video display processing method and device. The method comprises the following steps: acquiring a photo, wherein the photo is acquired from at least one acquisition device distributed in a predetermined area, and the at least one acquisition device is triggered to shoot by a predetermined condition; identifying a person from the photo, and identifying, from the person, identification information for identifying the person, wherein the identification information is unique within the predetermined area; correspondingly storing the photo and the identification information recognized from the photo; acquiring identification information of a user; searching for a corresponding photo according to the identification information of the user; and displaying the found photo to the user. The invention solves the technical problem in the prior art that photographing a user in a predetermined area and displaying the obtained photos to the user is inefficient.

Description

Photo display processing method and device and video display processing method and device
Technical Field
The invention relates to the technical field of image processing, in particular to a photo display processing method and device and a video display processing method and device.
Background
At present, the shooting services offered at scenic spots are carried out in a semi-manual manner: the generated media information is compared and sorted manually and then delivered to the tourists. Picking out media information by manual comparison is inefficient, and if the amount of generated media information exceeds what can be picked manually, the quality of service to the tourists suffers.
In view of the above-mentioned problem in the related art that the method for photographing a user in a predetermined area and displaying the obtained media information (i.e., photos or videos) to the user is inefficient, no effective solution has been proposed at present.
Disclosure of Invention
The embodiments of the invention provide a photo display processing method and device and a video display processing method and device, which at least solve the technical problem in the prior art that photographing a user in a predetermined area and displaying the obtained photos to the user is inefficient.
According to an aspect of the embodiments of the present invention, there is provided a photograph presentation processing method, including: acquiring a photo, wherein the photo is acquired from at least one acquisition device distributed in a predetermined area, and the at least one acquisition device is triggered to shoot by a predetermined condition; identifying a person from the photograph and identifying identification information for identifying the person from the person, wherein the identification information is unique within the predetermined area; correspondingly storing the photo and the identification information recognized from the photo; acquiring identification information of a user; searching a corresponding photo according to the identification information of the user; and displaying the searched photo to the user.
Optionally, the identifying information for identifying the person from the persons comprises: identifying an attachment on the person and/or a biometric characteristic of the person from the person; using the characteristic information of the attached matter and/or the characteristic information of the biological characteristic as identification information for identifying the person; acquiring the identification information of the user comprises: acquiring attachments of the user and/or biological characteristics of the user, and taking characteristic information corresponding to the biological characteristics as identification information of the user; wherein the attachment comprises at least one of: apparel, accessories, hand held items; the attachment is used for uniquely identifying the person in the predetermined area; the biometric characteristic of the person comprises one of: facial features, body posture features.
Optionally, after obtaining the identification information of the user, finding the corresponding photo according to the identification information of the user includes: searching feature information of one or more persons corresponding to the identification information according to the identification information of the user; and searching the photos of the one or more people according to the characteristic information of the one or more people to be used as the photos corresponding to the identification information of the user.
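The two-step lookup described above can be sketched as a pair of mappings: the user's identification information maps to the feature information of one or more persons (for example, a team), and each person's features map to photos. This is a minimal illustrative sketch; all names and data structures here are assumptions, not the patent's actual implementation.

```python
def find_photos(user_id, id_to_features, features_to_photos):
    """Return all photos linked to the persons registered under user_id.

    id_to_features: identification info -> feature info of one or more persons
    features_to_photos: feature info -> photos in which that person appears
    """
    photos = []
    for feature in id_to_features.get(user_id, []):
        photos.extend(features_to_photos.get(feature, []))
    return photos

# Hypothetical data: one user registered with two persons' features.
id_to_features = {"user-001": ["face-A", "hat-red"]}
features_to_photos = {
    "face-A": ["IMG_0001.jpg", "IMG_0007.jpg"],
    "hat-red": ["IMG_0003.jpg"],
}
print(find_photos("user-001", id_to_features, features_to_photos))
# → ['IMG_0001.jpg', 'IMG_0007.jpg', 'IMG_0003.jpg']
```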
Optionally, in a case where the attached object on the person includes a white area, correspondingly saving the photograph and the identification information recognized from the photograph includes: adjusting the white balance of the photo according to the white area; and correspondingly storing the adjusted photo and the identification information recognized from the photo.
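The white-balance adjustment above can be illustrated with a standard white-patch correction: sample the average RGB of the region known to be white (the white area on the person's attachment) and scale each channel so that region becomes neutral. The patent does not specify the algorithm, so this is only one plausible sketch.

```python
def white_balance_gains(white_rgb):
    """Per-channel gains that map the sampled white area to neutral gray."""
    target = sum(white_rgb) / 3.0
    return tuple(target / c for c in white_rgb)

def apply_gains(pixel, gains):
    """Apply the gains to one RGB pixel, clipping to the 8-bit range."""
    return tuple(min(255, round(p * g)) for p, g in zip(pixel, gains))

# A warm color cast on the white patch: the gains neutralize it.
gains = white_balance_gains((200, 180, 160))
print(apply_gains((200, 180, 160), gains))  # → (180, 180, 180)
```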
Optionally, in a case where the at least one capturing device captures a video triggered by a trigger condition, acquiring the photo includes: extracting a predetermined frame from the video as the photo; and/or, when displaying the found photo to the user, displaying partial content or all content of the video to the user.
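Extracting a predetermined frame from a captured video can be sketched as selecting one frame by index from a decoded sequence. In practice the video would be decoded with a library such as OpenCV; the choice of index (for instance, the middle frame) is an assumption for illustration.

```python
def extract_frame(frames, index=0):
    """Pick a predetermined frame from a decoded video (a sequence of frames).

    The index is clamped to the last frame so a short clip still yields a photo.
    """
    if not frames:
        raise ValueError("empty video")
    return frames[min(index, len(frames) - 1)]

frames = ["frame0", "frame1", "frame2"]
print(extract_frame(frames, index=len(frames) // 2))  # → frame1
```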
Optionally, the predetermined condition is that at least one of the following information is detected to exist in the person in the predetermined area: gesture information, mouth shape information, body shape information.
Optionally, displaying the searched photo to the user includes: sorting part or all of the photos if the number of the found photos exceeds a predetermined number; and displaying part or all of the sequenced photos to the user.
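The sorting step above can be sketched as follows: if more than a predetermined number of photos is found, rank them and show only the top ones. The ranking criterion is not specified in the text, so the quality-score key used here is a hypothetical assumption.

```python
def select_photos(photos, limit, key):
    """Return the photos to display: all of them if within the limit,
    otherwise the top `limit` photos after sorting by `key` (descending)."""
    if len(photos) <= limit:
        return photos
    return sorted(photos, key=key, reverse=True)[:limit]

photos = [{"name": "a.jpg", "score": 0.4},
          {"name": "b.jpg", "score": 0.9},
          {"name": "c.jpg", "score": 0.7}]
top = select_photos(photos, limit=2, key=lambda p: p["score"])
print([p["name"] for p in top])  # → ['b.jpg', 'c.jpg']
```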
According to another aspect of the embodiments of the present invention, there is provided a video presentation processing method, including: acquiring a video, wherein the video is acquired from at least one acquisition device distributed in a predetermined area, and the at least one acquisition device is triggered by a predetermined condition to shoot; identifying a person from the video and identifying identification information for identifying the person from the person, wherein the identification information is unique within the predetermined area; correspondingly storing the video and the identification information identified from the video; acquiring identification information of a user; searching a corresponding video according to the identification information of the user; and displaying the searched video to the user.
According to another aspect of the embodiments of the present invention, there is also provided a photograph display processing apparatus including: the device comprises a first acquisition unit, a second acquisition unit and a third acquisition unit, wherein the first acquisition unit is used for acquiring photos, the photos are acquired from at least one acquisition device distributed in a preset area, and the at least one acquisition device is triggered to shoot by preset conditions; a first recognition unit configured to recognize a person from the photograph and to recognize identification information for identifying the person from the person, wherein the identification information is unique within the predetermined area; the first storage unit is used for correspondingly storing the photo and the identification information recognized from the photo; a second obtaining unit, configured to obtain identification information of a user; the first searching unit is used for searching the corresponding photo according to the identification information of the user; and the first display unit is used for displaying the searched photos to the user.
Optionally, the first identification unit includes: the first identification module is used for identifying attachments on the person and/or biological characteristics of the person from the person; a first determination module for using the characteristic information of the attached object and/or the characteristic information of the biological feature as identification information for identifying the person; the second acquisition unit includes: the second acquisition module is used for acquiring attachments of the user and/or biological characteristics of the user and taking characteristic information corresponding to the biological characteristics as identification information of the user; wherein the attachment comprises at least one of: apparel, accessories, hand held articles; the attachment is used for uniquely identifying the person in the predetermined area; the biometric characteristic of the person comprises one of: facial features, body posture features.
Optionally, the first lookup unit includes: the searching module is used for searching the characteristic information of one or more persons corresponding to the identification information according to the identification information of the user after the identification information of the user is obtained; and the second determining module is used for searching the photos of the one or more persons according to the characteristic information of the one or more persons as the photos corresponding to the identification information of the user.
Optionally, the first holding unit comprises: an adjusting module, configured to adjust a white balance of the photo according to a white area when the attachment on the person includes the white area; and the storage module is used for correspondingly storing the adjusted photo and the identification information recognized from the photo.
Optionally, the first obtaining unit includes: the third determination module is used for extracting a preset frame from the video to be used as the photo under the condition that the at least one acquisition device shoots the video under the triggering of a triggering condition; and/or the first display module is used for displaying the searched photos to the user and displaying partial content or all content of the video to the user.
Optionally, the predetermined condition is that at least one of the following information is detected to exist in the person in the predetermined area: gesture information, mouth shape information, body shape information.
Optionally, the first display unit comprises: the sorting module is used for sorting part or all of the photos under the condition that the number of the searched photos exceeds the preset number; and the second display module is used for displaying part or all of the sequenced photos to the user.
According to another aspect of the embodiments of the present invention, there is also provided a video presentation processing apparatus, including: the third acquisition unit is used for acquiring a video, wherein the video is acquired from at least one acquisition device distributed in a predetermined area, and the at least one acquisition device is triggered by a predetermined condition to shoot; a second identification unit configured to identify a person from the video and identify identification information for identifying the person from the person, wherein the identification information is unique within the predetermined area; the second storage unit is used for correspondingly storing the video and the identification information identified from the video; a fourth obtaining unit, configured to obtain identification information of a user; the second searching unit is used for searching the corresponding video according to the identification information of the user; and the second display unit is used for displaying the searched video to the user.
According to another aspect of the embodiment of the present invention, there is also provided a storage medium including a stored program, wherein the program executes the photograph presentation processing method and the video presentation processing method described in any one of the above.
According to another aspect of the embodiment of the present invention, there is provided a processor, where the processor is configured to run a program, where the program executes the photo presentation processing method and the video presentation processing method described in any one of the above when running.
According to another aspect of the embodiments of the present invention, there is also provided an electronic apparatus, including: a processor; a memory coupled to the processor for providing instructions to the processor for the following processing steps: acquiring a photo, wherein the photo is acquired from at least one acquisition device distributed in a predetermined area, and the at least one acquisition device is triggered to shoot by a predetermined condition; identifying a person from the photograph and identifying identification information for identifying the person from the person, wherein the identification information is unique within the predetermined area; correspondingly storing the photo and the identification information recognized from the photo; acquiring identification information of a user; searching a corresponding photo according to the identification information of the user; displaying the searched photo to the user; and/or the memory is connected with the processor and used for providing the processor with instructions of the following processing steps: acquiring a video, wherein the video is acquired from at least one acquisition device distributed in a predetermined area, and the at least one acquisition device is triggered to shoot by a predetermined condition; identifying a person from the video and identifying identification information for identifying the person from the person, wherein the identification information is unique within the predetermined area; correspondingly storing the video and the identification information identified from the video; acquiring identification information of a user; searching a corresponding video according to the identification information of the user; and displaying the searched video to the user.
In the embodiment of the invention, a photo is acquired, wherein the photo is acquired from at least one acquisition device distributed in a predetermined area, and the at least one acquisition device is triggered to shoot by a predetermined condition; a person is identified from the photo, and identification information for identifying the person is identified from the person, wherein the identification information is unique within the predetermined area; the photo and the identification information recognized from the photo are stored correspondingly; identification information of a user is acquired; and the corresponding photo is found according to the identification information of the user. The photo display processing method provided by the embodiment of the invention thus achieves the purpose of automatically displaying to the user the photos taken of the user in the predetermined area, attains the technical effect of improving the efficiency of displaying such photos to the user, and thereby solves the technical problem in the related art that photographing a user in a predetermined area and displaying the obtained photos to the user is inefficient.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow diagram of a photograph presentation processing method according to an embodiment of the present invention;
FIG. 2(a) is a schematic view of an accessory according to an embodiment of the present invention;
FIG. 2(b) is a schematic view of an alternative accessory according to an embodiment of the present invention;
FIG. 2(c) is a schematic view of an alternative accessory according to an embodiment of the present invention;
FIG. 3 is a schematic view of a hand held article according to an embodiment of the invention;
FIG. 4(a) is a first schematic diagram of a registration interface according to an embodiment of the invention;
FIG. 4(b) is a second schematic diagram of a registration interface according to an embodiment of the invention;
FIG. 4(c) is a schematic illustration three of a registration interface according to an embodiment of the invention;
FIG. 5(a) is a first schematic diagram of a garment registry according to an embodiment of the present invention;
FIG. 5(b) is a second schematic diagram of garment registration according to an embodiment of the present invention;
FIG. 6(a) is a schematic diagram of a user login interface according to an embodiment of the present invention;
FIG. 6(b) is a schematic diagram of a user registration interface according to an embodiment of the invention;
FIG. 6(c) is a schematic diagram of a user login interface, according to an embodiment of the present invention;
FIG. 7(a) is an interface diagram presented by an individual user according to an embodiment of the invention;
FIG. 7(b) is an interface diagram presented by team users according to an embodiment of the invention;
FIG. 7(c) is an interface diagram for adding user presentations according to an embodiment of the invention;
FIG. 7(d) is an interface diagram selected by a user operation according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of white balancing according to an embodiment of the present invention;
fig. 9(a) shows a schematic diagram of a media service system in an embodiment of the invention;
FIG. 9(b) shows a schematic diagram of an alternative media service system according to an embodiment of the invention;
fig. 10 is a flowchart of a video presentation processing method according to an embodiment of the present invention;
FIG. 11 is a schematic view of a photograph presentation processing apparatus according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of a video presentation processing device according to an embodiment of the present invention;
fig. 13 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In accordance with an embodiment of the present invention, there is provided a method embodiment of a photograph presentation processing method, it should be noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system such as a set of computer executable instructions, and that while a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than that presented herein.
Fig. 1 is a flowchart of a photograph presentation processing method according to an embodiment of the present invention, and as shown in fig. 1, the photograph presentation processing method includes the steps of:
step S102, a photo is acquired, wherein the photo is acquired from at least one acquisition device distributed in a preset area, and the at least one acquisition device is triggered to shoot by a preset condition.
Alternatively, the predetermined area here may be a scenic spot, such as a park or an amusement park. A scenic spot is a place that provides sightseeing, study, leisure and entertainment for the public; it may charge admission or be free, may be private or public, and may be a natural landscape or an artificial facility.
Alternatively, the at least one capturing device may be a capturing device disposed in the predetermined area.
It should be noted that, in the embodiment of the present invention, a specific setting position of the at least one collecting apparatus is not specifically limited. The at least one acquisition device may be arranged at an entrance to the scenic spot, illustrated with the predetermined area as the scenic spot.
Step S104 identifies a person from the photograph, and identifies identification information for identifying the person from the person, wherein the identification information is unique within a predetermined area.
Optionally, the person may be identified from the photo by performing image recognition, using image recognition technology, on the image captured by the at least one acquisition device.
Optionally, in order to distinguish the persons in the predetermined area, the identification information of the identified persons is unique in the predetermined area.
And step S106, correspondingly storing the photo and the identification information recognized from the photo.
Optionally, the photo and the identification information recognized from the photo are stored in a corresponding manner, so that the photo can be conveniently displayed to the person corresponding to the identification information in the following process.
Step S108, obtaining the identification information of the user.
Step S110, searching the corresponding photo according to the identification information of the user.
And step S112, displaying the searched photos to the user.
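Steps S102 to S112 can be sketched as an in-memory store that saves each photo under every identification recognized in it and later retrieves photos by a user's identification. The recognition step is stubbed out with a callable; in a real system it would be an image-recognition model, and all names here are illustrative assumptions.

```python
class PhotoStore:
    """Minimal sketch of the save-and-lookup flow of steps S102–S112."""

    def __init__(self):
        self._by_id = {}  # identification information -> list of photos

    def ingest(self, photo, identify):
        """S102–S106: save the photo under each ID recognized in it.

        `identify` stands in for the recognition step and returns the
        identification information of every person found in the photo.
        """
        for ident in identify(photo):
            self._by_id.setdefault(ident, []).append(photo)

    def lookup(self, user_id):
        """S108–S112: return the photos saved for a user's identification."""
        return list(self._by_id.get(user_id, []))

store = PhotoStore()
store.ingest("gate.jpg", lambda p: ["visitor-7", "visitor-9"])
store.ingest("lake.jpg", lambda p: ["visitor-7"])
print(store.lookup("visitor-7"))  # → ['gate.jpg', 'lake.jpg']
```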
As can be seen from the above, in the embodiment of the present invention, a photo may be acquired, and then a person may be identified from the photo, and identification information for identifying the person may be identified from the person, where the identification information is unique within a predetermined area; correspondingly storing the photo and the identification information recognized from the photo; after the identification information of the user is obtained, the corresponding photo is searched according to the identification information of the user, so that the searched photo is displayed to the user, and the purpose of automatically displaying the photo of the user in the preset area to the user is achieved.
It is easy to note that, after the photo is acquired, the person is identified from it and the person's identification information is recognized as information corresponding to the photo, so the photo can be saved together with the identification information recognized from it in order to subsequently find the photo based on that identification information. The identification information of the user is then acquired, the photos corresponding to it are found, and those photos are displayed to the user. This achieves the purpose of automatically displaying to the user the photos taken of the user in the predetermined area, and at the same time attains the technical effect of improving the efficiency of displaying those photos to the user.
Therefore, the photo display processing method provided by the embodiment of the invention solves the technical problem in the prior art that photographing a user in a predetermined area and displaying the obtained photos to the user is inefficient.
In an alternative embodiment, identifying identification information from the persona identifying the persona may include: identifying attachments on the person and/or biological characteristics of the person from the person; using the characteristic information of the attached matter and/or the characteristic information of the biological characteristic as identification information for identifying the person; acquiring the identification information of the user may include: acquiring attachments of a user and/or biological characteristics of the user, and taking characteristic information corresponding to the biological characteristics as identification information of the user; wherein the attachment comprises at least one of: apparel, accessories, hand held articles; the attachment is used for uniquely identifying the person in a predetermined area; the biometric characteristic of the person includes one of: facial features, body posture features.
Alternatively, the attachment may be an accessory worn by the user, a garment worn by the user, or the like, for example the user's clothing, accessories or hand-held items. FIG. 2(a) is a schematic view of an accessory according to an embodiment of the present invention, where the accessory shown in FIG. 2(a) is a hat; FIG. 2(b) is a schematic view of an alternative accessory according to an embodiment of the present invention, where the accessory shown in FIG. 2(b) is a bracelet; FIG. 2(c) is a schematic view of an alternative accessory according to an embodiment of the present invention, where the accessory shown in FIG. 2(c) is a necklace.
Additionally, FIG. 3 is a schematic view of a hand-held article according to an embodiment of the invention; the article shown in FIG. 3 is a toy pistol.
Since the facial features and the posture features of each person may be different, in the embodiment of the present invention, the identification information may also be biometric information of the person, such as the facial features and the posture features of the person.
In an alternative embodiment, recognition of the photo may be implemented by an image feature generator. In the embodiment of the present invention, the image feature generator extracts a feature block from the photo using feature recognition technology and generates the feature according to a preset rule. The image feature generator may generate the identification features through a specific algorithm in dedicated software, may extract identification feature blocks from the image using recognition technology, and may train a feature extraction model with a convolutional neural network to generate the identification features.
According to its purpose, the feature recognizer is divided into an identification feature generator and an additional feature generator. The identification feature generator recognizes and generates identification features, which are used to distinguish and confirm the identity of a tourist (i.e., a person or user) so that the media resources the tourist wants to obtain can be provided accurately; the additional feature generator recognizes and generates additional features, which provide a basis for the tourist to filter, preview and sort media resources.
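The "extract a feature block, then generate the feature by a preset rule" flow can be illustrated with a toy stand-in: crop a block from the image data and derive a stable identification feature from it by hashing. A production system would use a trained model (for example, a CNN embedding) rather than a hash; everything here is an illustrative assumption.

```python
import hashlib

def generate_feature(image_bytes, block):
    """Toy image feature generator.

    `block` = (start, end), a byte range standing in for a cropped region
    of the image; the preset rule is a SHA-256 digest of that region,
    truncated to 16 hex characters.
    """
    start, end = block
    crop = image_bytes[start:end]
    return hashlib.sha256(crop).hexdigest()[:16]

img = bytes(range(64))
f1 = generate_feature(img, (8, 24))
f2 = generate_feature(img, (8, 24))
assert f1 == f2  # same block -> same identification feature
print(f1 != generate_feature(img, (0, 16)))  # different block -> different feature
```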
The captured photos and the persons in them are stored correspondingly in the electronic equipment of the predetermined area to facilitate searching and processing. In the embodiment of the invention, the user needs to register before entering the predetermined area.
FIG. 4(a) is a first schematic diagram of a registration interface according to an embodiment of the present invention, FIG. 4(b) is a second schematic diagram of a registration interface according to an embodiment of the present invention, and FIG. 4(c) is a third schematic diagram of a registration interface according to an embodiment of the present invention. As shown in FIG. 4(a), the user may make a reservation registration, during which the predetermined area reserved by the user (as shown in FIG. 4(a), "A museum welcomes your arrival!") and the reservation date (as shown in FIG. 4(a), "Your reservation date is: year, month, day") are shown to the user. In addition, as shown in FIG. 4(a), the number of reserved persons is also shown, for example, "You have reserved registration for 3 persons", and the image information of all users registered by reservation is stored in the user avatar information base for subsequent user matching and the like.
Fig. 4(b) shows a schematic diagram of supplementary registration: as shown in fig. 4(b), the name of the predetermined area, the date of the supplementary registration, and the image information of the supplementarily registered person are shown, and the avatar of the supplementarily registered user is also stored in the user avatar library.
A further schematic diagram of supplementary registration is shown in fig. 4(c): as shown in fig. 4(c), registration feedback information is sent to the user when the supplementary registration is submitted.
In the embodiment of the invention, the user registration method comprises: acquiring an image containing identification feature information, extracting the identification feature information from the image, generating identification features, and sending the user's identification features to the identification feature database.
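The registration steps above can be sketched as follows. The in-memory `IdentificationFeatureDB` class and its method names are hypothetical, standing in for the identification feature database; the uniqueness check mirrors the requirement, stated below, that identical features must not be issued twice in the same service period.

```python
class IdentificationFeatureDB:
    """Hypothetical in-memory stand-in for the identification feature database."""

    def __init__(self):
        self.features = {}  # identification feature -> registered user id

    def register(self, user_id, feature):
        # Reject duplicate features so each identifier stays unique
        # within the service period and misjudgment is avoided.
        if feature in self.features:
            raise ValueError("feature already registered")
        self.features[feature] = user_id

    def match(self, feature):
        # Return the registered user for a recognized feature, or None.
        return self.features.get(feature)
```

Note that several features (e.g., front, side, and back views) may all be registered against the same user id, matching the one-to-many association described later.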
The registration operation determines the identity of the visitor during the service process; it is used to screen and sort the image resources and video resources acquired by the capture devices according to the visitor's identity, so that the media resources a visitor wishes to acquire can be provided accurately.
It should be noted that the image containing the identification feature information may be image information acquired by a client capture device; for example, the identification feature information may be acquired by the capture device of a client terminal at the scenic-spot service counter, by the capture device of a scenic-spot self-service kiosk, or by the capture device of a visitor's handheld mobile terminal.
In addition, in the embodiment of the present invention, the client may be a counter thin client, a touch self-service terminal, a desktop computer, a portable computer, a mobile phone, or a tablet computer. The terminal may have a client application installed, a client applet under an application platform installed, or a browser installed through which a web client is accessed. The application client and the web client are collectively referred to as the client in the embodiments of the present invention, and this is not restated below.
The client maintains a data connection with the service support system, receives and displays the image output of the service support system, and sends service requests to the service support system.
The business support system comprises a feature recognizer that can perform feature recognition on images. Feature recognition of images is an existing technique that is already widely used in other fields, such as video surveillance. Image recognition technology is now mature, and recognition efficiency and accuracy continue to improve, whether for recognizing faces or objects in an image. In recent years in particular, with multilayer deep convolution and pooling techniques, the accuracy of machine recognition has reached or even exceeded that of manual recognition.
The identification feature recognizer may recognize wearing information, human body information, or the like in an image (i.e., a photograph).
The wearing information, that is, the attachment, may be a printed pattern such as a chest-card pattern or a transfer-print sticker distributed when the visitor registers; the pattern content may be a dot matrix, stripes, a character mark, an image mark, or the like. For example, a registered visitor named "Tom" may be assigned a chest card printed with the characters "Tom", with the code "T-005", or with the shape of the cartoon cat "Tom". The printed patterns distributed to visitors must be unique within a fixed time period: printed patterns with the same or similar features must not be distributed to different users in the same period, so as to avoid system misjudgment that would impair the accuracy of the media service. It should be noted that a two-dimensional code is not suitable as the printed pattern in a real scenic-spot scenario: first, wherever the code is worn, it detracts from the aesthetics of the captured image; second, two-dimensional codes are usually intended for short-range identification, and their recognition rate at long range is low; third, the daily visitor flow of a scenic spot is limited, so a high-capacity, high-density code such as a two-dimensional code is unnecessary.
In addition, the wearing information may be a chest badge, shoulder strap, hat, cap badge, clothing sticker, barcode bracelet, chain, pendant, walking stick, hand flag, or child's hand-held toy accessory; these may be distributed to visitors at registration, or visitors may register their own accessories. When registering accessories, two or more accessories with the same or similar features must not be registered in the same time period, so as to ensure the uniqueness of visitor identification and avoid system misjudgment that would impair the accuracy of the media service. For example, suppose the service counter stocks many chest badges of the twelve Chinese zodiac signs, and a visitor selects the "Ox" zodiac badge at registration; then an "Ox" badge with the same pattern must not be distributed to other visitors before that visitor's service is finished, ensuring unique identification and avoiding system misjudgment.
Furthermore, the scenic-spot media service provider can customize a large number of printed patterns or accessories in advance, perform feature recognition on them in advance to generate identification features, number the identification features in groups, and store them in the identification feature database of the server in correspondence with their numbers. When a printed pattern or accessory is distributed to a visitor, the user information can be associated with the identification feature of that printed pattern or accessory simply by entering the number manually or scanning a code at the client, thereby completing the visitor's registration or login for the scenic-spot service.
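The pre-coding workflow described above might look like the following minimal sketch, in which the catalogue contents, code format, and function name are invented for illustration: features are generated and numbered in advance, and handing an identifier to a visitor only requires entering its code.

```python
# Pre-generated catalogue built at manufacturing time:
# printed-pattern code -> identification feature (values are hypothetical).
precoded = {
    "T-001": "feat-hat-rabbit",
    "T-002": "feat-badge-ox",
}

# Association filled in when identifiers are handed out.
user_by_feature = {}

def assign_identifier(user_id, code):
    """Link a user to the pre-registered feature behind a printed code,
    completing registration without any on-the-spot image capture."""
    feature = precoded[code]  # correspondence established at pre-coding time
    user_by_feature[feature] = user_id
    return feature
```

When the capture system later recognizes `feat-badge-ox` in a photo, a lookup in `user_by_feature` identifies the visitor directly.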
Finally, the wearing information may be information about the clothes worn by the visitor, used as the visitor's identification information; for example, the combined features of a hat, jacket, trousers, or coat may serve as identification features. In practical scenic-spot applications, visitor clothing suffers from the outfit-clash problem; in particular, for tour groups in uniform clothing there is no possibility of distinguishing visitor identity from clothing information. Clothing information can, however, be applied to photo-studio clients. For example, a photo studio may organize wedding-photo shoots in a scenic area; the studio can collect and register the wedding dresses and formal wear in its store in advance and assign them to the different couples. In a practical scenario, even if the garments the studio assigns to different couples are identical, so that clashing is severe and individual users cannot be distinguished by the feature identifiers, the studio client generally acquires the original media file data as a whole and performs subsequent classification itself, so the imperfect service caused by clashing does not affect the studio client's actual needs.
Fig. 5(a) is a first schematic diagram of clothes registration according to an embodiment of the present invention; as shown in fig. 5(a), the registered clothes can be used as identification information and stored in a clothes identification library. In addition, fig. 5(a) also shows the registration date and the historical scenic spots of other users, as well as an ornament identification library.
Fig. 5(b) is a second schematic diagram of clothes registration according to an embodiment of the invention; as shown in fig. 5(b), when the clothes selected by the user are the same as or close to the clothes selected by someone else, the user is prompted that the clothes cannot be used as identification information.
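One way to implement the "same or close" check of fig. 5(b) is to compare feature strings by Hamming distance and reject near matches; the bit-string encoding and distance threshold below are illustrative assumptions, not values from the patent.

```python
def hamming(a, b):
    """Number of differing positions between two equal-length feature strings."""
    return sum(x != y for x, y in zip(a, b))

def try_register_clothing(new_feat, registered, min_distance=10):
    """Reject clothing whose feature is identical or too close to an
    already-registered one (threshold chosen for illustration)."""
    for feat in registered:
        if hamming(new_feat, feat) < min_distance:
            return False  # prompt: cannot be used as identification information
    registered.append(new_feat)
    return True
```

A rejected registration would trigger the prompt shown in fig. 5(b), asking the user to pick a more distinctive garment.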
The biometric feature mentioned above may be a human face; face recognition is well established in the prior art and has been applied at large scale, so its technical details need not be repeated here. The service provider may also store the user's face information in a cloud server, so that visitors can retrieve it directly from the cloud by entering their registered account at registration, without on-site acquisition. Distinguishing visitor identity through face recognition can shorten the service flow and make it more convenient for visitors to receive services. In practical scenic-spot service, face recognition has the practical problem that the facial features of twins are highly similar and their individual identities cannot be distinguished; however, since twins appearing in a scenic spot are generally served as a family-group client, this problem can be ignored. Note that the use of personal identity information, especially face information, may be subject to privacy laws in certain regions, where such identification may not be permitted.
Since all identification features are stored in the identification feature database, one item of user information can be associated with multiple identification features at registration. For example, when a family group registers at a scenic spot, only one item of user information needs to be registered, while identification feature information is collected for every family member, so that multiple identification features correspond to the same registered user information. As another example, a visitor may have special service requirements: besides frontal images, the visitor may also want media captured from the sides, the back, and other dimensions, in which case customer service staff collect separate identification feature information for the front, both sides, and the back, and these identification features may all correspond to one item of registered user information.
It should be noted that the user logs in to the media service system at the client in order to select desired media files and perform further business operations. The login method comprises: receiving a user login request from the client, receiving event information, generating an event identification feature with the identification feature generator, matching the event identification feature against the identification feature database to generate a matching result, and determining the logged-in user from the matching result. The event information may be image information acquired by the client's capture component, an identification feature code entered at the client, or a user code entered at the client.
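The login flow above reduces to matching the event feature against the stored features. The sketch below uses a nearest-match search over a plain dictionary with an illustrative distance threshold; the data layout and threshold are assumptions, not the patent's implementation.

```python
def login(event_feature, feature_db, max_distance=5):
    """Match an event identification feature against the feature database
    and return the registered user, or None if nothing is close enough.
    Features are equal-length bit strings (an illustrative encoding)."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    best_user, best_dist = None, max_distance + 1
    for feature, user in feature_db.items():
        d = hamming(event_feature, feature)
        if d < best_dist:
            best_user, best_dist = user, d
    return best_user
```

A `None` result would prompt the client to fall back to another event type, such as a manually entered user code.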
Fig. 6(a) is a schematic diagram of a user login interface according to an embodiment of the present invention. As shown in fig. 6(a), the login interface displays the name of the predetermined area and the login categories (e.g., individual user, family user, team user, supplementary-registration user, member user). When the user logs in, the user is further identified; for example, a badge worn by the user can be recognized, and when recognition succeeds, the badge, a success message, and the identification code are displayed.
Fig. 6(b) is a schematic diagram of a user registration interface according to an embodiment of the present invention; as shown in fig. 6(b), registration may be implemented by face entry.
Fig. 6(c) is a schematic diagram of a user login interface according to an embodiment of the present invention; as shown in fig. 6(c), the user may choose to enter registration or to view the predetermined area.
FIG. 7(a) is a diagram of the interface presented to an individual user according to an embodiment of the present invention; as shown in FIG. 7(a), it shows the current number of users, the users' chest-card numbers, the package selected by the users, the photos selected by the users, the videos selected by the users, and the like.
FIG. 7(b) is a diagram of the interface presented to team users according to an embodiment of the invention; as shown in FIG. 7(b), it shows the current number of users, the package selected by the users, the photos selected by the users, the videos selected by the users, and the like.
FIG. 7(c) is a diagram of the interface for adding a user according to an embodiment of the present invention; as shown in FIG. 7(c), it shows the type of the current user and prompts the added user to place the identification object in the camera capture area for recognition.
FIG. 7(d) is a diagram of the interface for user operation selection according to an embodiment of the present invention; as shown in FIG. 7(d), the user may select a required operation, that is, the user may select a type of photo product, for example, an electronic photo, a paper photo, a music album, a travel picture album, and the like.
Methods for the user to log in to the media service system also include manual entry and scanning a two-dimensional code with a mobile terminal; however, in the practical application scenario of a scenic spot, logging in by acquiring identification feature information with the client's image capture device brings visitors a better service experience.
Pre-coding the identification feature article and entering the code at registration or login is, in essence, also registration or login by image feature recognition: the pre-coding process first performs image feature recognition on the identification feature article and establishes a correspondence between the generated identification feature and the pre-assigned code.
Presetting an account through the network for registration or login is likewise, in essence, registration or login by image feature recognition: image feature recognition must be performed on identification features such as the face in the registered account, and the generated identification features are stored in the network in correspondence with the account.
The image data and video data acquired by the capture devices need to be classified and screened by features before a sorted display interface satisfactory to visitors can be presented.
According to the above embodiment of the present invention, in step S110, after the user's identification information is acquired, searching for the corresponding photos according to the user's identification information may include: searching for the feature information of one or more persons corresponding to the identification information according to the user's identification information; and searching for the photos of the one or more persons according to their feature information, as the photos corresponding to the user's identification information.
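The two-step search of step S110 can be sketched as a pair of dictionary lookups; the data layout (identification information → person feature info → photos) and the names below are assumptions for illustration.

```python
def find_photos(identification_info, persons, photo_index):
    """Step S110 sketch: resolve identification information to the feature
    info of one or more persons, then gather those persons' photos."""
    features = persons.get(identification_info, [])  # one-to-many mapping
    photos = []
    for feature in features:
        photos.extend(photo_index.get(feature, []))
    return photos
```

A family registration illustrates the one-to-many case: a single identification info entry resolves to every family member's feature info, and the result collects all of their photos.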
In an alternative embodiment, in the case where the sticker on the person includes a white area, storing the photograph in correspondence with the identification information recognized from it may include: adjusting the white balance of the photograph according to the white area; and storing the adjusted photograph in correspondence with the identification information recognized from it.
In the embodiment of the invention, once the feature subject person is identified, the capture system can perform white balance adjustment and focusing on that subject. The white balance adjustment information and focus information for the identified subject person may be stored in the image index database as additional feature information.
In the embodiment of the invention, white balance adjustment of an image is based on the color temperature of a white block in the image. In a real scene, the same white block has different color temperatures under different lighting conditions, so the produced image shows color cast and the captured colors of the subject are not true to life. In professional model photography, the model usually holds a white balance sample card during shooting, and subsequent white balance adjustment based on the white-block color temperature in the card restores a true-color image. If a white sample card were instead fixed in the scenic spot's capture area in advance, the spectral characteristics of the illumination at the card would in practice still differ from those at the subject area, again impairing the subject's color reproduction.
In addition, the embodiment of the invention also provides a method of white balance adjustment for the scenic-spot media service. Specifically, at pre-registration the user receives a printed image identifier or accessory identifier at the scenic-spot service counter, and the identifier is manufactured with a white block area. After capturing an image, the capture device extracts the color temperature value of the white block area in the identifier and adjusts the image's white balance using that value as the reference.
For example, when the user registers and receives the printed image identifier or accessory identifier in advance, the color temperature value of the white block area can be matched to the user's skin color, so that the captured image presents the best imaging effect for that user. For instance, a skin-color sample chart is made, and white blocks with different color temperatures are selected for the different skin colors in the chart. A user with darker skin is assigned a white-block identifier with a correspondingly darker color temperature at pre-registration; after the capture device obtains the user's image, white balance is adjusted with the captured color temperature of that white block as the reference, and the adjusted image presents a skin-brightening effect, meeting the user's consumption demand to the greatest extent and improving the service experience.
In addition, when multiple white block areas exist in the same captured picture, the white balance processing unit needs to perform one white balance adjustment for each white block area and store the resulting images separately. For example, when a tour group takes a group photo, a single white balance adjustment obviously cannot produce an excellent result for every member, because the members' skin colors differ. In this scheme, each team member is pre-assigned and wears a white-block identifier matched to their skin color; after the image is captured, an optimal white balance adjustment is generated for each white-block identifier and a separate image is produced, satisfying each member's image optimization requirement.
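The per-white-block adjustment can be sketched as follows: channel gains are computed so the sampled white block becomes neutral, and one separately adjusted image is produced per block. The flat pixel-list layout and function names are simplifications for illustration.

```python
def white_balance(pixels, white_region):
    """White-patch adjustment: scale each channel so the sampled white
    block averages to neutral white. `pixels` is a list of (r, g, b)
    tuples; `white_region` is a list of indices into it."""
    n = len(white_region)
    avg = [sum(pixels[i][c] for i in white_region) / n for c in range(3)]
    gains = [255.0 / max(a, 1e-6) for a in avg]
    return [tuple(min(255, round(v * g)) for v, g in zip(px, gains))
            for px in pixels]

def per_member_images(pixels, white_regions):
    # One separately adjusted image per detected white block, as for a
    # group photo where each member wears a skin-tone-matched white block.
    return [white_balance(pixels, region) for region in white_regions]
```

For a group photo with three white-block identifiers, `per_member_images` returns three images, each optimized around one member's identifier.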
More service functions can be implemented using skin-color samples. For example, when face information is used as the identification feature scheme, a skin color value may be collected to set the white balance adjustment parameters. The visitor's facial skin color value can be entered into the support system in advance by comparison against the skin-color sample chart; the support system can then estimate, from the facial color temperature obtained in the image and the recorded skin color parameter, the color temperature value of a virtual white balance block, and adjust white balance with that value as the reference. With the facial skin color parameters entered in advance, the support system can also perform precise beautification according to those parameter values; for example, for a white-skinned user who prefers a wheat complexion, the support system can automatically generate an imaging effect with darker skin according to the pre-entered skin color information.
Fig. 8 is a schematic diagram of white balance according to an embodiment of the present invention. As shown in fig. 8, a skin-color comparison chart is shown, through which white balance for a user can be achieved; the figure also illustrates the process: for example, the user can select a white balance white-block chip by reference to the skin-color comparison chart and insert the white block into the reserved area of the chest card, so that a finer artistic work can be presented to the user.
In practical application, images can also be obtained by continuous shooting. During continuous shooting, accurate focusing on the positions of different persons in the capture area can be achieved through the focus control unit, improving the shooting effect. In the continuous-shooting focusing step, identification feature information in the image is detected, and the focus control unit uses the identification feature area as the focus for capturing subsequent images.
If multiple identification feature areas are detected in the viewing area, focus shooting is preferably performed with a not-yet-focused area as the focus. By detecting the identification feature areas and changing the focus position, the multiple persons in the viewing area are focus-shot in turn, meeting the service needs of different visitors.
Alternatively, if multiple identification feature areas are detected in the viewing area, the areas whose features can be matched in the identification feature database are preferentially selected as the focus, so that focus shooting is performed only for pre-registered clients, optimizing the resource utilization of the focus control unit.
Optionally, in the case where the at least one capture device shoots a video triggered by the trigger condition, obtaining the photos includes: extracting predetermined frames from the video as photos; and/or displaying the retrieved photos to the user while displaying part or all of the video to the user.
In the embodiment of the present invention, there are three photo generation methods: generating images by snapshot, generating images by continuous shooting, and extracting images from video frames; images can be extracted from video frames either from the video stream or according to the video index. After the images are obtained, identification feature information and additional feature information are recognized in them, the generated identification features and additional features are stored in the image index database, and the user can accurately obtain the desired images by searching the image index database.
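Extracting photos "according to the video index" amounts to mapping the time nodes where a user's identification feature appears to frame numbers. The sketch below assumes a dictionary-shaped index and a known frame rate; real extraction would then read those frames from the stored video file.

```python
def photo_frames_for_user(feature, video_index, fps):
    """Look up the time nodes (seconds) at which an identification
    feature appears in the video index, and convert them to the frame
    numbers from which photos can be extracted."""
    times = video_index.get(feature, [])
    return [round(t * fps) for t in times]
```

With a library such as OpenCV, each returned frame number could then be seeked to and decoded as a still photo.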
The steps of generating an image and its image index are: acquire the image, extract the identification feature information in the image, generate the identification features, store the image, and send the image's identification features to the image index database.
In practical application scenarios, a desired image can be extracted from a real-time video stream, for example, the stream generated by a scenic-spot monitoring camera. An image extracted from the video stream must undergo feature recognition, extracting both the identification features that distinguish users and the additional features that provide a basis for display, screening, and sorting.
The additional feature information here may be the image capture device identifier, the capture time, and the person's facial expression information, gesture information, position information, eye information, defocus information, sharpness information, focus information, and white balance sampling information.
For example, in an actual scenic-spot shooting scene, the capture device is fixed, so its shooting angle and capture area are determined; within the capture area there are key sub-areas in which visitors obtain an excellent shooting effect. The key areas are calibrated manually, and visitors entering a key area undergo additional feature recognition covering facial expression, gesture, position, eye, defocus, sharpness, and similar information. Features are collected for all persons in the key area of the picture, so that the system can retrieve all photos of a given user and recommend, sort, and display them.
Alternatively, in an actual scenic-spot shooting scene, additional feature recognition is performed only for registered users: identification features of the subject persons are matched before additional feature recognition, and no additional features are collected for users who are not pre-registered, reducing the system load of the additional feature recognizer.
For the above video, the embodiment of the present invention further provides a method for generating a video index: continuously acquire video frames, capture the time node information of each frame to generate a time feature, capture the identification feature information in the frame to generate identification features, send the time features and identification features of the frames to the video index database, and establish the correspondence between time features and identification features in the index database. Additional feature information is also captured in the extracted frame images to generate additional features of those frames, which are sent to the video index database, where the correspondence between time features and additional features is established. The additional features of a frame in a video have the same content as the additional features of an image and are not described again here.
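The video-index generation steps above can be sketched as a single pass over timestamped frames; the `recognize` callback stands in for the identification feature recognizer, and the dictionary layout is an assumption for illustration.

```python
def build_video_index(frames, recognize):
    """Sketch of the video index generator: for each (time, frame) pair,
    capture identification features via `recognize` and record the time
    nodes at which each feature appears."""
    index = {}
    for t, frame in frames:
        for feature in recognize(frame):
            index.setdefault(feature, []).append(t)
    return index
```

The resulting mapping (feature → list of time nodes) is exactly what the segment-extraction rules described below consume.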
According to where the video index generation device is arranged, it may be placed in a capture-side server or in the support-system-side server. When placed in the capture-side server, the video stream from the capture device is transmitted to that server, which first performs feature recognition on the stream through the video index generation device and then stores the video file; in practice the capture-side server may be arranged inside the camera capture device or inside its controller. When the video index generation device is in the support-system-side server, it retrieves the stored video files and then performs feature recognition.
The invention provides a method for extracting video according to the video index. Specifically: receive a video extraction request and the corresponding identification features, retrieve the video index database according to the identification features, obtain the time node information of the video frames containing the identification features, and extract a video segment from the corresponding video file according to the obtained time node information and the video extraction rules.
The identification feature may be a single identification feature or a group of identification features. For example, if a tour group's members each want only their own video clips, extraction is set to a single identification feature; if the group consists of members of the same organization and the organization wants a video segment containing all members, extraction is set to the whole group of features.
In the embodiment of the present invention, the video extraction rule may include the following, and of course, may also include other types of extraction rules.
First, the video extraction rules include a rule that sets the video extraction start frame a fixed time interval from the first occurrence of the identification feature. So that the extracted clip presents the best effect: setting the start frame a fixed time interval before the first appearance of the identification feature extracts the clip from before the feature's subject person enters the frame; setting the start frame a fixed time interval after the first appearance extracts the clip from when the subject person has reached a better position in the viewing area; setting the start frame at the first appearance itself extracts the clip from the first moment the subject person is recognized. The start frame must be set differently for different viewing-point positions and for different presentation requirements.
Second, the video extraction rules include a rule that sets the video extraction end frame a fixed time interval from the last occurrence of the identification feature. So that the extracted clip presents the best effect: setting the end frame a fixed time interval after the last appearance of the identification feature ends the clip after the feature's subject person has left the frame; setting the end frame a fixed time interval before the last appearance ends the clip as the subject person leaves the better position of the viewing area; setting the end frame at the last appearance itself ends the clip at the last moment the subject person is recognized. The end frame must likewise be set differently for different viewing-point positions and presentation requirements.
Third, the video extraction rules also include a rule that sets the end frame a fixed time interval after the start frame. With this rule, short videos of fixed duration can be generated, which simplifies system management and pricing.
Fourth, the video extraction rules include a frame-skipping extraction rule. With this rule, a clip can be presented in fast-forward, shortening its duration and adding to its artistic effect.
Fifth, the video extraction rules include a repeated-extraction rule. With this rule, highlight segments can be replayed, or presented in slow motion, adding to the artistic effect of the clip.
Sixth, the video extraction rules include a rule for extracting multiple regions corresponding to the identification features of a video frame and splicing them into one video frame image. With this rule, several frame pictures containing the feature subject persons are spliced into a single frame picture, improving the artistic effect of the clip.
Seventh, the video extraction rules include rules for extracting video frame images across multiple video files. With this rule, clips from different viewing points can be merged into one clip, improving the presented artistic effect.
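The first, second and fourth rules above can be sketched as simple operations on the time nodes returned by the index. This is a minimal illustration under assumed parameter names (`lead`, `tail`, `step`); the patent does not fix these values:

```python
def extract_segment(time_nodes, lead=2.0, tail=2.0, video_start=0.0, video_end=None):
    """Start a fixed interval before the feature's first appearance and end
    a fixed interval after its last appearance, clamped to the video file."""
    if not time_nodes:
        return None
    start = max(video_start, min(time_nodes) - lead)
    end = max(time_nodes) + tail
    if video_end is not None:
        end = min(end, video_end)
    return (start, end)

def skip_frames(frame_times, step=2):
    """Frame-skipping rule: keep every `step`-th frame for a fast-forward effect."""
    return frame_times[::step]
```

Negative `lead`/`tail` values would correspond to starting after the first appearance or ending before the last one, the other variants named in rules one and two.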
In addition, the embodiment of the invention also provides a method for extracting images through the video index, which specifically comprises: receiving an image extraction request and the corresponding identification feature; searching the video index database according to the identification feature; acquiring the time node information of the video frames containing the identification feature; and extracting frame images from the corresponding video files according to the acquired time node information.
Optionally, the predetermined condition is that at least one of the following information is detected to exist in the person in the predetermined area: gesture information, mouth shape information, body shape information.
In the actual application process, images can be acquired in a snapshot manner. The embodiment of the invention also provides a method for triggering the snapshot: specifically, the trigger signal generator generates a trigger signal by detecting trigger feature information in the video frames of the video stream, and a delay time is set between trigger signal generation and the start of the snapshot. The trigger feature information is at least one of person gesture information, mouth shape information and body shape information in the key area. For example, the scenic-spot service provider defines the trigger features in advance; a trigger feature may be an OK gesture, a ring-finger gesture, a clapping gesture, the mouth shape of pronouncing "OK", a standard standing posture, and the like. When a tourist makes an OK gesture in the acquisition area, the trigger signal generator detects it and generates a trigger signal, and after the fixed delay time the acquisition device starts the snapshot operation.
Specifically, after the trigger signal is generated, the trigger feature information in the image is detected, and the focus control unit takes the region of the trigger feature information as the focus for the snapshot, thereby realizing accurately focused shooting of the triggering subject.
In addition, the light supplementing unit is turned on within the set delay time after the trigger signal is generated, providing illumination compensation for the specific position of the viewing area and producing a better image effect.
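The trigger-then-delay behaviour above can be sketched as follows. The feature names and the delay value are assumptions for illustration; the patent only requires that the trigger features are predefined and that the snapshot fires a fixed delay after the trigger:

```python
# Trigger features predefined by the scenic-spot service provider (assumed names).
TRIGGER_FEATURES = {"ok_gesture", "ring_finger_gesture", "clap_gesture",
                    "ok_mouth_shape", "standard_standing_posture"}

def plan_snapshots(frame_events, delay=1.5):
    """frame_events: list of (timestamp, detected_feature_or_None), one per
    video frame. Returns the timestamps at which the acquisition device
    should fire a snapshot: one per trigger, each a fixed delay after the
    trigger feature was detected."""
    return [ts + delay for ts, feature in frame_events if feature in TRIGGER_FEATURES]
```

A real implementation would also engage the focus control unit and the light supplementing unit within the same delay window.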
According to the above embodiment of the present invention, displaying the searched photos to the user may include: sorting part or all of the photos when the number of searched photos exceeds a preset number; and displaying part or all of the sorted photos to the user.
The embodiment of the invention also provides a method for extracting photos displayed in the scenic spot according to an image index, which specifically comprises: obtaining the image retrieval request and the user information; querying all identification features corresponding to the user information in the identification feature database; retrieving the image index database according to the identification features to generate image index data; extracting the corresponding images according to the image index data; and sending the extracted images to the display unit.
After a client logs in with user information, all identification features corresponding to the user in the feature database are queried and extracted, the image index database is retrieved according to the identification features, the corresponding images are extracted according to the image indexes generated by the image index database, an image display ordering is generated according to the additional features and a preset display ordering rule, and the image display unit displays the extracted images in that order. The displayed image may be a low-quality thumbnail, which prevents a user from obtaining a medium-quality image at the network client by taking a screenshot.
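The query-then-sort pipeline described above can be sketched as follows. This is a minimal illustration; the data-store shapes and the function name are assumptions, and the ordering key stands in for whichever display ordering rule is configured:

```python
def photos_for_user(user_id, feature_db, image_index, order_key):
    """Query every identification feature registered to the user, collect the
    indexed photos for each feature, de-duplicate, and sort them according to
    the configured display ordering rule (represented here by order_key)."""
    photos = set()
    for feature in feature_db.get(user_id, []):
        photos.update(image_index.get(feature, []))
    return sorted(photos, key=order_key)
```

For a group (team) user, `feature_db` would map the login account to the identification features of every member, so one query returns photos for the whole group.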
The display ordering rules may be various and are described in detail below.
First, the display ordering rules include a rule of ordering by acquisition device. Different acquisition devices represent different viewing points; ordering by the popularity of the viewing points brings a better presentation effect to tourists.
Second, the display ordering rules include a rule of ordering by service network point. Images collected in different scenic spots are stored uniformly on the cloud server; ordering them by service network point enables cross-scenic-spot service for travelers.
Third, the display ordering rules may include a rule that at most N items are displayed preferentially for each acquisition device. A tourist may generate many images on the same acquisition device, and displaying all of them makes selection difficult and degrades the service experience. Setting a rule that at most N top-priority images are presented per acquisition device reduces the options, improves the tourist's selection efficiency, and improves the service experience.
Fourth, the display ordering rules include a rule that different identification features are ordered in turn. Group tourists need an image-display opportunity for each member, which reduces missed selections for group users.
Fifth, the display ordering rules include a rule giving priority to optimized, processed composite images. Images are optimized through various image processing templates to produce a better presentation effect; for example, adding a text foreground, pattern foreground, expression foreground, virtual background, background replacement, multi-image splicing, art filters and the like can enhance the presentation effect and bring a higher service experience to tourists.
Sixth, the display ordering rules include a rule of ordering by sharpness. Images with a good imaging effect and high sharpness are prioritized, improving the service experience.
Seventh, the display ordering rules include a rule of preferentially displaying images in which the eye state of the feature subject person is open, or, for the same logged-in user, preferentially displaying the image in which the largest number of identified feature subject persons have their eyes open. In actual scenic-spot applications, the same image may contain many people with different eye states. When displaying to a user, only images in which the feature subject person's eyes are closed are screened out; for a group user, the image in which the largest number of feature subject persons under the logged-in user have their eyes open is displayed, and the closed-eye images are screened out.
Eighth, the display ordering rules include a rule of displaying images in which the feature subject person is in the preset best position first and the second-best position next, or, for the same logged-in user, applying that rule to each identified feature subject person. The same viewing area can be divided into blocks, and the photographed subject person produces different effects at different positions. The blocks are ranked by priority, and images are ordered according to the priority of the subject person's position, improving the tourist's selection efficiency and the service experience.
Ninth, the display ordering rules include a rule of ordering by the expression richness of the feature subject person, or, for the same logged-in user, by the expression richness of each identified feature subject person. Using expression richness as an ordering basis screens out images with stiff expressions and preferentially presents the wonderful moments with rich facial expressions, improving the service experience.
Tenth, the display ordering rules include a rule of ordering by the focusing accuracy of the feature subject person, or, for the same logged-in user, by the focusing accuracy of each identified feature subject person. Images in which the feature subject person's region is accurately focused are prioritized, improving the service experience.
Eleventh, the display ordering rules include a rule of preferentially displaying images whose white balance sampling area lies in the feature subject person's region. Images white-balanced against the white sampling block worn by the feature subject person are prioritized, preferentially providing the highest-quality image data to tourists and improving the service experience.
Twelfth, combined display ordering of multiple rules is realized through the combination of display ordering rules. By screening and ordering through multiple rules, the most beautiful photos that tourists need are displayed first, improving the tourists' selection efficiency and service experience.
Thirteenth, a comprehensive scoring model is established by setting scoring standards for the scoring items, and the display ordering rules include a rule of displaying according to the score. Through this scoring method, the most beautiful photos that tourists need are displayed first, improving the tourists' selection efficiency and service experience.
For example, separate scoring standards are established for the acquisition device, the image synthesis template, the sharpness, the eye state of the feature subject person, the expression richness of the feature subject person, the focusing accuracy of the feature subject person and the white balance state of the feature subject person, and calculation weights are set, as shown in Table 1 below:
TABLE 1
(Table 1, listing the scoring standard and calculation weight of each scoring item, is provided as an image in the original publication.)
For example, a tourist acquires image A and image B at the scenic-spot gate, and the acquisition-device scores of images A and B are both 70; images C and D are acquired in front of the scenic-spot landmark building, and their acquisition-device scores are both 100. In the post-beautification process, no synthesis module is used for image A, whose image synthesis template score is 60; image B uses an image synthesis template scored 70; image C uses a template scored 80; image D uses a template scored 100. Under feature recognition judgment, image A has a sharpness score of 60, an eye state score of 0, an expression richness score of 70, a focusing accuracy score of 80 and a white balance state score of 50. Image B has a sharpness score of 80, an eye state score of 100, an expression richness score of 70, a focusing accuracy score of 100 and a white balance state score of 100. Image C has a sharpness score of 80, an eye state score of 100, an expression richness score of 90, a focusing accuracy score of 90 and a white balance state score of 100. Image D has a sharpness score of 100, an eye state score of 100, an expression richness score of 50, a focusing accuracy score of 100 and a white balance state score of 100. The specific scores are shown in Table 2 below:
TABLE 2
Item                                           A image  B image  C image  D image  Weight
Acquisition device                             70       70       100      100      1
Image synthesis template                       60       70       80       100      2
Sharpness                                      60       80       80       100      1
Feature subject person eye state               0        100      100      100      5
Feature subject person expression richness     70       70       90       50       0.5
Feature subject person focusing accuracy       80       100      90       100      0.5
Feature subject person white balance state     50       100      100      100      1
Taking as the comprehensive score calculation model each item score multiplied by its weight and then averaged over the number of items, the scoring results of the four images are respectively:
a image score (70 × 1+60 × 2+60 × 1+0 × 5+70 × 0.5+80 × 0.5+50 × 1)/7 ≈ 53
B image score (70 × 1+70 × 2+80 × 1+100 × 5+70 × 0.5+100 × 1)/7 ≈ 139
C image score (100 × 1+80 × 2+80 × 1+100 × 5+90 × 0.5+100 × 1)/7 ≈ 147
D image score (100 × 1+100 × 2+100 × 1+100 × 5+50 × 0.5+100 × 1)/7 ≈ 154
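The weighted-average model used in Table 2 can be sketched directly. The item keys are assumed names for the seven scoring items; the weights are those given in Table 2 (image A totals 375/7 ≈ 53.6 and image D totals 1075/7 ≈ 153.6):

```python
# Weights from Table 2 (item names are assumptions for illustration).
WEIGHTS = {"device": 1, "template": 2, "sharpness": 1, "eyes": 5,
           "expression": 0.5, "focus": 0.5, "white_balance": 1}

def composite_score(scores, weights=WEIGHTS):
    """Multiply each item score by its weight, then average over the
    number of scoring items, as in the example above."""
    return sum(scores[item] * w for item, w in weights.items()) / len(weights)

# Item scores of images A and D from Table 2.
image_a = {"device": 70, "template": 60, "sharpness": 60, "eyes": 0,
           "expression": 70, "focus": 80, "white_balance": 50}
image_d = {"device": 100, "template": 100, "sharpness": 100, "eyes": 100,
           "expression": 50, "focus": 100, "white_balance": 100}
```

Images would then be displayed in descending order of `composite_score`.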
The display method includes a method for displaying the image scoring results. In the same manner as the images are displayed, the system score of each image is marked, adding one more selection basis for tourists when choosing images and improving their service experience.
It should be noted that the embodiment of the present invention further provides a method for extracting and displaying videos in the scenic spot according to the video index. Specifically, a video retrieval request and user information are obtained; all identification features corresponding to the user information in the identification feature database are queried; the video index database is retrieved according to the identification features to generate video index data; the corresponding video clips are extracted from the videos according to the video index data; and the covers of the extracted videos are sent to the display unit. A video cover may be static or dynamic: a static cover is a frame image extracted from the video clip, and a dynamic cover is a preview video extracted from the video clip.
In the embodiment of the present invention, the presentation ordering rule may include a plurality of rules, which are described in detail below.
First, the display ordering rules include a rule of ordering by acquisition device. Different acquisition devices represent different viewing points; ordering by the popularity of the viewing areas brings a better presentation effect to tourists.
Second, the display ordering rules include a rule of arranging acquisition devices in groups. The acquisition devices are grouped by scenic-spot area and arranged by group, making it convenient for tourists to preview group by group.
Third, the display ordering rules include a rule of ordering by service network point. Videos collected in different scenic spots are stored uniformly on the cloud server; ordering them by service network point brings a better service experience to travelers.
Fourth, the display ordering rules include a rule giving priority to post-synthesis processed videos. Videos are optimized through various video processing templates to produce a better presentation effect; for example, adding a text foreground, pattern foreground, expression foreground, virtual background, background replacement, multi-picture splicing transitions, art filters, enhanced special effects and the like can strengthen the presentation effect and bring a higher service experience to tourists.
Fifth, the display ordering rules include priority rules for frame-skip-extracted videos, repeatedly extracted videos and spliced videos. With these rules, videos with a better presentation effect are presented first, improving the tourists' service experience.
Sixth, the display ordering rules may include a rule that at most N items are displayed preferentially for each acquisition device. A tourist may generate many videos on the same acquisition device, and displaying all of them makes selection difficult and degrades the service experience. Setting a rule that at most N top-priority videos are presented per acquisition device reduces the options, improves the tourists' selection efficiency, and improves the service experience.
Seventh, a comprehensive scoring model is established by setting scoring standards for the scoring items, and the display ordering rules include a rule of displaying according to the score. Through this scoring method, the most beautiful videos that tourists need are displayed first, improving the tourists' selection efficiency and service experience. The implementation may refer to the image scoring presentation described above.
The display method includes a method for displaying the video scoring results. In the same manner as the video covers are displayed, the system score of each video is marked, adding one more selection basis for tourists when choosing videos and improving their service experience.
To suit the characteristics of the scenic-spot shooting service, the charging method of the media resource service adopts a charging mode based on service content, and different charging rules are customized for flexible and changeable charging requirements. That is, the invention can customize different charging formulas according to the charging rules, realizing flexible combinations of charging strategies, package strategies, discount strategies, exemption strategies, surcharge strategies and the like, and is easy to extend.
In the charging method of the media resource service in the embodiment of the invention, one or more charging rules are customized according to the different charging requirements of the media service types to form a charging rule pool. When customizing the charging rules, the following factors are considered:
A. Shot videos and photos have different charging strategies.
B. Different shooting cameras and shooting positions have different charging strategies.
C. Media provided by the service provider and media self-shot by the user have different charging strategies.
D. Registered member users and ordinary registered users have different charging strategies.
E. Different membership levels have different charging strategies.
F. Pre-registered users and users who register afterwards have different charging strategies.
G. Single users, couples, family user groups and team user groups have different charging strategies.
H. Different user referral channels have different charging strategies.
I. Single-item charging is supported (including a single video, a single photo and a single electronic album).
J. Different delivery modes have different charging strategies (such as photo printing, customized travel-log albums, copying to media and network sharing).
K. Different post-processing has different charging strategies (such as beautification, adding filters and synthesizing music albums).
L. Different acquisition periods have different charging strategies (for example, acquisition at night requires extra means such as illumination compared with daytime).
M. Package charging strategies are supported (such as free packages and 5-yuan, 10-yuan, 50-yuan and 100-yuan packages). A package may contain a fixed number of specific service items; for example, a 100-yuan package contains X photos, Y short videos, a synthesized electronic album, a travel album and other service items. A package may instead contain a maximum aggregate amount; for example, a 100-yuan package covers photo and video service items whose aggregate cost is less than 200 yuan.
N. Personalized user charging is supported; different users can select different tariffs or charging strategies according to their own situations.
O. Capped fees are supported.
P. Advertisement-exemption charging strategies are supported; for example, when a music album with a built-in scenic-spot advertisement is shared on the network, a certain exemption is granted.
Q. Point-deduction strategies are supported; points obtained after consumption in different scenic spots can be deducted from the charge.
R. Activity discount strategies are supported; a corresponding exemption range is set during promotional activities.
S. Reward discount strategies are supported; a self-shot photo or video provided by the user with a particularly excellent effect is rewarded with an exemption.
T. A combined settlement charging strategy is supported for the historical list of media resource information of users who have not yet settled.
U. Discount and exemption strategies limited to a fixed number of uses per consumption are supported (for example, one consumption may enjoy only one discount or one exemption).
V. Cross-scenic-spot charging strategies are supported; a user can settle charges for other scenic spots within one scenic spot.
W. Obtaining the historical list of unsettled applied media resource information from the cloud server and merging the charging and settlement services is supported.
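The combination of basic charges, coefficient rules (discounts and surcharges) and exemption rules described by these factors can be sketched as follows. This is a minimal illustration only; the function name, the fen-denominated amounts and the parameter names are assumptions, not part of the patent:

```python
def settle(base_fees_fen, discount_coefficients=(), exemptions_fen=(), cap_fen=None):
    """Sum the basic charges, apply each discount/surcharge coefficient in
    turn, subtract the exemption amounts (never going below zero), and apply
    an optional fee cap. All amounts are in fen (1 yuan = 100 fen)."""
    total = float(sum(base_fees_fen))
    for c in discount_coefficients:
        total *= c  # e.g. 0.5 for a 50% discount, 1.2 for a 20% surcharge
    total -= sum(exemptions_fen)  # activity, point, advertisement, reward exemptions
    total = max(total, 0.0)
    if cap_fen is not None:
        total = min(total, cap_fen)  # capped-fee strategy (factor O)
    return round(total)
```

A charging rule pool would map each applicable factor to the fees, coefficients and exemptions fed into such a settlement function.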
In a specific implementation of the embodiment of the invention, multiple charging rules are customized according to the above factors. Charging for a service may be implemented by one or more charging rules, and the rules may be combined.
For example, the basic charging rules applied to the scenic-spot photographing service define rules charged in units of the minimum charging factor of a single resource specification, as exemplified in Table 3:
TABLE 3
(Table 3, exemplifying the basic charging rules per single resource specification, is provided as an image in the original publication.)
The package charging rules applied to the scenic-spot shooting service define rules charged in units of a package resource specification; a package contains a fixed number of photos and a fixed number of videos, as exemplified in Table 4:
TABLE 4
(Table 4, exemplifying package charging rules with fixed numbers of photos and videos, is provided as an image in the original publication.)
The package charging rules include charging rules for the maximum aggregate cost of media resources, which may be exemplified as shown in table 5:
TABLE 5
Item              Charging mode  Service content                            Charging amount
9.9-yuan package  Fixed charge   Basic charges aggregating up to 20 yuan    990 (fen)
......            ......         ......                                     ......
88-yuan package   Fixed charge   Basic charges aggregating up to 200 yuan   8800 (fen)
99-yuan package   Fixed charge   Basic charges aggregating up to 300 yuan   9900 (fen)
The coefficient charging rules define rules that apply a coefficient to the accounting of the basic charging rules and the package charging rules; they include discount coefficient charging rules and surcharge coefficient charging rules. The discount coefficient charging rules may be as exemplified in Table 6:
TABLE 6
(Table 6, exemplifying the discount coefficient charging rules, is provided as an image in the original publication.)
The surcharge coefficient charging rules may be as exemplified in Table 7:
TABLE 7
(Table 7, exemplifying the surcharge coefficient charging rules, is provided as an image in the original publication.)
The exemption charging rules define rules that use a deduction quota as the accounting method, as exemplified in Table 8:
TABLE 8
Item                     Charging mode    Calculation range  Coefficient value
Activity exemption       Deducted amount  Total amount       -3000
Point exemption          Deducted amount  Total amount       Per point value
Advertisement exemption  Deducted amount  Total amount       -1000
Reward exemption         Deducted amount  Total amount       -5000
......                   ......           ......             ......
Suppose a pre-registered user registers 5 family members, all of whom are VIP member users; 200 photos and 20 short videos are accumulated on the acquisition devices in the scenic spot; the user wins a prize in the best-expression shooting activity and enjoys a 50% discount; a 99-yuan package is selected; 1000 points (equivalent to 10 yuan in cash) are used at settlement; the user agrees to embed scenic-spot advertisements in the shared electronic music album; and a customized travel album is given free with the 99-yuan package, with the user bearing the express delivery fee. The actual amount charged to the user would then be as exemplified in Table 9:
TABLE 9
(Table 9, itemizing the amount charged in this example, is provided as an image in the original publication.)
Suppose a post-registration user registers 5 family members, all of whom are non-member users; 1 scenic-spot theme electronic photo, 5 scenic-spot hot-spot photos and 2 special-effect short videos are to be extracted from the acquisition devices in the scenic spot; printed paper photos are required, and an electronic music album is to be made without embedded scenic-spot advertisements; extracting the special-effect short videos wins 3000 points in the best-walking-posture shooting activity; and 1 storage medium (USB flash drive) is purchased additionally. The actual amount charged to the user would then be as exemplified in Table 10:
TABLE 10
(Table 10, itemizing the amount charged in this example, is provided as an image in the original publication.)
As shown in Tables 3 to 10 above, when a user applies for media resource information, the charging rules are selected from the charging rule pool in combination with the user's actual requirements.
The embodiment of the invention also provides a charging processing apparatus for the scenic-spot shooting service. The apparatus mainly comprises an application information acquisition module, a charging module and a charging result sending module. The application information acquisition module receives the user information generated by the client and the media resource information applied for by the user; the charging module searches the charging rule pool for the corresponding charging rules according to the user information and the applied media resource information and generates a charging result; and the charging result sending module sends the charging result to the accounting processing system.
In addition, the embodiment of the present invention further provides a charging processing apparatus for the scenic-spot shooting service that is located in the cloud server. It likewise comprises an application information acquisition module, a charging module and a charging result sending module: the application information acquisition module receives the user information generated by the client and the media resource information applied for by the user; the charging module searches the charging rule pool for the corresponding charging rules according to the user information and the applied media resource information and generates a charging result; and the charging result sending module sends the charging result to the accounting processing system.
Furthermore, in an embodiment of the present invention, a service charging apparatus is further provided, where the service charging apparatus includes: one or more processors, a memory, a bus system, a transceiver, and one or more application programs, the one or more processors, the memory, and the transceiver being coupled via the bus system. The one or more application programs are stored in the memory and include instructions which, when executed by the processor of the service charging apparatus, cause the service charging apparatus to perform the service charging method of the above method embodiments. For the specific service charging method, reference may be made to the related description in the above method embodiment, and details are not repeated here.
Through the technical effects of the embodiments of the invention, the flexible and varied charging rules satisfy the needs of users at different levels to the maximum extent and bring greater returns to service providers.
Fig. 9(a) is a schematic diagram of a media service system in an embodiment of the present invention. As shown in Fig. 9(a), acquisition ends are respectively deployed in different scenic spots; the acquisition devices of each acquisition end are connected to an acquisition server, the acquisition server is connected to a local server, and the local server is connected to a cloud server. The photos required by a user are sent through the cloud server to the user terminal, that is, the client.
Fig. 9(b) is a schematic diagram of an alternative media service system according to an embodiment of the present invention. As shown in Fig. 9(b), the system includes the acquisition end, acquisition server, local server and client shown in Fig. 9(a), and further includes cloud servers, which may be located in different regions, for example the European cloud server and the Asian cloud server shown in Fig. 9(b).
The following describes embodiments of the present invention with reference to different scenarios.
Scenario example 1
A family of four (father, mother, daughter and son) travels on an international cruise ship, and each person receives a customized ornament at the customer service counter after boarding. The mother receives a customized bracelet, in which 3 light blocks and 1 dark block form the first identification feature element; the peach-blossom pattern on the second light block in the ordering forms the second identification feature element; the peach-blossom pattern on the dark block forms the third identification feature element; and the three identification feature elements together determine the identification feature of the bracelet. The father receives a customized bead necklace: 1 large bead, 1 medium bead and 5 small beads form the first identification feature element, and the large bead is defined as a white-balance white-block acquisition area; the medium bead is set to a dark color and forms the second identification feature element; the second and third small beads in the ordering are set to a light dark color and form the third identification feature element; the three identification feature elements determine the identification feature of the bead necklace. The daughter receives a customized sun visor, on which a repeating arrangement of leaf and flower patterns constitutes the identification feature of the visor.
The son receives a customized toy gun, on whose left and right sides six groups of the same identification feature information are repeatedly arranged; the identification feature information includes a first identification feature element formed by three five-pointed stars and a ring, with the ring, according to its ordering, forming the further identification feature elements. The ornament can also serve as the billing basis for every consumption activity on the cruise ship: when a tourist makes a purchase, the billing operation of the computer system can be completed simply by presenting the ornament to a camera for identification, realizing a cash-free consumption billing scenario in an environment without a wireless network.
The family of four pre-registers their customized ornaments at the media service counter: the customer service personnel collect and identify the four ornaments one by one through the counter client, and the feature recognizer of the media service support system identifies and generates the identification feature of each ornament and sends it to the identification feature database, completing the guest registration operation. The service personnel then set the four registered ornaments as a family group of clients at the counter client.
Shooting and acquisition devices serving as the monitoring system of the cruise ship are arranged at each main activity area, and collect and store the activities of tourists in the public areas of the ship. Each shooting and acquisition device is connected to the service support system server via a video data link, and the video stream it collects is transmitted to the server in real time for storage and subsequent processing.
The image index generating device of the service support system server acquires video stream images in real time, extracts the identification feature information and the additional feature information in the images, generates the identification features and additional features, stores the images, and sends the identification features and additional features of the images to the image index database. The method also includes judging whether an image contains identification feature information: if so, the image is extracted and stored; if not, the image is not stored.
The video index generating device of the service support system server acquires video frames, captures the time node information of each video frame and generates a time feature, captures the identification feature information in the video frame and generates an identification feature, sends the time feature and the identification feature of the video frame to the video index database, and establishes the correspondence between the time feature and the identification feature in the index database. It also captures the additional feature information in the extracted frame image, generates the additional features of the extracted frame image, sends the additional features of the video frame to the video index database, and establishes the correspondence between the time feature and the additional features in the index database.
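A minimal sketch of the indexing just described, under the assumption that each index record simply pairs a frame's time feature with one identification feature and its additional features (all names are illustrative, not from the specification):

```python
# Hypothetical sketch of the video index generating device: for each
# video frame, record the time feature together with each identification
# feature (and any additional features) captured in that frame, so the
# frame can later be located by identity and time.

def index_frame(frame_time, frame_idents, frame_extras, video_index):
    """Append one index record per identification feature in the frame."""
    for ident in frame_idents:
        video_index.append({
            "time": frame_time,            # time feature of the frame
            "ident": ident,                # identification feature
            "extras": list(frame_extras),  # additional features
        })

def frames_for_ident(video_index, ident):
    """Return the time features of all frames indexed under ident."""
    return [rec["time"] for rec in video_index if rec["ident"] == ident]
```

With such records, extracting a person's video segments reduces to collecting the time features indexed under that person's identification feature.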
The service support system server, as part of the monitoring system, also performs the step of storing the video stream, which is not related to the present invention and is not described in detail.
After the trip ends, the father selects media service content at the service counter and settles the account. The counter customer service collects an image of the father's bead necklace on the counter service machine and identifies its identification feature; through the client interaction interface, the option to extract media data associated with the group information is selected, and the business support system extracts the four identification features corresponding to the group information, namely the identification features of the four ornaments. The business support system queries the image index database and the video index database according to the four identification features to generate image index data and video index data, extracts the images and videos accordingly, and presents them to the client in the display unit according to the sorting rule for selection. Because the father did not carry a mobile storage device such as a USB flash drive while traveling, and the cruise ship has no mobile network at sea, media data cannot be received on a mobile terminal through a mobile network; he therefore selects the 199-yuan package that includes a free USB flash drive, and after settling in cash receives the drive storing the media data in the package.
Scenario example 2
A company organizes 30 employees and their family members to travel to outdoor scenic spot A; the group comprises 15 employees without accompanying family members and 5 three-person families of employees with family members. The person in charge of the company tour collects 30 chest cards, numbered B066-B086, from the service counter of the scenic spot. Each of the numbers B066-B081 corresponds to 1 chest card and is distributed to an employee without family members; each of the numbers B082-B086 corresponds to 3 chest cards, and the 3 chest cards of one number are distributed to the same family, i.e., the three members of a family all use chest cards with the same number. After the service counter staff create and deliver the B066-B086 chest cards to the person in charge of the tour, the 20 identification features of B066-B086 are registered under the user information of group "B5" at the counter client, completing the registration of the tourist group.
The person in charge of the tour also collects skin-color-matching sample patterns from the service counter; each member compares his or her own skin color against the white-balance white block and the skin-color-matching sample patterns, selects the corresponding white-block sample block, and inserts it into the designated area of the chest card.
To better improve the visitor service experience, this outdoor scenic spot has installed dedicated high-definition continuous-shooting acquisition devices in the areas around the scenic hot spots, which can operate continuously without interruption. The acquisition end server automatically identifies the chest cards worn by visitors, automatically stores images that include identification features, and discards images that do not. The acquisition end server can also control the focusing unit of the continuous-shooting acquisition device: after an identification feature is acquired, the focusing unit is controlled to focus and shoot with the area where the identification feature is located as the focal point. The acquisition end server can further adjust the white balance of an image: after the white-block sample area in a chest card is acquired, the white balance of the image is automatically adjusted according to the color temperature of the white-block sample area and then stored. The process by which the acquisition end server generates the image identification features and additional features and sends them to the image index database is not repeated here, nor are the processes of shooting video in the scenic spot and generating the identification features and additional features of the video.
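The white-balance adjustment described above (correcting an image so that the known-white sample block in the chest card renders as neutral) can be sketched as follows; this is a minimal illustration assuming an RGB image array and a known patch location, not the specification's implementation:

```python
import numpy as np

def white_balance_from_patch(image, patch_box):
    """Adjust image white balance using a known-white reference patch.

    image: HxWx3 float array (RGB, values in 0..1)
    patch_box: (y0, y1, x0, x1) region of the white sample block
    """
    y0, y1, x0, x1 = patch_box
    patch_mean = image[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)
    # Scale each channel so the patch becomes neutral gray while
    # preserving its overall luminance.
    gray = patch_mean.mean()
    gains = gray / patch_mean
    return np.clip(image * gains, 0.0, 1.0)
```

A warm color cast, for example, raises the red channel of the white patch; the per-channel gains computed from the patch then pull the whole image back toward neutral.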
To provide images and videos with better imaging effects to visitors, the service support system is provided with a post-processing unit that can intelligently replace specific elements in an image. For example, the chest card worn by a visitor in this embodiment affects the overall aesthetics of the generated images and videos; to reduce this negative effect, the post-processing unit captures the chest card in the image and replaces it. For instance, the replacement may be made intelligently according to the visitor's clothing, removing the chest card from the image; or a fixed pattern may be set for replacement, so that the chest card is replaced with another, more attractive fixed image.
After the tour ends, the person in charge of the tour goes to the service counter and logs in to the group user information of "B5" through the customer service personnel's client. The customer service terminal presents to the person in charge all image data and video data generated during the tour of the scenic spot under the 20 identification features coded B066-B086 belonging to group "B5". The person in charge selects a number of group photos of the employees and individual highlight images, selects the customized photo travel-log service, and orders 25 copies of the "XXX Company XX Excellent Employee Tour Souvenir Book". The fees generated under the group information are settled by the person in charge.
The company's employees and family members captured many wonderful moments in the scenic spot, and each goes to the service hall to select his or her own media data at a self-service counter client. Employee A and family members used chest cards with the same number, B085. One chest card is placed in front of the self-service counter acquisition device, and the counter recognizes its identification feature as B085; the counter interaction interface presents two login options, "group B5" and "individual B085". After the "individual B085" user is selected, the display interface shows all image data and video data generated under the B085 identification feature during the tour of the scenic spot. Employee A selects the corresponding media data and settles according to the package.
Employee Xiao Feng (chest card code B073) obtains at the self-service counter a free package that allows media files to be extracted as an individual user. During the tour, Xiao Feng meets Mr. Hui of another tourist group (chest card code B267) and the businesswoman Ms. Wang (chest card code B875). The three are college classmates and took many wonderful photos together during the tour, and they want to share an electronic photo album made during the trip with their college classmates' social group. After logging in again with his personal user information, Xiao Feng clicks "add user" in the interaction interface; the chest card coded B267 is identified by the self-service counter acquisition device, and the interaction interface presents login options such as "add individual user" and "add group user". After the "add individual user" option is selected, "add user" is clicked again in the interaction interface, and by repeating the above operations the chest card coded B875 is added to the login interface. After these steps are completed, the interaction interface displays all image data and video data generated during the tour under the three identification features coded B073, B267 and B875. After the images and videos of the three are selected, Xiao Feng, Mr. Hui and Ms. Wang choose to obtain electronic photo albums; after settlement, the system generates the electronic photo albums, which are shared in the college classmates' social group in the social software.
In the login process, the counter machine operated by customer service staff can complete a user's login by directly entering the code value, whereas the self-service counter machine has no right to log in by direct code entry: it can only complete a login by acquiring the identification feature information. This prevents malicious operations by others who might otherwise log in to another person's account with a simple operation.
Scenario example 3
One major business of photography companies is shooting wedding photos for newly married couples; to obtain better imaging effects, outdoor shoots are organized from time to time, taking the couples to scenic spots, parks and similar locations.
The outdoor shooting locations of the wedding photo studios in a certain city are generally the artificial lake park, the city center park, the amusement park, the XX mountain scenic spot, the XX cathedral, the magic castle and other scenic spots. The media service provider has installed ultra-high-definition shooting and acquisition devices in these locations and provides the studios with high-definition image acquisition services on monthly and yearly subscription plans.
Wedding photo studio A is a monthly subscriber of the media service provider and plans to organize newly married couples to go location shooting in the city park on July 23, 2019. The studio photographs its in-store wedding dresses and formal dresses from multiple angles and uploads the photos to the media service provider's business support system, which automatically generates clothing identification information and stores it in the clothing identification library. After a studio worker logs in to the account on the service provider's website using a computer terminal, the worker selects the reserved shooting date, selects the reserved shooting address, and selects the wedding dresses and formal dresses to be used on the reserved date. If, for example, wedding dress No. 4 has already been reserved by another studio for the same location on that day, the system detects that the identification feature library already contains the identification features of that clothing information and prompts through the web interaction interface that the reservation request cannot be completed.
On July 23, 2019, 10 couples come to the city park for location shooting, wearing wedding dresses and formal dresses already recorded in the system. The studio staff have demonstrated to the newly married couples several poses that reliably activate candid shooting by the ultra-high-definition acquisition devices in the scenic spot; after the acquisition end server detects such a pose, it takes the pose area as the focal point for focus adjustment, and after a fixed delay triggers the acquisition device to perform the candid shooting operation.
After the shooting is finished, wedding photo studio A logs in to its account through the website, extracts all the original captured image files, and performs subsequent classification and further processing.
In the studio's wedding business, the method of recording video in the scenic spot is basically similar to the image shooting flow and is not described again.
Scenario example 4:
Mr. Tang's retired parents travel to several European countries with Mr. Tang's son. Mr. Tang downloads a service provider's APP (application) on his mobile terminal device, registers and logs in to an account, and uploads facial photos of the family members to the service provider's cloud service support system as the main feature information for feature identification in each scenic spot. According to the plan of visiting the capital museum of country A on Monday, Mr. Tang logs in to the media service provider's APP with the mobile terminal, enters the interaction interface of the capital museum of country A, selects the facial photos of his parents and son, and confirms the pre-registration of the three to receive service in the museum on Monday; the recognition characterizer completes facial feature recognition for the three, generates the identification features and stores them in the museum's identification feature database, completing the pre-registration operation. On Monday night, Mr. Tang uses the mobile terminal to look up the images and videos of his parents and son in the museum, but the query returns nothing. A telephone inquiry reveals that, to satisfy the child's wish to play first, the three spent the whole day at the children's paradise in the capital of country A and did not visit the museum. Mr. Tang then enters the interaction interface of the capital children's park through the mobile terminal APP, selects the facial photos of his parents and son, and confirms the post-registration of the three to receive service in the children's park for that Monday.
After receiving the post-registration instruction, the business support system starts the media data supplement program: the recognition characterizer completes facial feature recognition for the three and generates the identification features; the children's park local server extracts all video data collected on Monday, reads and recognizes the facial information of tourists one by one from the video files, generates the image index data and video index data matching the identification features of the three, generates the corresponding preview images and video covers, and pushes them to Mr. Tang's APP for display. After Mr. Tang selects the required images and videos, he chooses the temporary cloud storage option, and the children's park local server transfers the selected image files and video files to the cloud server for storage.
After the parents and the son finish the trip, Mr. Tang selects the required images and videos from the temporary image storage and temporary video storage, customizes a travel souvenir named "European Grandfather and Grandson", customizes a travel-log photo album named "Europe", and completes payment with the handheld mobile terminal.
According to another aspect of the embodiment of the present invention, a video presentation processing method is also provided. Fig. 10 is a flowchart of the video presentation processing method according to the embodiment of the present invention; as shown in Fig. 10, the video presentation processing method includes the following steps:
step S1002, acquiring a video, wherein the video is acquired from at least one acquisition device distributed in a preset area, and the at least one acquisition device is triggered by a preset condition to shoot.
Optionally, the predetermined area here may be a scenic spot, such as a park or an amusement park. A scenic spot is a place providing sightseeing, study, leisure and entertainment to the public; it may be a paid or a free scenic spot, a private or a public one, a natural landscape or an artificial facility.
Alternatively, the at least one capturing device may be a capturing device disposed in the predetermined area.
It should be noted that the embodiment of the present invention does not specifically limit the setting position of the at least one acquisition device. Taking the predetermined area being a scenic spot as an example, the at least one acquisition device may be arranged at the entrance of the scenic spot.
Step S1004 of identifying a person from the video and identifying identification information for identifying the person from the person, wherein the identification information is unique within a predetermined area.
Optionally, the person may be identified from the video by performing image identification on a video frame acquired by at least one acquisition device by using an image identification technology to obtain the person in the video.
Optionally, in order to distinguish the persons in the predetermined area, the identification information of the identified persons is unique in the predetermined area.
Step S1006, correspondingly storing the video and the identification information identified from the video.
Optionally, the video and the identification information identified from the video are stored correspondingly, so that the video can be conveniently displayed to the person corresponding to the identification information subsequently.
Step S1008, the identification information of the user is acquired.
Step S1010, searching the corresponding video according to the identification information of the user.
Step S1012, the found video is displayed to the user.
As can be seen from the above, in the embodiment of the present invention, a video may be acquired, a person identified from the video, and identification information for identifying the person identified from the person, where the identification information is unique within a predetermined area; the video is stored in correspondence with the identification information identified from it; and after the identification information of the user is acquired, the corresponding videos are found according to it and displayed to the user, achieving the purpose of automatically displaying to the user the videos of the user within the predetermined area.
It is easy to note that, since a person is identified from the video after the video is acquired, and the identification information of the person is recognized as the information associating the person with the video, the video can be stored in correspondence with the identification information identified from it so that the video can subsequently be found from that identification information. The identification information of the user is then acquired, the videos corresponding to it are found, and the videos are displayed to the user, which achieves the purpose of automatically displaying to the user the videos of the user within the predetermined area and at the same time achieves the technical effect of improving the efficiency of doing so.
Therefore, the video presentation processing method provided by the embodiment of the invention solves the technical problem in the related art that the manner of photographing a user within a predetermined area and displaying the obtained videos to the user is inefficient.
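Steps S1002 to S1012 can be sketched as a simple store-and-lookup structure; the class and method names below are illustrative assumptions, and the person/feature recognizer is stubbed out as a callable:

```python
# Hypothetical sketch of steps S1002-S1012: store each video under the
# identification information recognized in it, then look videos up by a
# user's identification information. Names are illustrative, not from
# the specification.
from collections import defaultdict

class VideoPresentationService:
    def __init__(self):
        # identification information -> list of stored videos
        self._index = defaultdict(list)

    def ingest(self, video, recognize_ids):
        """S1002-S1006: save the video in correspondence with every
        identification feature recognized in it (recognize_ids stands
        in for the person/feature recognizer)."""
        for ident in recognize_ids(video):
            self._index[ident].append(video)

    def videos_for_user(self, user_ident):
        """S1008-S1012: find the videos matching the user's
        identification information, ready for display."""
        return list(self._index.get(user_ident, []))
```

For example, after `service.ingest("clip1.mp4", recognizer)` has stored a clip under the features recognized in it, `service.videos_for_user("B073")` returns every clip in which that identification feature appeared.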
It should be noted that, in the embodiment of the present invention, the manners of identifying the identification information for identifying the person from the person, storing the video in correspondence with the identification information identified from it, obtaining the identification information of the user, searching for the corresponding videos according to the identification information of the user, and displaying the found videos to the user may be implemented in the foregoing manner, and details are not repeated here.
According to the multiple scene embodiments, the photo display processing method and the video display processing method provided by the embodiments of the invention achieve the purpose of automatically displaying the photo or the video of the user in the predetermined area to the user, achieve the technical effect of improving the efficiency of displaying the photo or the video of the user in the predetermined area to the user, and greatly improve the user experience.
According to another aspect of the embodiment of the present invention, there is also provided a photograph showing and processing apparatus, and fig. 11 is a schematic view of the photograph showing and processing apparatus according to the embodiment of the present invention, as shown in fig. 11, the photograph showing and processing apparatus includes: the device comprises a first acquisition unit 1101, a first identification unit 1102, a first storage unit 1103, a second acquisition unit 1104, a first search unit 1105 and a first presentation unit 1106. The photograph display processing apparatus will be described in detail below.
The device comprises a first acquisition unit 1101 configured to acquire a photo, wherein the photo is acquired from at least one acquisition device distributed in a predetermined area, and the at least one acquisition device is triggered to be shot by a predetermined condition.
A first recognition unit 1102 for recognizing a person from the photograph and identifying identification information for identifying the person from the person, wherein the identification information is unique within a predetermined area.
The first saving unit 1103 is configured to save the photo and the identification information recognized from the photo in association with each other.
A second obtaining unit 1104, configured to obtain identification information of the user.
The first searching unit 1105 is configured to search for a corresponding photo according to the identification information of the user.
A first display unit 1106, configured to display the found photo to the user.
It should be noted here that the first acquiring unit 1101, the first identifying unit 1102, the first saving unit 1103, the second acquiring unit 1104, the first searching unit 1105 and the first presenting unit 1106 correspond to steps S102 to S112 in Embodiment 1; the examples and application scenarios realized by these modules are the same as those of the corresponding steps, but are not limited to the content disclosed in Embodiment 1. It should be noted that the above modules, as part of the apparatus, may be implemented in a computer system such as a set of computer-executable instructions.
As can be seen from the above, in the above embodiments of the present application, the first obtaining unit may be used to obtain a photo, where the photo is obtained from at least one acquisition device distributed in a predetermined area, and the at least one acquisition device is triggered to shoot by a predetermined condition; the first identification unit then identifies a person from the photo and identifies, from the person, identification information for identifying the person, where the identification information is unique within the predetermined area; the first saving unit stores the photo in correspondence with the identification information recognized from it; the second obtaining unit acquires the identification information of the user; the first searching unit searches for the corresponding photos according to the identification information of the user; and finally the first display unit displays the found photos to the user. The photo display processing apparatus provided by the embodiment of the invention achieves the purpose of automatically displaying to the user the photos of the user within the predetermined area, achieves the technical effect of improving the efficiency of displaying such photos to the user, and thus solves the technical problem in the related art that the manner of photographing a user within a predetermined area and displaying the obtained photos to the user is inefficient.
In an alternative embodiment, the first identification unit comprises: the first identification module is used for identifying attachments on the person and/or biological characteristics of the person from the person; a first determination module for using the characteristic information of the attached matter and/or the characteristic information of the biometrics characteristic as identification information for identifying the person; the second acquisition unit includes: the second acquisition module is used for acquiring attachments of the user and/or biological characteristics of the user and taking characteristic information corresponding to the biological characteristics as identification information of the user; wherein the attachment comprises at least one of: apparel, accessories, hand held articles; the attachment is used for uniquely identifying the person in the preset area; the biometric characteristic of the person includes one of: facial features, posture features.
In an alternative embodiment, the first searching unit comprises: a searching module, configured to search, after the identification information of the user is obtained, for the characteristic information of one or more persons corresponding to that identification information; and a second determination module, configured to find the photos of the one or more persons according to their characteristic information and use them as the photos corresponding to the identification information of the user.
In an alternative embodiment, the first storage unit comprises: an adjusting module, configured to adjust the white balance of the photo according to a white area in the case that the attachments on the person include such a white area; and a storing module, configured to correspondingly store the adjusted photo and the identification information recognized from the photo.
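A channel-gain sketch of the white-balance adjustment, anchored on a region known to be white such as a white attachment on the person (pure Python over (r, g, b) tuples; the 255 white reference and the pixel representation are assumptions):

```python
def white_balance(pixels, white_region):
    """Scale each color channel so that a region known to be white averages
    to pure white (255, 255, 255), then apply the same per-channel gains to
    every pixel of the photo."""
    n = len(white_region)
    avg = [sum(p[c] for p in white_region) / n for c in range(3)]
    gains = [255.0 / max(a, 1e-6) for a in avg]
    return [tuple(min(255, round(p[c] * gains[c])) for c in range(3))
            for p in pixels]

out = white_balance([(255, 204, 102), (100, 100, 100)],
                    white_region=[(255, 204, 102)])
print(out)  # → [(255, 255, 255), (100, 125, 250)]
```

The warm cast of the known-white region (too little green and blue) is removed from the whole image by the same gains, which is the gray-card idea the adjusting module describes.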
In an alternative embodiment, the first obtaining unit comprises: a third determination module, configured to extract a predetermined frame from a video as the photo in the case that the at least one capture device shoots a video when triggered by the trigger condition; and/or a first display module, configured to display the found photos to the user together with part or all of the content of the video.
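Extracting the predetermined frame from a triggered video clip might be sketched as follows (the half-second offset and the frame-list representation are illustrative assumptions; the patent only requires that some predetermined frame be chosen):

```python
def photo_from_video(frames, fps, offset_s=0.5):
    """Pick the predetermined frame of a captured clip to serve as the
    photo: here, a fixed offset after the start of the clip, clamped to
    the last frame for short clips."""
    if not frames:
        raise ValueError("empty video")
    idx = min(len(frames) - 1, int(round(offset_s * fps)))
    return frames[idx]

clip = [f"frame_{i}" for i in range(90)]  # 3 s of 30 fps video
print(photo_from_video(clip, fps=30))     # → frame_15
```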
In an alternative embodiment, the predetermined condition is that at least one of the following is detected from a person present in the predetermined area: gesture information, mouth shape information, and body shape information.
In an alternative embodiment, the first display unit comprises: a sorting module, configured to sort part or all of the found photos in the case that their number exceeds a predetermined number; and a second display module, configured to display part or all of the sorted photos to the user.
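The sort-and-truncate behavior of the first display unit could look like this sketch (sorting by capture time, newest first, is an assumed criterion; the patent leaves the ordering open):

```python
def select_for_display(photos, limit=3):
    """If the number of found photos exceeds the predetermined number
    `limit`, sort them and keep only the top `limit` for display."""
    if len(photos) <= limit:
        return list(photos)
    ranked = sorted(photos, key=lambda p: p["captured_at"], reverse=True)
    return ranked[:limit]

found = [{"file": f"p{i}.jpg", "captured_at": i} for i in range(5)]
print([p["file"] for p in select_for_display(found)])  # → ['p4.jpg', 'p3.jpg', 'p2.jpg']
```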
According to another aspect of the embodiments of the present invention, there is also provided a video presentation processing apparatus. Fig. 12 is a schematic diagram of a video presentation processing apparatus according to an embodiment of the present invention. As shown in fig. 12, the video presentation processing apparatus includes: a third obtaining unit 1201, a second identifying unit 1202, a second saving unit 1203, a fourth obtaining unit 1204, a second searching unit 1205 and a second presenting unit 1206. The video presentation processing apparatus is described in detail below.
A third obtaining unit 1201, configured to obtain a video, where the video is obtained from at least one capturing device distributed in a predetermined area, and the at least one capturing device is triggered by a predetermined condition to capture the video.
A second identifying unit 1202 for identifying a person from the video and identifying identification information for identifying the person from the person, wherein the identification information is unique within the predetermined area.
A second saving unit 1203 is configured to correspondingly save the video and the identification information identified from the video.
A fourth obtaining unit 1204, configured to obtain identification information of the user.
A second searching unit 1205 is configured to search for a corresponding video according to the identification information of the user.
And the second display unit 1206 is used for displaying the searched video to the user.
It should be noted here that the third obtaining unit 1201, the second identifying unit 1202, the second saving unit 1203, the fourth obtaining unit 1204, the second searching unit 1205 and the second presenting unit 1206 correspond to steps S1002 to S1012 in embodiment 1; the examples and application scenarios implemented by these units are the same as those of the corresponding steps, but are not limited to the disclosure of embodiment 1. It should also be noted that these units, as part of an apparatus, may be implemented in a computer system such as a set of computer-executable instructions.
As can be seen from the above, in the above embodiments of the present application, the third obtaining unit may be used to obtain a video, where the video is obtained from at least one capture device distributed in a predetermined area, and the at least one capture device is triggered by a predetermined condition to shoot; the second identification unit then identifies a person from the video and identifies, from the person, identification information for identifying the person, where the identification information is unique within the predetermined area; the second saving unit correspondingly saves the video and the identification information identified from the video; the fourth obtaining unit obtains the identification information of the user; the second searching unit searches for the corresponding video according to the identification information of the user; and finally the second display unit displays the found video to the user. The video display processing apparatus provided by this embodiment of the invention achieves the purpose of automatically displaying to the user the videos taken of the user in the predetermined area, achieves the technical effect of improving the efficiency of displaying those videos to the user, and thereby solves the technical problem in the related art that filming a user in a predetermined area and displaying the obtained videos to the user is inefficient.
According to another aspect of the embodiments of the present invention, there is also provided a storage medium comprising a stored program, wherein the program, when run, performs any one of the photo presentation processing methods or video presentation processing methods described above.
According to another aspect of the embodiments of the present invention, there is also provided a processor configured to run a program, wherein the program, when run, performs any one of the photo presentation processing methods or video presentation processing methods described above.
An embodiment of the present application may also provide an electronic device, which may be any electronic device terminal in a group of such terminals. Optionally, in this embodiment, the electronic device terminal may also be replaced by a computer or another terminal device.
Optionally, in this embodiment, the electronic device may be located in at least one of a plurality of network devices of a network.
In this embodiment, the electronic device may execute the program code of the following steps in the photograph showing and processing method: acquiring a photo, wherein the photo is acquired from at least one acquisition device distributed in a predetermined area, and the at least one acquisition device is triggered by a predetermined condition to shoot; identifying a person from the photograph and identifying information from the person identifying the person, wherein the identifying information is unique within the predetermined area; correspondingly storing the photo and the identification information recognized from the photo; acquiring identification information of a user; searching a corresponding photo according to the identification information of the user; and displaying the searched photos to the user.
Alternatively, fig. 13 is a block diagram of an electronic device according to an embodiment of the present application. As shown in fig. 13, the electronic device 1301 may include: one or more processors 1302 (only one of which is shown), a memory 1303, and a transmission device 1304.
The memory may be configured to store software programs and modules, such as program instructions/modules corresponding to the photo presentation processing method and apparatus in the embodiment of the present application, and the processor executes various functional applications and data processing by running the software programs and modules stored in the memory, so as to implement the photo presentation processing method. The memory may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory located remotely from the processor, which may be connected to the electronic device 1301 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor can call the information and application program stored in the memory through the transmission device to execute the following steps: acquiring a photo, wherein the photo is acquired from at least one acquisition device distributed in a predetermined area, and the at least one acquisition device is triggered by a predetermined condition to shoot; identifying a person from the photograph and identifying information from the person identifying the person, wherein the identifying information is unique within the predetermined area; correspondingly storing the photo and the identification information recognized from the photo; acquiring identification information of a user; searching a corresponding photo according to the identification information of the user; displaying the searched photo to the user; and/or a memory coupled to the processor for providing instructions to the processor for the following processing steps: acquiring a video, wherein the video is acquired from at least one acquisition device distributed in a predetermined area, and the at least one acquisition device is triggered by a predetermined condition to shoot; identifying a person from the video and identifying identification information for identifying the person from the person, wherein the identification information is unique within a predetermined area; correspondingly storing the video and the identification information identified from the video; acquiring identification information of a user; searching a corresponding video according to the identification information of the user; and displaying the searched video to the user.
It is easy to note that, after a photo or video is acquired, a person is identified from it and the identification information of that person is recognized as the information associating the person with the photo or video; the photo or video can therefore be saved in correspondence with the identification information recognized from it, so that it can later be found based on that identification information. The identification information of the user is then acquired, the corresponding photo or video is found based on it, and the result is displayed to the user. This achieves the purpose of automatically displaying to the user the photos or videos taken of the user in the predetermined area, and at the same time achieves the technical effect of improving the efficiency with which those photos or videos are displayed to the user.
Therefore, the photo and video display processing methods provided by the embodiments of the present invention solve the technical problem in the related art that photographing or filming a user in a predetermined area and displaying the obtained photos or videos to the user is inefficient.
It can be understood by those skilled in the art that the structure shown in fig. 13 is only an illustration, and the electronic device may also be a terminal device such as a smartphone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), or a PAD; fig. 13 does not limit the structure of the electronic device. For example, the electronic device 1301 may include more or fewer components than shown in fig. 13 (e.g., a network interface or a display device), or have a different configuration from that shown in fig. 13, and may further include a display, a user interface, and various network interfaces such as an IEEE 802.11 interface, an IEEE 802.16 interface, or a 3GPP interface.
those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like, and couplers.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also be construed as the protection scope of the present invention.

Claims (11)

1. A photo display processing method is characterized by comprising the following steps:
acquiring a photo, wherein the photo is acquired from at least one acquisition device distributed in a predetermined area, and the at least one acquisition device is triggered to shoot by a predetermined condition;
identifying a person from the photograph and identifying information from the person identifying the person, wherein the identifying information is unique within the predetermined area;
correspondingly storing the photo and the identification information recognized from the photo;
acquiring identification information of a user;
searching a corresponding photo according to the identification information of the user;
and displaying the searched photo to the user.
2. The method of claim 1,
identifying, from the person, the identification information for identifying the person comprises: identifying attachments on the person and/or biometrics of the person; and using the characteristic information of the attachments and/or the characteristic information of the biometrics as the identification information for identifying the person;
acquiring the identification information of the user comprises: acquiring attachments of the user and/or biological characteristics of the user, and taking characteristic information corresponding to the biological characteristics as identification information of the user;
wherein the attachment comprises at least one of: apparel, accessories, hand held articles; the attachment is used for uniquely identifying the person in the predetermined area; the biometric characteristic of the person comprises one of: facial features, body posture features.
3. The method of claim 1, wherein after obtaining the identification information of the user, finding the corresponding photo according to the identification information of the user comprises:
searching the characteristic information of one or more persons corresponding to the identification information according to the identification information of the user;
and searching the photos of the one or more people according to the characteristic information of the one or more people to be used as the photos corresponding to the identification information of the user.
4. The method of claim 2, wherein, in the case that the attachment on the person includes a white area, correspondingly saving the photograph and the identification information recognized from the photograph comprises:
adjusting the white balance of the photo according to the white area;
and correspondingly storing the adjusted photo and the identification information recognized from the photo.
5. The method of claim 1, wherein, in the event that the at least one capture device captures a video triggered by a trigger condition,
acquiring the photo comprises: extracting a predetermined frame from the video as the photo; and/or,
displaying the searched photo to the user comprises: displaying the searched photo to the user together with part or all of the content of the video.
6. The method according to claim 1, wherein the predetermined condition is that at least one of the following is detected from the person in the predetermined area: gesture information, mouth shape information, body shape information.
7. The method according to any one of claims 1 to 6, wherein presenting the found photo to the user comprises:
under the condition that the number of the searched photos exceeds a preset number, sequencing part or all of the photos;
and displaying part or all of the sequenced photos to the user.
8. A method for processing a video presentation, comprising:
acquiring a video, wherein the video is acquired from at least one acquisition device distributed in a predetermined area, and the at least one acquisition device is triggered to shoot by a predetermined condition;
identifying a person from the video and identifying identification information for identifying the person from the person, wherein the identification information is unique within the predetermined area;
correspondingly storing the video and the identification information identified from the video;
acquiring identification information of a user;
searching a corresponding video according to the identification information of the user;
and displaying the searched video to the user.
9. A photograph presentation processing apparatus, comprising:
the device comprises a first acquisition unit, a second acquisition unit and a third acquisition unit, wherein the first acquisition unit is used for acquiring photos, the photos are acquired from at least one acquisition device distributed in a preset area, and the at least one acquisition device is triggered to shoot by preset conditions;
a first recognition unit configured to recognize a person from the photograph and to recognize identification information for identifying the person from the person, wherein the identification information is unique within the predetermined area;
the first storage unit is used for correspondingly storing the photo and the identification information recognized from the photo;
a second obtaining unit, configured to obtain identification information of a user;
the first searching unit is used for searching the corresponding photo according to the identification information of the user;
and the first display unit is used for displaying the searched photos to the user.
10. A video presentation processing apparatus, comprising:
the third acquisition unit is used for acquiring a video, wherein the video is acquired from at least one acquisition device distributed in a predetermined area, and the at least one acquisition device is triggered by a predetermined condition to shoot;
a second identification unit configured to identify a person from the video and identify identification information for identifying the person from the person, wherein the identification information is unique within the predetermined area;
the second storage unit is used for correspondingly storing the video and the identification information identified from the video;
a fourth obtaining unit, configured to obtain identification information of a user;
the second searching unit is used for searching the corresponding video according to the identification information of the user;
and the second display unit is used for displaying the searched video to the user.
11. An electronic device, comprising:
a processor;
a memory coupled to the processor for providing instructions to the processor for the following processing steps:
acquiring a photo, wherein the photo is acquired from at least one acquisition device distributed in a predetermined area, and the at least one acquisition device is triggered to shoot by a predetermined condition;
identifying a person from the photograph and identifying information from the person identifying the person, wherein the identifying information is unique within the predetermined area;
correspondingly storing the photo and the identification information recognized from the photo;
acquiring identification information of a user;
searching a corresponding photo according to the identification information of the user;
displaying the searched photo to the user; and/or,
the memory is connected with the processor and is also used for providing the processor with instructions of the following processing steps:
acquiring a video, wherein the video is acquired from at least one acquisition device distributed in a predetermined area, and the at least one acquisition device is triggered to shoot by a predetermined condition;
identifying a person from the video and identifying identification information for identifying the person from the person, wherein the identification information is unique within the predetermined area;
correspondingly storing the video and the identification information identified from the video;
acquiring identification information of a user;
searching a corresponding video according to the identification information of the user;
and displaying the searched video to the user.
CN201911045830.5A 2019-10-30 2019-10-30 Photo display processing method and device and video display processing method and device Pending CN112749290A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911045830.5A CN112749290A (en) 2019-10-30 2019-10-30 Photo display processing method and device and video display processing method and device
PCT/CN2020/122485 WO2021083004A1 (en) 2019-10-30 2020-10-21 Photo display processing method and device, and video display processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911045830.5A CN112749290A (en) 2019-10-30 2019-10-30 Photo display processing method and device and video display processing method and device

Publications (1)

Publication Number Publication Date
CN112749290A true CN112749290A (en) 2021-05-04

Family

ID=75641760

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911045830.5A Pending CN112749290A (en) 2019-10-30 2019-10-30 Photo display processing method and device and video display processing method and device

Country Status (2)

Country Link
CN (1) CN112749290A (en)
WO (1) WO2021083004A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113837114A (en) * 2021-09-27 2021-12-24 浙江力石科技股份有限公司 Method and system for acquiring face video clips in scenic spot
CN115103206A (en) * 2022-06-16 2022-09-23 北京字跳网络技术有限公司 Video data processing method, device, equipment, system and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI804421B (en) * 2022-08-23 2023-06-01 李玟鴻 Wedding photography service system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106559654A (en) * 2016-11-18 2017-04-05 广州炫智电子科技有限公司 A kind of recognition of face monitoring collection system and its control method
CN106708994A (en) * 2016-12-16 2017-05-24 维沃移动通信有限公司 Picture selection method and mobile terminal
CN107615298A (en) * 2015-05-25 2018-01-19 彻可麦迪克私人投资有限公司 Face identification method and system
CN108388672A (en) * 2018-03-22 2018-08-10 西安艾润物联网技术服务有限责任公司 Lookup method, device and the computer readable storage medium of video
CN109087157A (en) * 2018-06-08 2018-12-25 成都第二记忆科技有限公司 A kind of video-photographic works sale service system and method and business model
CN109873951A (en) * 2018-06-20 2019-06-11 成都市喜爱科技有限公司 A kind of video capture and method, apparatus, equipment and the medium of broadcasting
CN110033345A (en) * 2019-03-13 2019-07-19 庄庆维 A kind of tourist's video service method and system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108419027B (en) * 2018-02-28 2021-04-16 深圳春沐源控股有限公司 Intelligent photographing method and server
CN108777764A (en) * 2018-06-27 2018-11-09 合肥草木皆兵环境科技有限公司 A kind of landscape intelligent take pictures uploading system and its method
CN109948423B (en) * 2019-01-18 2020-09-11 特斯联(北京)科技有限公司 Unmanned aerial vehicle travel accompanying service method applying face and posture recognition and unmanned aerial vehicle

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107615298A (en) * 2015-05-25 2018-01-19 彻可麦迪克私人投资有限公司 Face identification method and system
CN106559654A (en) * 2016-11-18 2017-04-05 广州炫智电子科技有限公司 A kind of recognition of face monitoring collection system and its control method
CN106708994A (en) * 2016-12-16 2017-05-24 维沃移动通信有限公司 Picture selection method and mobile terminal
CN108388672A (en) * 2018-03-22 2018-08-10 西安艾润物联网技术服务有限责任公司 Lookup method, device and the computer readable storage medium of video
CN109087157A (en) * 2018-06-08 2018-12-25 成都第二记忆科技有限公司 A kind of video-photographic works sale service system and method and business model
CN109873951A (en) * 2018-06-20 2019-06-11 成都市喜爱科技有限公司 A kind of video capture and method, apparatus, equipment and the medium of broadcasting
CN110033345A (en) * 2019-03-13 2019-07-19 庄庆维 A kind of tourist's video service method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
黑瞳 (Hei Tong): "Canon Photography Handbook" (《佳能摄影宝典》), 31 December 2013 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113837114A (en) * 2021-09-27 2021-12-24 浙江力石科技股份有限公司 Method and system for acquiring face video clips in scenic spot
CN115103206A (en) * 2022-06-16 2022-09-23 北京字跳网络技术有限公司 Video data processing method, device, equipment, system and storage medium
WO2023241377A1 (en) * 2022-06-16 2023-12-21 北京字跳网络技术有限公司 Video data processing method and device, equipment, system, and storage medium
CN115103206B (en) * 2022-06-16 2024-02-13 北京字跳网络技术有限公司 Video data processing method, device, equipment, system and storage medium

Also Published As

Publication number Publication date
WO2021083004A1 (en) 2021-05-06

Similar Documents

Publication Publication Date Title
CN111177451B (en) Tourist attraction photo album automatic generation system and method based on face recognition
WO2021083004A1 (en) Photo display processing method and device, and video display processing method and device
CN207817749U (en) A kind of system for making video
CN108419027B (en) Intelligent photographing method and server
JP4347882B2 (en) Distributing specific electronic images to users
CN104168378B (en) A kind of picture group technology and device based on recognition of face
US7035440B2 (en) Image collecting system and method thereof
US8462224B2 (en) Image retrieval
JP4588642B2 (en) Album creating apparatus, album creating method, and program
US20030118216A1 (en) Obtaining person-specific images in a public venue
DE202014011528U1 (en) System for timing and photographing an event
JP5005107B1 (en) Data storage system
CN101681428A (en) Composite person model from image collection
JP2006293986A (en) Album generating apparatus, album generation method and program
CN106649465A (en) Method and device for recommending and acquiring makeup information
JP4423929B2 (en) Image output device, image output method, image output processing program, image distribution server, and image distribution processing program
CN108898450A (en) The method and apparatus for making image
JP2004005534A (en) Image preserving method, retrieving method and system of registered image, image processing method of registered image and program for executing these methods
CN109191229A (en) Augmented reality ornament recommended method and device
CN109472230B (en) Automatic athlete shooting recommendation system and method based on pedestrian detection and Internet
CN107203646A (en) A kind of intelligent social sharing method and device
JP6369074B2 (en) PHOTOGRAPHIC EDITING DEVICE, SERVER, CONTROL PROGRAM, AND RECORDING MEDIUM
CN110209916A (en) A kind of point of interest image recommendation method and device
KR101683232B1 (en) System for processing image and supplying related information by using total information of image
CN106407421A (en) A dress-up matching evaluation method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20210504