CN113742585A - Content search method, content search device, electronic equipment and computer-readable storage medium

Info

Publication number: CN113742585A (application publication); CN113742585B (granted publication)
Application number: CN202111014802.4A
Authority: CN (China)
Prior art keywords: content, action, candidate, user, video frame
Other languages: Chinese (zh)
Inventor: 宋杰
Current Assignee: Shenzhen TCL New Technology Co Ltd
Original Assignee: Shenzhen TCL New Technology Co Ltd
Priority date / Filing date: 2021-08-31 (CN202111014802.4A)
Publication date: 2021-12-03 (CN113742585A); 2024-07-09 (CN113742585B)
Legal status: Granted, Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/95: Retrieval from the web
    • G06F16/953: Querying, e.g. by the use of web search engines
    • G06F16/9535: Search customisation based on user profiles and personalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the invention discloses a content search method, a content search apparatus, an electronic device and a computer-readable storage medium. In response to a content search instruction, the embodiment displays the collected real-time action picture of the user, identifies the action feature of the user in the real-time action picture, displays candidate content corresponding to the action feature, and then, in response to a selection instruction for the candidate content, takes the candidate content as the target content to be searched. The method and the apparatus can improve the accuracy of content search.

Description

Content search method, content search device, electronic equipment and computer-readable storage medium
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a content search method and apparatus, an electronic device, and a computer-readable storage medium.
Background
In recent years, with the rapid development of Internet technology, more and more content has appeared on the Internet, and users often need to find the content they want among this huge amount of content. Existing content search methods usually perform an exact search by name or by specific conditions (e.g., region, year, and category). When the user does not know the precise search conditions, the user often cannot accurately find the desired content, and the accuracy of content search is therefore greatly reduced.
Disclosure of Invention
The embodiment of the invention provides a content searching method, a content searching device, electronic equipment and a computer readable storage medium, which can improve the accuracy of content searching.
A content search method, comprising:
responding to a content searching instruction, and displaying the collected real-time action picture of the user;
identifying the action characteristics of the user in the real-time action picture, and displaying candidate contents corresponding to the action characteristics;
and in response to a selection instruction for the candidate content, taking the candidate content as target content needing to be searched.
Accordingly, an embodiment of the present invention provides a content search apparatus, including:
the display unit is used for responding to the content searching instruction and displaying the collected real-time action picture of the user;
the identification unit is used for identifying the action characteristics of the user in the real-time action picture and displaying candidate contents corresponding to the action characteristics;
and the selecting unit is used for responding to a selection instruction aiming at the candidate content and taking the candidate content as the target content needing to be searched.
Optionally, in an embodiment, the identification unit may be specifically configured to extract target video data in a current acquisition period from the real-time action picture; identifying at least one video frame containing actions in the target video data to obtain a video frame set; and performing action feature extraction on the video frames in the video frame set to obtain the action features of the user.
Optionally, in some embodiments, the identification unit may be specifically configured to sort the video frames in the video frame set according to the acquisition time, and screen out a target video frame in the video frame set according to a sorting result; extracting action characteristics of the target video frame to obtain initial action characteristics of the user; and correcting the initial action characteristics based on other video frames except the target video frame in the video frame set to obtain the action characteristics of the user.
Optionally, in some embodiments, the identification unit may be specifically configured to perform motion feature extraction on video frames in the video frame set except for the target video frame to obtain a target motion feature; determining a weighting coefficient corresponding to each video frame in the video frame set according to the sequencing result; and respectively weighting the initial action features and the target action features based on the weighting coefficients, and fusing the weighted action features to obtain the action features of the user.
Optionally, in some embodiments, the selection unit may be specifically configured to extract, from the real-time action picture, video data in a preset number of acquisition cycles after the current acquisition cycle; extracting action characteristics of video frames in the video data to obtain candidate action characteristics; and generating a selection instruction for the candidate content based on the candidate action characteristics.
Optionally, in some embodiments, the selecting unit may be specifically configured to calculate a feature similarity between the candidate motion feature and the motion feature of the user; and if the feature similarity exceeds a preset threshold, generating a selection instruction for the candidate content.
Optionally, in some embodiments, the selection unit may be specifically configured to match the candidate motion feature with a command motion feature corresponding to a preset command; and if the candidate action characteristics are matched with the instruction action characteristics corresponding to the preset selection instruction, generating the selection instruction aiming at the candidate content.
Optionally, in some embodiments, the identification unit may be specifically configured to send the action feature to a content server, so that the content server screens candidate content corresponding to the action feature from a preset content set; and receiving the candidate content returned by the content server and displaying the candidate content.
In addition, an embodiment of the present invention further provides an electronic device, which includes a processor and a memory, where the memory stores an application program, and the processor is configured to run the application program in the memory to implement the content search method provided in the embodiment of the present invention.
In addition, an embodiment of the present invention further provides a computer-readable storage medium, where a plurality of instructions are stored, and the instructions are suitable for being loaded by a processor to perform steps in any content search method provided by the embodiment of the present invention.
In the embodiment of the invention, in response to a content search instruction, the collected real-time action picture of the user is displayed; the action feature of the user is identified in the real-time action picture, and candidate content corresponding to the action feature is displayed; then, in response to a selection instruction for the candidate content, the candidate content is taken as the target content to be searched. In this scheme, the collected real-time action picture of the user is displayed, the action feature is identified in the real-time action picture, candidate content is searched according to the action feature, and the user determines the target content to be searched through the selection instruction. The target content can thus be found without precise search conditions, so the accuracy of content search can be improved.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic view of a scene of a content search method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a content search method provided by an embodiment of the present invention;
fig. 3 is a schematic flowchart of a movie search provided by an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a content search apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. The described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments that a person skilled in the art can derive from the embodiments given herein without creative effort fall within the protection scope of the present invention.
The embodiment of the invention provides a content searching method, a content searching device, electronic equipment and a computer-readable storage medium. The content search apparatus may be integrated in an electronic device, and the electronic device may be a server or a terminal.
The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery network (CDN) services, big data, and artificial intelligence platforms. The terminal may be, but is not limited to, a smart phone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart watch, and the like. The terminal and the server may be directly or indirectly connected through wired or wireless communication, which is not limited in this application.
For example, referring to fig. 1, taking as an example that the content search apparatus is integrated in the electronic device, after the electronic device displays the collected real-time action picture of the user in response to the content search instruction, the action feature of the user is identified in the real-time action picture, and candidate content corresponding to the action feature is displayed, and then, in response to a selection instruction for the candidate content, the candidate content is taken as the target content to be searched.
The following are detailed below. It should be noted that the following description of the embodiments is not intended to limit the preferred order of the embodiments.
This embodiment will be described from the perspective of a content search apparatus, which may be integrated in an electronic device; the electronic device may be a server or a terminal, and the terminal may include a tablet computer, a notebook computer, a personal computer (PC), a wearable device, a virtual reality device, or another intelligent device capable of playing content.
As shown in fig. 2, the specific flow of the content search method is as follows:
101. and responding to the content searching instruction, and displaying the collected real-time action picture of the user.
The real-time action picture can be understood as a video picture acquired in real time while the user performs an action. The picture can be acquired and displayed by an action acquisition device in the content search apparatus, and the action acquisition device may take various forms, for example, a built-in or external camera, webcam, or video camera.
102. And identifying the action characteristics of the user in the real-time action picture, and displaying candidate contents corresponding to the action characteristics.
The action feature indicates feature information corresponding to the user's action and is used to search for the corresponding candidate content. The candidate content may be the content the user wants to search for, and it may take various forms, for example, video, text, audio, or image.
103. And in response to a selection instruction for the candidate content, taking the candidate content as the target content needing to be searched.
The selection instruction is used to determine that the candidate content is the target content the user needs to search for. The selection instruction can be triggered in various ways, for example, by how long an action is held, by a change of action, or through a designated selection control.
In this embodiment, after the collected real-time action picture of the user is displayed in response to a content search instruction, the action feature of the user is identified in the real-time action picture and candidate content corresponding to the action feature is displayed; then, in response to a selection instruction for the candidate content, the candidate content is taken as the target content to be searched. In this scheme, the collected real-time action picture of the user is displayed, the action feature is identified in the real-time action picture, candidate content is searched according to the action feature, and the user determines the target content to be searched through the selection instruction, so the target content can be found without precise search conditions and the accuracy of content search can be improved.
The method described in the above examples is further illustrated in detail below by way of example.
As shown in fig. 2, the specific flow of the content search method is as follows:
201. and responding to the content searching instruction, and displaying the collected real-time action picture of the user.
The content search instruction may be an instruction that instructs the smart television to collect the user's real-time action, display the real-time action picture, and perform subsequent operations such as content search. The content search instruction may be triggered by the user through a button on the remote controller, by clicking (with the remote controller or a finger) a designated control on the user interface displayed by the smart television, or by voice input.
The manner of displaying the collected real-time action picture of the user may be various, and specifically, the manner may be as follows:
for example, in response to a content search instruction, a real-time action of a user is acquired, a real-time action picture is obtained, and the real-time action picture is displayed.
The real-time action of the user can be collected in various ways. For example, an action search prompt page containing an acquisition control is displayed, and in response to a trigger operation on the acquisition control, the action acquisition device is called to collect the user's real-time action.
The action search prompt page is used to remind the user that the camera is required in order to use the action search function and to declare the related privacy and security issues; the user can trigger the acquisition control after agreeing. The action search prompt page may take the form of a pop-up window or another type of page.
Before the action acquisition device is called to collect the user's real-time action, an introduction page for the action search function can also be displayed. This page mainly tells the user that the function searches for content by collecting the user's body actions and that the user needs to imitate the action segment to be searched, and gives an example, such as the summoning action of a Naruto character. After the introduction is displayed, the acquisition control can be triggered to call the action acquisition device to collect the user's real-time action.
202. And identifying the action characteristics of the user in the real-time action picture, and displaying candidate contents corresponding to the action characteristics.
The method for identifying the action features of the user may be various, and specifically may be as follows:
for example, target video data in a current acquisition period is extracted from a real-time action picture, at least one video frame containing an action is identified from the target video data to obtain a video frame set, and action feature extraction is performed on the video frames in the video frame set to obtain action features of a user.
For example, the moment at which the action acquisition device is called to start collecting the user's action can be used as the start time, and based on this start time the action picture within the current acquisition period is extracted from the real-time action picture, yielding the target video data of the current acquisition period.
After the target video data is extracted, at least one video frame containing an action can be identified. There are various ways to identify such video frames in the target video data: for example, the video frames in the target video data are extracted, action recognition is performed on each video frame, and the identified video frames containing actions are combined into a video frame set. The action recognition may include limb action recognition, facial action recognition, and the like, and may be carried out in various ways. For example, key point detection may be performed on a video frame to determine whether it contains the user's face or other body parts (for example, a hand or a leg); when at least one key point exists, it can be determined that the video frame contains an action. Alternatively, contour detection may be performed on the video frame to determine whether it contains the user's body contour; if it does, it can be determined that the video frame contains an action.
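As a rough illustration of the key-point route (not the patent's own implementation), the sketch below decodes a clip with OpenCV and keeps the frames of the current acquisition period that contain at least one detected key point; the key point detector itself is passed in by the caller, since the description does not name a specific one:

```python
import cv2

def frames_with_action(video_path, period_start, period_end, detect_keypoints):
    """Collect the video frames captured inside the current acquisition period that
    contain at least one detected human key point, i.e. that contain an action.

    detect_keypoints is whatever detector the device uses (pose, face or hand
    detection); it takes a frame and returns a list of (x, y) key points.
    Times are in seconds from the moment acquisition started."""
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if the stream reports no FPS
    frame_set = []   # (capture_time, frame) pairs, already ordered by acquisition time
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        capture_time = index / fps
        index += 1
        if not (period_start <= capture_time < period_end):
            continue  # outside the current acquisition period
        if detect_keypoints(frame):  # at least one key point: the frame contains an action
            frame_set.append((capture_time, frame))
    capture.release()
    return frame_set
```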
After the video frames containing actions are identified, action feature extraction can be performed on the video frames in the video frame set to obtain the action features of the user. The features can be extracted in various ways. For example, the video frames in the video frame set can be sorted according to acquisition time, a target video frame is screened out of the video frame set according to the sorting result, features of the target video frame are extracted to obtain the initial action feature of the user, and the initial action feature is then corrected based on the other video frames in the video frame set to obtain the action feature of the user.
For example, according to the sorting result, the earliest captured video frame in the video frame set is screened out, and the video frame is used as the target video frame.
For example, action feature extraction may be performed on the video frames in the video frame set other than the target video frame to obtain the target action features; a weighting coefficient corresponding to each video frame in the video frame set is determined according to the sorting result; the initial action feature and the target action features are weighted with their respective coefficients; and the weighted action features are fused to obtain the action feature of the user.
It should be noted that correcting the initial action feature can be understood as adjusting it using the target action features of the other video frames acquired after the target video frame. The adjustment is needed mainly because the user may refine his or her action while watching the displayed real-time action picture, so within one acquisition cycle the later an action feature is acquired, the more accurate it is relative to the initial action feature. The weighting coefficient of each action feature can therefore be determined according to the acquisition time, with later-acquired features given larger coefficients, thereby completing the correction of the initial action feature.
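A minimal sketch of such a correction, assuming the action features are plain vectors and using linearly increasing, normalised weights (the description only requires that later-acquired frames receive larger coefficients):

```python
import numpy as np

def fuse_action_features(features_by_time):
    """features_by_time: list of (capture_time, feature_vector) pairs, one per frame
    in the video frame set. The earliest frame yields the initial action feature and
    the later frames yield the target action features.

    Later frames receive larger weighting coefficients (the user refines the action
    while watching the live picture); the weighted features are fused by summation."""
    ordered = sorted(features_by_time, key=lambda item: item[0])
    features = np.stack([np.asarray(vec, dtype=np.float32) for _, vec in ordered])
    # Linearly increasing weights normalised to sum to 1; this is an assumed scheme,
    # any weighting that grows with acquisition time would satisfy the same constraint.
    weights = np.arange(1, len(ordered) + 1, dtype=np.float32)
    weights /= weights.sum()
    return (weights[:, None] * features).sum(axis=0)
```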
After the action features of the user are obtained, the candidate content corresponding to the action features may be displayed in various ways. For example, the action features may be sent to a content server so that the content server screens out the candidate content corresponding to the action features from a preset content set; the candidate content returned by the content server is then received and displayed.
For example, the similarity between the action feature and the preset action feature corresponding to each content in the preset content set is calculated to obtain the feature similarity, and either the content with the highest feature similarity or every content whose feature similarity exceeds a preset similarity threshold is screened out of the preset content set as candidate content.
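On the server side, this screening could look like the sketch below; cosine similarity and the 0.8 threshold are illustrative assumptions, since the description only speaks of "feature similarity" and a "preset similarity threshold":

```python
import numpy as np

def screen_candidates(action_feature, preset_contents, similarity_threshold=0.8):
    """preset_contents: iterable of (content_id, preset_action_feature) pairs held by
    the content server. Returns candidate contents ordered by similarity."""
    query = np.asarray(action_feature, dtype=np.float32)
    query = query / (np.linalg.norm(query) + 1e-8)
    candidates = []
    for content_id, preset_feature in preset_contents:
        preset = np.asarray(preset_feature, dtype=np.float32)
        preset = preset / (np.linalg.norm(preset) + 1e-8)
        similarity = float(np.dot(query, preset))      # cosine similarity
        if similarity > similarity_threshold:
            candidates.append((content_id, similarity))
    return sorted(candidates, key=lambda item: item[1], reverse=True)
```

Returning only the first entry of the sorted list corresponds to keeping the content with the highest feature similarity, while returning the whole list corresponds to keeping every content above the threshold.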
203. And in response to a selection instruction for the candidate content, taking the candidate content as the target content needing to be searched.
For example, if a selection instruction for the candidate content exists, the candidate content is taken as the target content to be searched in response to that instruction. If no selection instruction for the candidate content exists, the candidate content corresponding to the action feature of the next acquisition cycle is displayed, and this continues until a selection instruction exists, at which point the candidate content corresponding to the selection instruction is taken as the target content to be searched.
For example, video data in a preset number of acquisition cycles after the current acquisition cycle is extracted from the real-time action picture, action feature extraction is performed on the video frames in that video data to obtain candidate action features, and the selection instruction for the candidate content is generated based on the candidate action features.
For example, the feature similarity between the candidate action features and the action feature of the user may be calculated, and if the feature similarity exceeds a preset threshold, a selection instruction for the candidate content is generated. The core of generating a selection instruction in this way is determining whether the user has kept the action unchanged for a certain time; when the user has, it can be determined that the candidate content displayed at that moment is the target content to be searched. Alternatively, the candidate action features may be matched against the instruction action features corresponding to preset instructions, and if the candidate action features match the instruction action feature corresponding to the preset selection instruction, the selection instruction for the candidate content is generated. As a further alternative, a content selection page containing a selection control may be displayed, and the selection instruction for the candidate content is generated in response to a trigger operation on the selection control.
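For the first variant, a sketch of the "action held" check is given below; the vector representation and the 0.9 threshold are assumptions for illustration:

```python
import numpy as np

def holds_action(candidate_features, user_feature, hold_threshold=0.9):
    """candidate_features: action features extracted from the preset number of
    acquisition cycles that follow the current one. Returns True, meaning a
    selection instruction should be generated, when every subsequent feature
    stays similar to the user's action feature, i.e. the action is held."""
    base = np.asarray(user_feature, dtype=np.float32)
    base = base / (np.linalg.norm(base) + 1e-8)
    for feature in candidate_features:
        vec = np.asarray(feature, dtype=np.float32)
        vec = vec / (np.linalg.norm(vec) + 1e-8)
        if float(np.dot(base, vec)) < hold_threshold:
            return False   # the action changed, so keep showing new candidates
    return len(candidate_features) > 0
```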
Optionally, if there are multiple candidate contents, the candidate content corresponding to the selection instruction may be screened out of them to obtain the target content. The screening can be done in various ways; for example, a selection parameter may be identified in the selection instruction, and the target content screened out of the candidate contents based on that parameter.
Optionally, after the target content to be searched is obtained, the target content may be played in response to a play instruction of the user for the target content.
Taking the content as films and the content search apparatus as integrated in a television terminal as an example, the content search process may be as follows. The user opens the television search page and clicks the "AI body action search" function on the search interface, and the television runs "AI body action search". A pop-up window first reminds the user that the function needs to use the camera and declares the privacy and security issues. After the user clicks to agree, the interface displays an introduction to the function, informing the user that "AI body action search" is an AI mode that searches for films through collected body actions and that the user needs to imitate the action segment to be searched, with an example such as the summoning action of a Naruto character. After the user clicks to confirm, the interface displays the picture collected by the camera and shows the AI search results (clips of candidate films) at the side; the user starts to perform, adjusting his or her action according to the picture displayed by the television, and the searched film clips are displayed on the television. The television performs AI processing on the collected pictures, where the AI processing can be understood as extracting action features; the action features extracted in each acquisition cycle are sent to the content server, the content server compares them with the AI data (preset action features) of all films and returns the film data with higher similarity, and the television displays the searched film results on the AI processing interface according to the data returned by the server. If, during the performance, the user finds that the film clips displayed at the side contain the desired film, the user confirms the film through an action or a click, and the video data corresponding to that film is displayed. If the desired film is not among them, the candidate films are discarded and candidate films corresponding to new action features are obtained again, until the target film the user needs is found, as shown in fig. 3.
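To tie the steps of this example together, one possible shape of the television-side loop is sketched below; tv, camera and content_server are hypothetical placeholder interfaces standing in for the display, the action acquisition device and the content server, not APIs defined by the patent:

```python
def film_search_loop(tv, camera, content_server, max_cycles=20):
    """Runs until the user selects a film or max_cycles acquisition cycles pass.
    Every method on tv/camera/content_server is a placeholder for the
    corresponding step described above, not a real API."""
    for _ in range(max_cycles):
        frames = camera.capture_current_period()             # real-time action picture
        action_feature = tv.extract_action_feature(frames)   # "AI processing"
        candidates = content_server.screen(action_feature)   # compare with preset features
        tv.show_candidates(candidates)                        # candidate film clips at the side
        if candidates and tv.selection_instruction_received(action_feature):
            return candidates[0]                              # target film to be played
    return None                                               # no target film found
```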
As can be seen from the above, in this embodiment of the application, after the collected real-time action picture of the user is displayed in response to a content search instruction, the action feature of the user is identified in the real-time action picture and candidate content corresponding to the action feature is displayed; then, in response to a selection instruction for the candidate content, the candidate content is taken as the target content to be searched. In this scheme, the collected real-time action picture of the user is displayed, the action feature is identified in the real-time action picture, candidate content is searched according to the action feature, and the user determines the target content to be searched through the selection instruction, so the target content can be found without precise search conditions and the accuracy of content search can be improved.
In order to better implement the above method, the embodiment of the present invention further provides a content search apparatus, which may be integrated in an electronic device, such as a server or a terminal, and the terminal may include a tablet computer, a notebook computer, and/or a personal computer.
For example, as shown in fig. 4, the content search apparatus may include a display unit 301, an identification unit 302, and a selection unit 303 as follows:
(1) a display unit 301;
the display unit 301 is configured to display the collected real-time action picture of the user in response to the content search instruction.
For example, the display unit 301 may be specifically configured to collect a real-time action of the user in response to the content search instruction, obtain a real-time action picture, and display the real-time action picture.
(2) An identification unit 302;
the identifying unit 302 is configured to identify an action feature of the user in the real-time action picture, and display candidate content corresponding to the action feature.
For example, the identifying unit 302 may be specifically configured to extract target video data in the current acquisition period from the real-time action picture, identify at least one video frame containing an action from the target video data to obtain a video frame set, perform action feature extraction on the video frames in the video frame set to obtain the action feature of the user, and display candidate content corresponding to the action feature.
(3) A selection unit 303;
a selecting unit 303, configured to, in response to a selection instruction for the candidate content, take the candidate content as a target content that needs to be searched.
For example, the selecting unit 303 may be specifically configured to, if a selection instruction for the candidate content exists, take the candidate content as the target content to be searched in response to that instruction; and if no selection instruction for the candidate content exists, display the candidate content corresponding to the action feature of the next acquisition cycle until a selection instruction exists, and take the candidate content corresponding to the selection instruction as the target content to be searched.
In a specific implementation, the above units may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and the specific implementation of the above units may refer to the foregoing method embodiments, which are not described herein again.
As can be seen from the above, in this embodiment, after the display unit 301 displays the collected real-time action picture of the user in response to a content search instruction, the recognition unit 302 identifies the action feature of the user in the real-time action picture and displays candidate content corresponding to the action feature, and the selection unit 303 then takes the candidate content as the target content to be searched in response to a selection instruction for the candidate content. In this scheme, the collected real-time action picture of the user is displayed, the action feature is identified in the real-time action picture, candidate content is searched according to the action feature, and the user determines the target content to be searched through the selection instruction, so the target content can be found without precise search conditions and the accuracy of content search can be improved.
An embodiment of the present invention further provides an electronic device, as shown in fig. 5, which shows a schematic structural diagram of the electronic device according to the embodiment of the present invention, specifically:
the electronic device may include components such as a processor 401 of one or more processing cores, memory 402 of one or more computer-readable storage media, a power supply 403, and an input unit 404. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 5 does not constitute a limitation of the electronic device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the processor 401 is a control center of the electronic device, connects various parts of the whole electronic device by various interfaces and lines, performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the electronic device. Optionally, processor 401 may include one or more processing cores; preferably, the processor 401 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 401.
The memory 402 may be used to store software programs and modules, and the processor 401 executes various functional applications and performs data processing by running the software programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area: the program storage area may store the operating system, application programs required by at least one function (such as a sound playing function and an image playing function), and the like; the data storage area may store data created according to the use of the electronic device, and the like. Further, the memory 402 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 with access to the memory 402.
The electronic device further comprises a power supply 403 for supplying power to the various components, and preferably, the power supply 403 is logically connected to the processor 401 through a power management system, so that functions of managing charging, discharging, and power consumption are realized through the power management system. The power supply 403 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
The electronic device may further include an input unit 404, and the input unit 404 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the electronic device may further include a display unit and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 401 in the electronic device loads the executable file corresponding to the process of one or more application programs into the memory 402 according to the following instructions, and the processor 401 runs the application program stored in the memory 402, thereby implementing various functions as follows:
and responding to a content searching instruction, displaying the collected real-time action picture of the user, identifying the action characteristics of the user in the real-time action picture, displaying candidate content corresponding to the action characteristics, and responding to a selection instruction aiming at the candidate content to take the candidate content as target content needing to be searched.
For example, after responding to a content search instruction, the electronic device collects the real-time action of the user, obtains the real-time action picture, and displays it. Target video data in the current acquisition period is extracted from the real-time action picture, at least one video frame containing an action is identified from the target video data to obtain a video frame set, and action feature extraction is performed on the video frames in the video frame set to obtain the action feature of the user; candidate content corresponding to the action feature is then displayed. If a selection instruction for the candidate content exists, the candidate content is taken as the target content to be searched in response to that instruction. If no selection instruction for the candidate content exists, the candidate content corresponding to the action feature of the next acquisition cycle is displayed until a selection instruction exists, and the candidate content corresponding to the selection instruction is taken as the target content to be searched.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
As can be seen from the above, in the embodiment of the present invention, after the collected real-time action picture of the user is displayed in response to a content search instruction, the action feature of the user is identified in the real-time action picture and candidate content corresponding to the action feature is displayed; then, in response to a selection instruction for the candidate content, the candidate content is taken as the target content to be searched. In this scheme, the collected real-time action picture of the user is displayed, the action feature is identified in the real-time action picture, candidate content is searched according to the action feature, and the user determines the target content to be searched through the selection instruction, so the target content can be found without precise search conditions and the accuracy of content search can be improved.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, the embodiment of the present invention provides a computer-readable storage medium, in which a plurality of instructions are stored, and the instructions can be loaded by a processor to execute the steps in any one of the content search methods provided by the embodiment of the present invention. For example, the instructions may perform the steps of:
and responding to a content searching instruction, displaying the collected real-time action picture of the user, identifying the action characteristics of the user in the real-time action picture, displaying candidate content corresponding to the action characteristics, and responding to a selection instruction aiming at the candidate content to take the candidate content as target content needing to be searched.
For example, in response to a content search instruction, the real-time action of the user is collected to obtain a real-time action picture, and the real-time action picture is displayed. Target video data in the current acquisition period is extracted from the real-time action picture, at least one video frame containing an action is identified from the target video data to obtain a video frame set, and action feature extraction is performed on the video frames in the video frame set to obtain the action feature of the user; candidate content corresponding to the action feature is then displayed. If a selection instruction for the candidate content exists, the candidate content is taken as the target content to be searched in response to that instruction. If no selection instruction for the candidate content exists, the candidate content corresponding to the action feature of the next acquisition cycle is displayed until a selection instruction exists, and the candidate content corresponding to the selection instruction is taken as the target content to be searched.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Wherein the computer-readable storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the computer-readable storage medium can execute the steps in any content searching method provided by the embodiment of the present invention, the beneficial effects that can be achieved by any content searching method provided by the embodiment of the present invention can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
According to an aspect of the application, a computer program product or computer program is provided that comprises computer instructions stored in a computer-readable storage medium. The processor of the electronic device reads the computer instructions from the computer-readable storage medium and executes them, causing the electronic device to perform the method provided in the various optional implementations of the content search aspect or the movie search aspect described above.
The content search method, the content search device, the electronic device, and the computer-readable storage medium according to the embodiments of the present invention are described in detail, and a specific example is applied to illustrate the principles and embodiments of the present invention, and the description of the embodiments is only used to help understanding the method and the core concept of the present invention; meanwhile, for those skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (11)

1. A method of searching for content, comprising:
responding to a content searching instruction, and displaying the collected real-time action picture of the user;
identifying the action characteristics of the user in the real-time action picture, and displaying candidate contents corresponding to the action characteristics;
and in response to a selection instruction for the candidate content, taking the candidate content as target content needing to be searched.
2. The method according to claim 1, wherein the identifying the motion feature of the user in the real-time motion picture comprises:
extracting target video data in the current acquisition period from the real-time action picture;
identifying at least one video frame containing actions in the target video data to obtain a video frame set;
and performing action feature extraction on the video frames in the video frame set to obtain the action features of the user.
3. The content search method according to claim 2, wherein the performing motion feature extraction on the video frames in the video frame set to obtain the motion features of the user comprises:
sequencing the video frames in the video frame set according to the acquisition time, and screening out a target video frame in the video frame set according to a sequencing result;
extracting action characteristics of the target video frame to obtain initial action characteristics of the user;
and correcting the initial action characteristics based on other video frames except the target video frame in the video frame set to obtain the action characteristics of the user.
4. The content searching method according to claim 3, wherein the modifying the initial motion characteristic based on the other video frames in the video frame set except the target video frame to obtain the motion characteristic of the user comprises:
extracting action features of other video frames except the target video frame in the video frame set to obtain target action features;
determining a weighting coefficient corresponding to each video frame in the video frame set according to the sequencing result;
and respectively weighting the initial action features and the target action features based on the weighting coefficients, and fusing the weighted action features to obtain the action features of the user.
5. The content search method according to claim 2, wherein before the taking the candidate content as the target content to be searched in response to the selection instruction for the candidate content, the method further comprises:
extracting video data in a preset number of acquisition periods after the current acquisition period from the real-time action picture;
extracting action characteristics of video frames in the video data to obtain candidate action characteristics;
and generating a selection instruction for the candidate content based on the candidate action characteristics.
6. The content search method according to claim 5, wherein the generating a selection instruction for the candidate content based on the candidate action feature comprises:
calculating feature similarity between the candidate motion features and the motion features of the user;
and if the feature similarity exceeds a preset threshold, generating a selection instruction for the candidate content.
7. The content search method according to claim 5, wherein the generating a selection instruction for the candidate content based on the candidate action feature comprises:
matching the candidate action characteristics with instruction action characteristics corresponding to a preset instruction;
and if the candidate action characteristics are matched with the instruction action characteristics corresponding to the preset selection instruction, generating the selection instruction aiming at the candidate content.
8. The content searching method according to claim 1, wherein the displaying at least one candidate content corresponding to the action feature comprises:
sending the action characteristics to a content server so that the content server can screen candidate contents corresponding to the action characteristics from a preset content set;
and receiving the candidate content returned by the content server and displaying the candidate content.
9. A content search apparatus, comprising:
the display unit is used for responding to the content searching instruction and displaying the collected real-time action picture of the user;
the identification unit is used for identifying the action characteristics of the user in the real-time action picture and displaying candidate contents corresponding to the action characteristics;
and the selecting unit is used for responding to a selection instruction aiming at the candidate content and taking the candidate content as the target content needing to be searched.
10. An electronic device, comprising a processor and a memory, wherein the memory stores an application program, and the processor is configured to run the application program in the memory to perform the steps of the content search method according to any one of claims 1 to 8.
11. A computer-readable storage medium storing instructions adapted to be loaded by a processor to perform the steps of the content search method according to any one of claims 1 to 8.

Priority Applications (1)

Application number: CN202111014802.4A (granted as CN113742585B); Priority date: 2021-08-31; Filing date: 2021-08-31; Title: Content searching method, device, electronic equipment and computer readable storage medium

Publications (2)

CN113742585A: published 2021-12-03
CN113742585B (grant): published 2024-07-09

Family

ID=78734407

Family Applications (1)

CN202111014802.4A (Active; granted as CN113742585B): priority date 2021-08-31, filing date 2021-08-31, Content searching method, device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113742585B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140188894A1 (en) * 2012-12-27 2014-07-03 Google Inc. Touch to search
CN104077094A (en) * 2013-03-25 2014-10-01 三星电子株式会社 Display device and method to display dance video
WO2017114388A1 (en) * 2015-12-30 2017-07-06 腾讯科技(深圳)有限公司 Video search method and device
CN107577722A (en) * 2017-08-18 2018-01-12 北京金山安全软件有限公司 Menu display method and device, electronic equipment and storage medium
CN107645655A (en) * 2016-07-21 2018-01-30 迪士尼企业公司 The system and method for making it perform in video using the performance data associated with people
CN109918989A (en) * 2019-01-08 2019-06-21 平安科技(深圳)有限公司 The recognition methods of personage's behavior type, device, medium and equipment in monitored picture
CN110709835A (en) * 2017-12-12 2020-01-17 谷歌有限责任公司 Providing video previews of search results
US10579507B1 (en) * 2006-08-14 2020-03-03 Akamai Technologies, Inc. Device cloud provisioning for functional testing of mobile applications
CN110955800A (en) * 2018-09-26 2020-04-03 传线网络科技(上海)有限公司 Video retrieval method and device
US10685000B1 (en) * 2019-07-22 2020-06-16 Capital One Services, Llc System and method for preparing a data set for searching
CN111489378A (en) * 2020-06-28 2020-08-04 腾讯科技(深圳)有限公司 Video frame feature extraction method and device, computer equipment and storage medium
CN112399262A (en) * 2020-10-30 2021-02-23 深圳Tcl新技术有限公司 Video searching method, television and storage medium
CN112667852A (en) * 2020-12-29 2021-04-16 北京达佳互联信息技术有限公司 Video-based searching method and device, electronic equipment and storage medium

Also Published As

CN113742585B (en): published 2024-07-09

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant