CN110392310B - Display method of video identification information and related equipment


Info

Publication number: CN110392310B
Authority: CN (China)
Prior art keywords: identification, video, identified object, target, information
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: CN201910691301.6A
Other languages: Chinese (zh)
Other versions: CN110392310A
Inventor: 汪颖枭
Current assignee: Beijing QIYI Century Science and Technology Co Ltd
Original assignee: Beijing QIYI Century Science and Technology Co Ltd
Application filed by Beijing QIYI Century Science and Technology Co Ltd
Priority to CN201910691301.6A
Publication of application CN110392310A and granted patent CN110392310B

Classifications

    • H04N21/431: Generation of visual interfaces for content selection or interaction; content or additional data rendering
    • H04N21/44008: Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N21/47202: End-user interface for requesting content on demand, e.g. video on demand
    • H04N21/47217: End-user interface for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The method for displaying video identification information comprises: acquiring an identification information list corresponding to a video, wherein the identification information list comprises identification information of at least one identified object, and the identification information of each identified object comprises at least one identification time and identification range information of the identified object at each identification time; acquiring a playing time according to the playing progress of the video; and when the playing time is equal to a target identification time, acquiring the identification range information of the target identified object from the identification information list according to the target identification time, and displaying the identification range of the target identified object according to that identification range information. In this way, the identified object and its identification range can be displayed simultaneously, and the identification range can dynamically indicate the position and size of the identified object, so as to meet the identification needs of the user. The application also provides a display device capable of implementing the method.

Description

Display method of video identification information and related equipment
Technical Field
The present application relates to the multimedia field, and in particular, to a method for displaying video identification information and a related device.
Background
With the development of electronic technology, many users prefer to watch videos on electronic devices. The user has a need to identify the person or item, etc. appearing in the video.
The prior art provides methods for identifying people or items in a video. Identification information is continuously generated while the video plays, and satisfying the user's need to view this identification information without interfering with viewing of the video itself has become an urgent problem to be solved.
Disclosure of Invention
In view of the above, the present application provides a method and a device for displaying video identification information, which can simultaneously display an identified object and an identification range of the identified object, and the identification range can present the position and size of the identified object to a user, so as to meet the identification requirement of the user.
A first aspect provides a method for displaying video identification information, including:
acquiring an identification information list corresponding to a video, wherein the identification information list comprises identification information of at least one identified object, the identification information of each identified object comprises at least one identification time and identification range information of the identified object at each identification time, and the identification time represents the time at which each identified object is identified in the video; acquiring a playing time according to the playing progress of the video; and when the playing time is equal to a target identification time, acquiring the identification range information of a target identified object from the identification information list according to the target identification time, and displaying the identification range of the target identified object according to the identification range information of the target identified object. The target identification time is any one of the identification times in the identification information list, and the target identified object is an object identified from the video at the target identification time.
In a possible implementation manner, the identification information list further includes feature information of the identified object; the method further comprises the following steps: and when the playing time is equal to the target identification time, acquiring N pieces of characteristic information of the target identified object from the identification information list according to the target identification time, and displaying the N pieces of characteristic information of the target identified object, wherein N is a positive integer.
In another possible implementation, the target recognized object includes a first recognized object and a second recognized object;
the displaying the N pieces of feature information of the first identified object comprises: a first characteristic information area and a second characteristic information area which are arranged in parallel are arranged at the edge of a video picture, N pieces of characteristic information of the first identified object are displayed in the first characteristic information area, and N pieces of characteristic information of the second identified object are displayed in the second characteristic information area.
In another possible implementation manner, the method further includes: setting a background color of the first characteristic information region and a border line color of an identification range of the first identified object as a first color;
setting a background color of the second characteristic information region and a border line color of an identification range of the second identified object as a second color, the first color being different from the second color.
In another possible implementation manner, the method further includes: determining a first separation line and a second separation line which vertically intersect at a preset separation point; determining a left region and a right region separated by the first separation line in a video picture of the video; determining a first video region and a third video region separated by the second dividing line in the left region, and determining a second video region and a fourth video region separated by the second dividing line in the right region;
the displaying the N pieces of feature information of the target recognized object comprises: determining a target video area corresponding to the positioning coordinates of the identification range; determining a relative position relation corresponding to the target video area according to a preset corresponding relation between the video area and the relative position relation, wherein the relative position relation represents the position of the characteristic information area relative to the identification range; and setting a characteristic information area according to the relative position relation corresponding to the target video area, and displaying N pieces of characteristic information of the target identified object in the characteristic information area.
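As a sketch of the region lookup described above, the positioning coordinate of the identification range can be mapped to one of the four video regions, which then selects where the characteristic information area sits. The region numbering and the region-to-position mapping below are assumptions for illustration; the patent does not fix either.

```python
def video_region(x, y, sep_x, sep_y):
    """Map the positioning coordinate (x, y) of an identification range to one
    of the four video regions formed by the first (vertical) and second
    (horizontal) dividing lines crossing at the preset point (sep_x, sep_y).
    The numbering (1 top-left, 2 top-right, 3 bottom-left, 4 bottom-right)
    is an assumed convention."""
    if x < sep_x:
        return 1 if y < sep_y else 3
    return 2 if y < sep_y else 4

# Assumed preset correspondence between video region and the position of the
# characteristic information area relative to the identification range, chosen
# so that the area stays inside the video picture.
RELATIVE_POSITION = {
    1: "below-right",  # object in top-left -> show info below and to the right
    2: "below-left",
    3: "above-right",
    4: "above-left",
}

region = video_region(120, 80, 640, 360)  # e.g. a 1280x720 video picture
placement = RELATIVE_POSITION[region]
```

The characteristic information area is then laid out at `placement` relative to the identification range before the N pieces of feature information are drawn into it.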
In another possible implementation manner, the method further includes: pausing the playing of the video according to a pause instruction; if the paused video picture comprises a plurality of identified objects, calculating the distance between the positioning coordinate of the identification range corresponding to each identified object and a preset coordinate, determining a first target identified object according to the calculated minimum distance, and displaying M pieces of characteristic information of the first target identified object on the paused video picture, wherein M is a positive integer larger than N.
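The minimum-distance selection on pause could be sketched as follows; the data layout and the choice of preset coordinate (here near the picture centre) are assumptions, not specified by the patent.

```python
import math

def first_target_identified_object(objects, preset_xy):
    """When playback is paused and several identified objects are on screen,
    pick the one whose identification-range positioning coordinate is closest
    to a preset coordinate (which object fields exist, and where the preset
    point sits, are assumptions for illustration)."""
    px, py = preset_xy
    return min(objects, key=lambda o: math.hypot(o["x"] - px, o["y"] - py))

paused_objects = [
    {"name": "person_a", "x": 120, "y": 80},
    {"name": "article_a", "x": 400, "y": 300},
]
closest = first_target_identified_object(paused_objects, (390, 290))
```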
In another possible implementation manner, the method further includes: starting from a preset time, sequentially displaying the characteristic information of all identified objects that were displayed before the preset time.
A second aspect provides a display device comprising:
the processing module is used for acquiring an identification information list corresponding to a video, wherein the identification information list comprises identification information of at least one identified object, the identification information of each identified object comprises at least one identification moment and identification range information of the identified object at each identification moment, and the identification moment represents the moment when each identified object is identified in the video;
the processing module is further used for acquiring playing time according to the playing progress of the video;
and the display module is used for acquiring the identification range information of a target identified object from the identification information list according to a target identification time when the playing time is equal to the target identification time, and displaying the identification range of the target identified object according to the identification range information of the target identified object. The target identification time is any one of the identification times in the identification information list, and the target identified object is an object identified from the video at the target identification time.
In one possible implementation form of the method,
the display module is further configured to, when the playing time is equal to the target identification time under the condition that the identification information list further includes feature information of an identified object, acquire N pieces of feature information of the target identified object from the identification information list according to the target identification time, and display the N pieces of feature information of the target identified object, where N is a positive integer.
In another possible implementation manner, the display module is specifically configured to set a first feature information area and a second feature information area, which are arranged in parallel, at an edge of a video picture, display N pieces of feature information of the first identified object in the first feature information area, and display N pieces of feature information of the second identified object in the second feature information area.
In another possible implementation manner, the display module is further configured to set a background color of the first characteristic information area and a border line color of the identification range of the first identified object as a first color, and set a background color of the second characteristic information area and a border line color of the identification range of the second identified object as a second color, the first color being different from the second color.
In another possible implementation manner, the processing module is further configured to determine a first separation line and a second separation line that perpendicularly intersect at a preset separation point; determining a left region and a right region separated by the first separation line in a video picture of the video; determining a first video region and a third video region separated by the second dividing line in the left region, and determining a second video region and a fourth video region separated by the second dividing line in the right region;
the display module is specifically used for determining a target video area corresponding to the positioning coordinates of the identification range; determining a relative position relation corresponding to the target video area according to a preset corresponding relation between the video area and the relative position relation, wherein the relative position relation represents the position of the characteristic information area relative to the identification range; and setting a characteristic information area according to the relative position relation corresponding to the target video area, and displaying N pieces of characteristic information of the target identified object in the characteristic information area.
In another possible implementation manner, the processing module is further configured to pause playing the video according to a pause instruction;
the display module is further configured to calculate a distance between a positioning coordinate of an identification range corresponding to each identified object and a preset coordinate if the paused video picture includes a plurality of identified objects, determine a first target identified object according to the calculated minimum distance, and display M pieces of feature information of the first target identified object on the paused video picture, where M is a positive integer greater than N.
In another possible implementation manner, the display module is further configured to sequentially display, from a preset time, feature information of all identified objects that have been displayed before the preset time.
A third aspect provides a computer storage medium comprising instructions which, when executed on a computer, perform a method as set forth in the first aspect or various possible implementations of the first aspect.
The method comprises: acquiring an identification information list corresponding to a video, wherein the identification information list comprises at least one identification time and identification range information of an identified object at each identification time; acquiring a playing time according to the playing progress of the video; and when the playing time is equal to a target identification time, acquiring the identification range information of the target identified object from the identification information list according to the target identification time, and displaying the identification range of the target identified object according to that information. Therefore, the identified object and its identification range can be displayed simultaneously, the identification range can indicate the position and size of the identified object, and the user's video-watching experience is hardly affected, so that user requirements are met.
Drawings
Fig. 1 is a schematic flowchart illustrating a method for displaying video identification information according to an embodiment of the present disclosure;
FIG. 2A is a schematic illustration of the present application showing the identification range and the characteristic information area;
FIG. 2B is another schematic illustration of the present application showing the identification range and the characteristic information area;
FIG. 2C is another schematic illustration of the present application showing the identification range and the characteristic information area;
FIG. 2D is another schematic illustration of the present application showing the identification range and the characteristic information area;
FIG. 3A is another schematic illustration of the present application showing the identification range and the characteristic information area;
FIG. 3B is another schematic illustration of the present application showing the identification range and the characteristic information area;
FIG. 3C is another schematic illustration of the present application showing the identification range and the characteristic information area;
FIG. 3D is another schematic diagram of the present application showing the identification range and the characteristic information area;
FIG. 4 is a schematic structural diagram of a display device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a terminal in an embodiment of the present application.
Detailed Description
The application provides a display method and a display device of video identification information, which can simultaneously display an identified object and an identification range of the identified object, wherein the identification range can dynamically prompt the position and the size of the identified object, so that the identification requirement of a user is met.
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, an embodiment of a method for displaying video identification information provided by the present application includes:
step 101, acquiring an identification information list corresponding to a video.
In this embodiment, the video is identified, and the identification information of the identified object can be acquired from the video. The identification information list includes identification information of at least one identified object, and the identification information of each identified object includes at least one identification time and identification range information of the identified object at each identification time. In addition to the identification time and identification range information, the identification information of the identified object may include, but is not limited to: identification information numbers, characteristic information of the identified objects, and the like.
The identified object includes at least one of a living being, an item, or a scene in the video. In particular, the identified object comprises a scene. Alternatively, the identified object comprises a scene and at least one of the following information: a living being or an article. The living being may be a human or an animal.
The identification time is the time at which the identified object is identified in the video.
The identification range information includes, but is not limited to, the positioning coordinates, height, and width of the identification range. The positioning coordinates are the coordinates of any point selected on the identified object, and can serve both as the positioning coordinates of the identified object and as the positioning coordinates of the identification range. For example, if the identified object is a person A, a point is selected on the person's image (such as the hair, head, or shoulder) as the positioning coordinate. Alternatively, if the identified object is an article, a point is selected on the article's image as the positioning coordinate.
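As an illustration of the structure described above, the identification information list might be represented as follows. All field names here are hypothetical, since the patent does not prescribe a concrete data format:

```python
# Hypothetical sketch of the identification information list: one entry per
# identified object, each with an identification information number, feature
# information, and per-time identification range information. The field names
# are illustrative assumptions, not part of the patent.
identification_info_list = [
    {
        "object_id": 1,                 # identification information number
        "features": ["person", "man"],  # feature information of the object
        "records": [
            {
                "time": 60.0,           # identification time, in seconds
                "range": {              # identification range information
                    "x": 120, "y": 80,  # positioning coordinates
                    "width": 200, "height": 360,
                },
            },
        ],
    },
]

first_range = identification_info_list[0]["records"][0]["range"]
```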
The recognition range of the recognized object may include the whole or part of the recognized object. For example, when the recognized object is a human, the recognition range of the recognized object may be a region including the head of the human or a region including the whole body of the human. The shape of the recognition range may be a symmetrical pattern such as a square, a rectangle, a circle, an ellipse, or a diamond, or may be an asymmetrical pattern.
A feature refers to a characteristic that distinguishes an object from other objects. The feature information is the characters or letters describing a feature. Specifically, the feature information may be a classification identifier, a name, a gender, a nationality, an occupation, or a region, or a combination of two or more of these. Beyond the above examples, the feature information of the present application may be set according to other attributes of the identified object, and is not limited herein. The classification identifier of the identified object may refer to a person, an article, or a scene. For example, when the identified object is a man, the feature information of the identified object may be person, man, long hair, the person's name, and so on. When the identified object is a blue can, the feature information may be beverage, blue, can, etc. When the identified object is a beach C, the feature information may be blue, sea, beach, beach C, etc.
And 102, acquiring the playing time according to the playing progress of the video.
And when the video is played, acquiring the playing time according to the playing progress of the video.
And 103, when the playing time is equal to the target identification time, acquiring the identification range information of the target identified object from the identification information list according to the target identification time, and displaying the identification range of the target identified object according to the identification range information of the target identified object. The target identification time is any one of the identification times in the identification information list, and the target identified object is an object identified from the video at the target identification time.
When the playing time is equal to the target identification time, this indicates that the target identified object is being displayed in the video picture at that playing time. The identification range information of the target identified object is acquired from the identification information list according to the target identification time, and the identification range of the target identified object is displayed according to that information.
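A minimal sketch of this lookup step, using an assumed flat record layout of (object, identification time, identification range information) tuples:

```python
def ranges_at_play_time(records, play_time, eps=1e-6):
    """Step 103 as a sketch: return the identification range information of
    every object whose identification time equals the current playing time.
    The tuple layout is an assumption for illustration."""
    return [rng for _obj, t, rng in records if abs(t - play_time) < eps]

records = [
    ("person_a", 60.0, {"x": 120, "y": 80, "w": 200, "h": 360}),
    ("article_a", 60.0, {"x": 400, "y": 300, "w": 90, "h": 120}),
    ("person_b", 75.0, {"x": 50, "y": 40, "w": 180, "h": 300}),
]
matches = ranges_at_play_time(records, 60.0)
```

Each returned range is then drawn over the video picture at the corresponding playing time.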
Optionally, a border line of the identification range of the target identified object is displayed according to the identification range information, and the user may determine the identification range from this border line. The border line may be a solid line or a dotted line. For example, if the picture of a TV drama at the 1st minute shows person A and article A as the target identified objects, the identification range border line of person A and the identification range border line of article A are displayed at the 1st minute.
When the acquired identification range information includes the positioning coordinates of the rectangle (i.e., the vertex coordinates of the rectangle), the height of the rectangle, and the width of the rectangle, the identification range of the rectangle is displayed based on the identification range information. When the acquired identification range information includes the positioning coordinates (i.e., the coordinates of the center of the circle) of the circle and the radius of the circle, the identification range of the circle is displayed according to the identification range information. It is to be understood that other identification range information may be acquired, and the identification range of other shapes may be displayed according to the other identification range information, which is not limited herein.
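The geometry in the paragraph above might be computed as follows; the dictionary keys are hypothetical, chosen only to illustrate the two shape cases:

```python
def range_outline(range_info):
    """Derive display geometry from identification range information.
    A circle is given by its centre (the positioning coordinate) and radius;
    a rectangle by a vertex (the positioning coordinate), width, and height.
    Key names are assumptions for illustration."""
    if "radius" in range_info:
        return ("circle", (range_info["x"], range_info["y"]), range_info["radius"])
    # Rectangle: the vertex plus width/height gives the opposite corner.
    return (
        "rect",
        (range_info["x"], range_info["y"]),
        (range_info["x"] + range_info["w"], range_info["y"] + range_info["h"]),
    )
```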
The position of the identification range can be determined from its vertex coordinates, and its size from its height and width. As the video picture changes, the identification range of the identified object changes with it. Specifically, when the identified object becomes larger, its identification range also becomes larger; when the identified object becomes smaller, its identification range shrinks in synchronization. During this change, the ratio between the size of the identified object and the size of the identification range may remain unchanged.
Alternatively, the identification range and the size of the identified object need not change in synchronization; for example, the identification range information may be updated every 0.4 seconds, after which the identification range is displayed based on the updated information. The update period is set according to the actual situation, such as 0.2, 0.3, or 0.5 seconds, and is not limited herein, thereby providing a flexible implementation. In addition, when the identified object is a scene, the identification range may not be displayed.
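The periodic update could be gated with a simple check like the following; the 0.4-second period comes from the example above, and the function name is illustrative:

```python
def should_refresh_range(last_update_time, play_time, period=0.4):
    """Refresh the displayed identification range only once per update period
    (0.4 s in the example above), rather than on every frame."""
    return play_time - last_update_time >= period
```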
The embodiment can prompt the user of the position and the size of the identified object through the identification range while the user watches the video. For example, indicating the position and size of the identified object for the user in a dark scene or a video picture in which the object is difficult to distinguish enables the user to quickly find the identified object, thereby improving the user viewing experience.
In an optional embodiment, when the playing time is equal to the target recognition time, the N pieces of feature information of the target recognized object are acquired from the recognition information list according to the target recognition time, and the N pieces of feature information of the target recognized object are displayed.
In this embodiment, the identification information list further includes the feature information of the identified object. During video playing, when the playing time is equal to the target identification time, this indicates that the target identified object is displayed in the video picture. The feature information of the target identified object is acquired from the identification information list according to the target identification time and displayed. The number of pieces of feature information of the target identified object is denoted N, where N is a positive integer. For example, if the picture of a TV drama at the 1st minute includes person A and the identification range of person A, the actor name of person A is displayed at the 1st minute. This enables the identified object and its feature information to be displayed simultaneously, so the user obtains more information about the identified object while watching the video.
In addition to the characteristic information, other information related to the identified object, such as an advertisement or an advertisement link, may be displayed when the identified object is displayed. For example, when the characteristic information of the identified object includes the name of a celebrity, a celebrity-related advertisement, such as a celebrity endorsement advertisement, may be displayed. When the identified object is a commodity, a commodity advertisement and a link for purchasing the commodity may be displayed. When the identified object is a scene, a scene-related advertisement or advertisement link may be displayed.
If a plurality of recognized objects appear on the video picture at the same time, the feature information of only some of them may be displayed. For example, when five persons appear in one video picture, only the feature information of the first three persons to appear and the feature information of the scene are displayed, or only the feature information of the first three persons to appear is displayed. These examples are illustrative and do not limit the number of recognized objects or the number of pieces of feature information.
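As an illustrative sketch (not the patent's implementation), the identification information list can be modeled as a mapping from recognition time to recognized objects with their feature information; the structure and all field names here are assumptions:

```python
# Hypothetical model of an identification information list keyed by
# recognition time. The layout and field names are illustrative only.
identification_list = {
    60.0: [  # recognition time in seconds (the 1st minute)
        {"object": "character A",
         "range": (120, 80, 260, 300),   # recognition range (x1, y1, x2, y2)
         "features": ["Character A", "Actor: <name>"]},
    ],
}

def features_at(play_time, id_list):
    """Return the feature information of every object recognized at play_time."""
    entries = id_list.get(play_time, [])
    return [e["features"] for e in entries]

print(features_at(60.0, identification_list))
```

When the playing time matches a key in the list, the lookup returns the feature information to render; any other time yields an empty result.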
In an optional embodiment, the display duration of the feature information of the target identified object is obtained; the feature information of the target identified object is displayed when its display duration is greater than or equal to a preset display duration, and is not displayed when its display duration is less than the preset display duration.
In this embodiment, the difference between the display duration of the feature information and the display duration of the identified object is smaller than a preset threshold, so that the feature information appears and disappears in synchrony with the identified object. When the display duration of the identified object is shorter than the preset duration, the feature information, whose display duration is close to it, disappears soon after being displayed; the user has no time to read it, and the viewing experience suffers. Therefore, the feature information of the target identified object is displayed only when its display duration is greater than or equal to the preset duration, which avoids the problem of feature information flashing by too quickly. The preset display duration may be, but is not limited to, 3 seconds. Alternatively, the display duration of the feature information may be a fixed duration whose value can be set and adjusted by the user.
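The duration check above can be sketched as follows; the 3-second value is the non-limiting example given in the text, and the function name is illustrative:

```python
PRESET_DISPLAY_DURATION = 3.0  # seconds; the non-limiting example in the text

def should_display_features(display_duration, preset=PRESET_DISPLAY_DURATION):
    """Display feature information only when it would stay on screen
    long enough for the user to read it."""
    return display_duration >= preset

print(should_display_features(5.0))  # True
print(should_display_features(1.5))  # False
```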
In an alternative embodiment, displaying the N feature information of the target recognized object includes: a first characteristic information area and a second characteristic information area which are arranged in parallel are arranged at the edge of a video picture, N pieces of characteristic information of a first identified object are displayed in the first characteristic information area, and N pieces of characteristic information of a second identified object are displayed in the second characteristic information area.
In this embodiment, the target recognized object includes a first recognized object and a second recognized object, and the first recognized object and the second recognized object are different. The edge of the video frame refers to a region close to a boundary line of the video frame in the video frame, and specifically may be a bottom, top or side region of the video frame.
When the playing time is equal to the target identification time, the first identified object and the second identified object are displayed simultaneously in the video picture. At this time, a first feature information area and a second feature information area arranged in parallel are set at the edge of the video picture; the feature information of the first identified object is displayed in the first area, and that of the second identified object in the second area. In particular, when a plurality of recognized objects exist in the video picture and their feature information areas are displayed at the bottom of the picture, the user can view all the feature information at the bottom of the video picture.
When three or more identified objects are displayed simultaneously in the video picture, further feature information areas arranged in parallel can be set at the edge of the video picture to display the feature information of the other identified objects, and the feature information areas may correspond one-to-one with the identified objects. The number of feature information areas is not limited in this application. The number of pieces of feature information of each of the plurality of identified objects may be the same or different.
This makes it possible to simultaneously display the feature information of a plurality of recognized objects at the edge of the video screen. Moreover, because the plurality of characteristic information areas are arranged in parallel, the characteristic information of the plurality of recognized objects does not overlap, and therefore, the characteristic information of each recognized object is convenient for a user to view.
Further, in another optional embodiment, the method further includes: setting a background color of the first characteristic information area and a border line color of an identification range of the first identified object as a first color; the background color of the second characteristic information area and the border line color of the recognition range of the second recognized object are set to a second color, and the first color is different from the second color.
In this embodiment, each identified object may be configured with a designated color, and the colors corresponding to the identified objects are different, that is, the background color of the characteristic information area and the boundary color of the identification range of each identified object are set as the designated colors. For example, the background color of the feature information region of the first recognized object and the color of the recognition range boundary line are both set to blue, and the background color of the feature information region of the second recognized object and the color of the recognition range boundary line are both set to pink. It is understood that in the feature information area, other feature information of the identified object may also be displayed as a designated color, for example, a font color in the feature information area of the first identified object is blue, or an edge color of the feature information area is blue. Therefore, different identified objects can be distinguished through colors, and the characteristic information of the identified objects can be quickly observed according to the colors. It will be appreciated that other information may be provided for distinguishing between different identified objects, for example each identified object being provided with an identifier, such as a number, symbol or graphic.
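A minimal sketch of such a per-object color assignment, under the assumption that colors are drawn from a fixed palette (the palette, function name, and object labels are illustrative, not from the original):

```python
# Hypothetical color assignment: each recognized object gets one designated
# color used for both its feature-area background and its bounding-box border.
PALETTE = ["blue", "pink", "green", "orange"]

def assign_colors(object_ids):
    """Map each identified object to a distinct color, cycling the palette."""
    return {obj: PALETTE[i % len(PALETTE)] for i, obj in enumerate(object_ids)}

colors = assign_colors(["first object", "second object"])
print(colors)
```

With the example values above, the first object's feature area and border would both be drawn in blue and the second object's in pink, matching the example in the text.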
In another optional embodiment, the method further comprises: determining a first separation line and a second separation line which are vertically intersected with a preset separation point; determining a left region and a right region separated by the first separation line in a video picture of the video; determining a first video region and a third video region separated by the second separation line in the left region and determining a second video region and a fourth video region separated by the second separation line in the right region;
displaying the N pieces of feature information of the target recognized object includes: determining a target video area corresponding to the positioning coordinates of the identification range; determining a relative position relation corresponding to the target video area according to a preset corresponding relation between the video area and the relative position relation, wherein the relative position relation represents the position of the characteristic information area relative to the identification range; and setting a characteristic information area according to the relative position relation corresponding to the target video area, and displaying N pieces of characteristic information of the target identified object in the characteristic information area.
The feature information area may be a closed figure pointing to the recognized object, in which feature information is displayed. The characteristic information region may be disposed around the recognition range. Specifically, the characteristic information area may be located on the left side, right side, upper side, or lower side of the recognition range, and may also be located on the upper left side, upper right side, lower left side, or lower right side of the recognition range. In the video screen, the feature information area and the recognition range may be displayed at the same time, or the feature information area may be displayed without displaying the recognition range.
Optionally, when the identification range is a rectangle, the feature information area is connected to a vertex of the identification range through a connection line. Specifically, the recognition range may have four vertices, and the feature information area may be connected to any one of the four vertices. The connecting line may be, but is not limited to, a straight line, a broken line, or a curved line.
Optionally, the minimum distance between the feature information area and the identification range is less than or equal to a preset distance. Therefore, the characteristic information area can be displayed along with the identification range and the identified object, and a user can conveniently view the identified object and the characteristic information.
In this embodiment, the separation point may be any point in the middle area of the video frame, such as the center point or a point away from the center of the video frame. The first separation line and the second separation line perpendicularly intersect at the separation point. Optionally, the first separation line is a vertical line, and the second separation line is a horizontal line. It is understood that the first and second separation lines may also be diagonal lines.
For a video picture, a left region and a right region of the video picture can be determined from a first dividing line, a first video region and a third video region can be determined from a second dividing line in the left region, and a second video region and a fourth video region can be determined from the second dividing line in the right region. Alternatively, an upper area and a lower area of the video picture are determined based on the second dividing line, a first video area and a second video area separated by the first dividing line may be determined in the upper area, and a third video area and a fourth video area separated by the first dividing line may be determined in the lower area.
The identification range information includes the positioning coordinates of the identification range, from which the target video area can be determined. Specifically, take the top-left vertex of the video picture as the origin (0, 0), the positioning coordinates as (x1, y1), and the separation point as (x0, y0). If x1 < x0 and y1 < y0, the positioning coordinates are in the first video area; if x1 ≥ x0 and y1 < y0, in the second video area; if x1 < x0 and y1 ≥ y0, in the third video area; if x1 ≥ x0 and y1 ≥ y0, in the fourth video area.
The correspondence between the video area and the relative position relationship can be as shown in table 1:
Video area          Relative positional relationship
First video area    The feature information area is located at the lower right of the recognition range
Second video area   The feature information area is located at the lower left of the recognition range
Third video area    The feature information area is located at the upper right of the recognition range
Fourth video area   The feature information area is located at the upper left of the recognition range

TABLE 1
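A minimal sketch of the quadrant test and the Table 1 mapping, assuming a top-left origin, with the first and third video areas in the left region and the first and second in the upper region as described above (all names are illustrative):

```python
# Table 1: video area -> position of the feature information area
# relative to the recognition range.
PLACEMENT = {
    "first":  "lower right of the recognition range",
    "second": "lower left of the recognition range",
    "third":  "upper right of the recognition range",
    "fourth": "upper left of the recognition range",
}

def video_area(x1, y1, x0, y0):
    """Return which of the four video areas contains the positioning
    coordinate (x1, y1), given the separation point (x0, y0)."""
    if x1 < x0:
        return "first" if y1 < y0 else "third"    # left half: upper / lower
    return "second" if y1 < y0 else "fourth"      # right half: upper / lower

area = video_area(100, 80, 960, 540)  # upper-left quadrant of a 1920x1080 frame
print(area, "->", PLACEMENT[area])
```

Placing the feature area toward the opposite corner of its quadrant keeps it inside the video picture, which is the point of the Table 1 correspondence.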
After the relative position relationship corresponding to the target video area is determined, the characteristic information area is set according to the relative position relationship, so that the characteristic information area can be positioned in the video picture. It is understood that the correspondence between the video area and the relative position relationship can also be set according to table 2:
[Table 2 appears only as an image in the original publication; its contents, an alternative correspondence between the video areas and the relative positional relationships, cannot be recovered here.]

TABLE 2
It is to be understood that the video area and the relative position relationship may be set in other manners, and are not limited herein.
A specific example of the preset correspondence between the positioning coordinates of the recognition range and the video area will be described below. Referring to fig. 2A, a first separation line 25 and a second separation line 26 perpendicular to each other are determined according to the separation points, and a video picture can be divided into four video regions by the first separation line 25 and the second separation line 26: a first video area 21, a second video area 22, a third video area 23, and a fourth video area 24.
As an example, the positioning coordinates 28 are taken as the coordinates of the top-left vertex of the recognition range 27.
Referring to fig. 2A, when the vertex coordinates 28 are in the first video area 21, the feature information area 29 is disposed on the right side or the lower right side of the recognition range 27. In this case, the feature information area 29 may be connected to the upper-right vertex or the lower-right vertex of the recognition range 27.
Referring to fig. 2B, when the vertex coordinates 28 are in the second video area 22, the feature information area 29 is set on the left side of the recognition range 27 or on the lower left side of the recognition range 27.
Referring to fig. 2C, when the vertex coordinates 28 are in the third video area 23, the feature information area 29 is disposed on the right side of the recognition range 27 or on the upper right side of the recognition range 27.
Referring to fig. 2D, when the vertex coordinates 28 are in the fourth video area 24, the feature information area 29 is set on the left side of the recognition range 27 or on the upper left side of the recognition range 27.
It is to be understood that the characteristic information region 29 may be disposed at other positions around the recognition range 27 in order to dispose the characteristic information region within the video screen. In addition, the correspondence relationship between other vertex coordinates and the video region is similar to the correspondence relationship between the vertex coordinates 28 and the video region, and is not described herein again. The video picture can be divided into more regions by more dividing lines, and the feature information region 29 can also be set within the video picture with reference to the above method.
In another example, the positioning coordinates 28 are taken as the center of the recognition range 27, and the feature information area 29 may be set as follows.
Referring to fig. 3A, when the center coordinates 28 are in the first video area 21, the feature information area 29 is disposed on the right side of the recognition range 27 or on the lower right side of the recognition range 27.
Referring to fig. 3B, when the center coordinates 28 are in the second video area 22, the feature information area 29 is disposed on the left side of the recognition range 27 or on the lower left side of the recognition range 27.
Referring to fig. 3C, when the center coordinates 28 are in the third video area 23, the feature information area 29 is disposed on the right side of the recognition range 27 or on the upper right side of the recognition range 27.
Referring to fig. 3D, when the center coordinates 28 are in the fourth video area 24, the feature information area 29 is disposed on the left side of the recognition range 27 or on the upper left side of the recognition range 27.
In another alternative embodiment, displaying the N pieces of feature information of the target recognized object includes: determining a first separation line, a second separation line, a third separation line and a fourth separation line according to the identification range information of the target identified object; dividing the video picture without the identification range of the target identified object into eight video areas according to the first separation line, the second separation line, the third separation line and the fourth separation line, and respectively determining the height and the width of each video area; and selecting one video area from the eight video areas, if the height of the selected video area is greater than the preset height and the width of the selected video area is greater than the preset width, setting a characteristic information area in the selected video area, and displaying N pieces of characteristic information of the target identified object in the characteristic information area.
In this embodiment, the determining the four separation lines according to the identification range information of the target identified object may specifically be: four separation lines through the four vertices are determined by identifying the coordinates of the four vertices of the range. The four separation lines satisfy the following relationship: the first separation line and the second separation line are parallel, and the third separation line and the fourth separation line are parallel. And the first separation line and the third separation line are vertically intersected at the top left corner vertex of the identification range, the second separation line and the third separation line are vertically intersected at the bottom left corner vertex of the identification range, the first separation line and the fourth separation line are vertically intersected at the top right corner vertex of the identification range, and the second separation line and the fourth separation line are vertically intersected at the bottom right corner vertex of the identification range.
The preset height is the height of the characteristic information area, and the preset width is the width of the characteristic information area. If the height of the selected video area is larger than the preset height and the width of the selected video area is larger than the preset width, the fact that the selected video area can contain the characteristic information area is indicated, the characteristic information area is set in the selected video area, and the characteristic information of the target identified object is displayed in the characteristic information area.
If the height of the selected video area is smaller than or equal to the preset height, or its width is smaller than or equal to the preset width, the selected video area cannot contain the feature information area. In that case, video areas continue to be selected from the remaining seven, each candidate's height being compared with the preset height and its width with the preset width, until a video area that can contain the feature information area is found.
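A sketch of the eight-region search, assuming a rectangular recognition range given as (x1, y1, x2, y2) in a frame of known size; the region names and the scan order are illustrative choices, since the text does not fix a selection order:

```python
# Extending the four separation lines through the vertices of a rectangular
# recognition range divides the rest of the picture into eight regions
# (four sides and four corners of a 3x3 grid, minus the center).
def candidate_regions(frame_w, frame_h, box):
    """Yield (name, width, height) for the eight regions around the box."""
    x1, y1, x2, y2 = box
    widths  = {"left": x1, "mid": x2 - x1, "right": frame_w - x2}
    heights = {"top": y1, "mid": y2 - y1, "bottom": frame_h - y2}
    for v in ("top", "mid", "bottom"):
        for h in ("left", "mid", "right"):
            if v == "mid" and h == "mid":
                continue  # that cell is the recognition range itself
            yield f"{v}-{h}", widths[h], heights[v]

def place_feature_area(frame_w, frame_h, box, min_w, min_h):
    """Return the first region strictly larger than the feature area."""
    for name, w, h in candidate_regions(frame_w, frame_h, box):
        if w > min_w and h > min_h:
            return name
    return None  # no region can contain the feature information area

print(place_feature_area(1920, 1080, (800, 400, 1100, 700), 300, 150))
```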
In this way, the feature information of a plurality of recognized objects can be displayed simultaneously in the edge area of the video picture, and the user can move between the feature information items with the direction keys (such as the left or right direction key) on a remote controller. When the user presses the return key on the remote controller, the feature information of all recognized objects is hidden. When the user selects the feature information of a recognized object and presses the OK (confirm) key, the player may jump to a page related to that object, for example a web page about it.
In another optional embodiment, the method further comprises: pausing playing the video according to the pause instruction; if the paused video picture comprises a plurality of recognized objects, calculating the distance between the positioning coordinate of the recognition range corresponding to each recognized object and the preset coordinate, determining a first target recognized object according to the calculated minimum distance, and displaying the M pieces of characteristic information of the first target recognized object on the paused video picture. M is a positive integer greater than N.
In this embodiment, the preset coordinates may be coordinates of the center of the video frame, or may be other coordinates in the video frame, for example, when the video resolution is 1920 × 1080, coordinates (960, 540) are selected as the preset coordinates. Alternatively, when the video resolution is 1280 × 720, the coordinates (640, 360) are selected as the preset coordinates. It can be understood that, according to the situation that the resolution of the video is reduced, the distance from the preset coordinates to each vertex of the video picture can be reduced in proportion; in the case of video resolution enlargement, the distance from the preset coordinates to each vertex of the video picture can be enlarged in proportion, so that the relative position of the preset coordinates in the video picture can be kept unchanged.
It should be noted that, in addition to determining the first target recognized object from the plurality of recognized objects in the above manner, the first target recognized object may be selected in the following manner. For example, one identified object is randomly selected from among a plurality of identified objects as the first target identified object. Alternatively, the leftmost recognized object, the rightmost recognized object, the uppermost recognized object or the lowermost recognized object is selected from the plurality of recognized objects as the first target recognized object.
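The nearest-object selection on pause can be sketched as follows, using the picture center of a 1920x1080 frame as the preset coordinates, as in the example above (the function name and object labels are illustrative):

```python
import math

# On pause, pick the recognized object whose positioning coordinate is
# closest to the preset point (here the center of a 1920x1080 picture).
PRESET = (960, 540)

def first_target(objects, preset=PRESET):
    """objects: mapping of object id -> positioning coordinate (x, y)."""
    return min(objects, key=lambda o: math.dist(objects[o], preset))

target = first_target({"A": (100, 100), "B": (900, 500), "C": (1800, 1000)})
print(target)  # "B" is nearest the center
```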
If the video picture includes a plurality of identified objects, N pieces of feature information of each may be displayed on it. When the video is paused, one of them is determined as the first target recognized object, and M pieces of its feature information are displayed. Since M is greater than N, the user can view more feature information. For example, the N pieces of feature information include a class identifier and a name, while the M pieces include, but are not limited to, a thumbnail of the identified object, the class identifier, and the name. In another example, the identified object is star A; the N pieces of feature information include star A's name and gender, and the M pieces include star A's name, gender, thumbnail, age, and so on. In another example, the identified object is commodity A; the N pieces include its name and price, and the M pieces include its name, price, thumbnail, and seller information.
Note that, before pausing playback according to the pause instruction, when a plurality of recognized objects exist in the video, a prompt message may also be displayed in the video picture. The prompt may be text, a symbol, or a picture, and instructs the user to perform the operation that triggers the pause. The prompt may also be announced by voice, or both displayed and announced. For example, when a star and a commodity are displayed, the prompt may read: "Star face / same item found. Press a key to learn more." If the key corresponding to the prompt is the "up" direction key, playback pauses after the user presses "up".
Based on the previous embodiment, in another optional embodiment, after displaying M pieces of feature information of the first target recognized object in the paused video picture, the method further includes:
and determining a second target identified object according to the selection instruction, and displaying M pieces of characteristic information of the second target identified object.
In this embodiment, for other recognized objects (for example, the third target recognized object) in the paused video picture, the corresponding M pieces of feature information may also be displayed. In the paused video picture, the user can select any one of the recognized objects through the direction keys of the remote controller, thereby obtaining more detailed information.
It should be noted that after the second target recognized object is determined according to the selection instruction, N pieces of feature information of the first target recognized object may still be displayed. In this way, only the second target recognized object has M pieces of feature information on the video picture, while the other recognized objects (e.g., the first target recognized object) have N. Detailed information of the second target recognized object is thus shown alongside less information about the others.
In addition, after the M pieces of feature information of the identified object are viewed, the video may be replayed according to the return instruction.
Optionally, during video playback, the feature information of each identified object may not be displayed again after its first display, so the user does not see repeated feature information for the same object. The number of displays may also be set according to actual needs, for example once or twice, and the specific number is not limited.
In another optional embodiment, the method further comprises: and sequentially displaying the characteristic information of all the identified objects displayed before the preset time from the preset time.
In this embodiment, the preset time may be the start of the ending period of the video, for example the last 30 seconds or the last 60 seconds. The preset time is not limited to these examples.
Before the preset time, the feature information of all identified objects that have been displayed can be collected; from the preset time onward, it can be displayed again in the order in which the feature information of the different identified objects originally appeared. This provides another way of presenting the feature information: the feature information of all identified objects can be shown during the ending period of the video, instead of only synchronously with the objects themselves.
It should be noted that the feature information of all the recognized objects that have been displayed before the preset time may also be displayed in a random order. It is understood that the feature information of all the recognized objects may be displayed in a reverse order, or simultaneously, or in other manners according to actual conditions.
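A sketch of the end-of-video recap, assuming a log of (display time, object, feature information) entries recorded during playback (the data layout is an assumption):

```python
# Replay, at a preset time such as the start of the closing credits, the
# feature information of every object shown earlier, in display order.
def recap(display_log):
    """display_log: list of (display_time, object_id, features) tuples
    appended as objects were shown. Returns features in display order."""
    return [features for _, _, features in sorted(display_log)]

log = [(30, "A", ["name A"]), (10, "B", ["name B"]), (20, "C", ["name C"])]
print(recap(log))  # [['name B'], ['name C'], ['name A']]
```

A random or reverse order, as the text also permits, would replace the `sorted` call with a shuffle or a reverse sort.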
Referring to fig. 4, the present application provides a display device 400, which can implement the method for displaying video identification information in the embodiment or the alternative embodiment shown in fig. 1. One embodiment of the display device 400 includes:
the processing module 401 is configured to obtain an identification information list corresponding to the video, where the identification information list includes identification information of at least one identified object, and the identification information of each identified object includes at least one identification time and identification range information of the identified object at each identification time, and the identification time represents a time at which each identified object is identified in the video;
the processing module 401 is further configured to obtain a playing time according to the playing progress of the video;
a display module 402, configured to, when the playing time is equal to the target identification time, obtain identification range information of the target identified object from the identification information list according to the target identification time, and display the identification range of the target identified object according to the identification range information of the target identified object.
In an optional embodiment, the display module 402 is further configured to, in a case that the identification information list further includes feature information of the identified object, when the playing time is equal to the target identification time, obtain N pieces of feature information of the target identified object from the identification information list according to the target identification time, and display the N pieces of feature information of the target identified object, where N is a positive integer.
In another optional embodiment, the display module 402 is specifically configured to set a first feature information area and a second feature information area arranged in parallel at an edge of the video picture, display N pieces of feature information of the first identified object in the first feature information area, and display N pieces of feature information of the second identified object in the second feature information area.
In another optional embodiment, the display module 402 is further configured to set a background color of the first characteristic information area and a border line color of the recognition range of the first recognized object to a first color; the background color of the second characteristic information area and the border line color of the recognition range of the second recognized object are set to a second color, and the first color is different from the second color.
In a further alternative embodiment of the method,
the processing module 401 is further configured to obtain a first separation line and a second separation line according to a preset separation point, where the first separation line and the second separation line are perpendicular to each other and intersect at the separation point; dividing the video picture of the video into a plurality of video areas according to a first separation line and a second separation line;
a display module 402, specifically configured to determine a target video area corresponding to the positioning coordinates of the identification range; determining a relative position relation corresponding to the target video area according to a preset corresponding relation between the video area and the relative position relation, wherein the relative position relation represents the position of the characteristic information area relative to the identification range; and setting a characteristic information area according to the relative position relation corresponding to the target video area, and displaying N pieces of characteristic information of the target identified object in the characteristic information area.
In a further optional embodiment,
the processing module 401 is further configured to pause playing of the video according to the pause instruction;
the display module 402 is further configured to, if the paused video picture includes a plurality of identified objects, calculate a distance between the positioning coordinate of the identification range corresponding to each identified object and a preset coordinate, determine a first target identified object according to the calculated minimum distance, and display M pieces of feature information of the first target identified object on the paused video picture, where M is a positive integer greater than N.
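The nearest-object selection on pause reduces to a minimum-distance search. The sketch below assumes Euclidean distance and assumes the preset coordinate is the picture center; the patent specifies neither, only that the object with the minimum distance becomes the first target.

```python
import math

def pick_first_target(objects: dict, preset: tuple = (960.0, 540.0)) -> str:
    """On pause, choose the identified object whose recognition-range
    positioning coordinate lies closest to the preset coordinate.

    `objects` maps an object id to the (x, y) positioning coordinate of
    its identification range; the default preset coordinate assumes the
    center of a 1920x1080 picture."""
    return min(objects, key=lambda oid: math.dist(objects[oid], preset))

targets = {"actor_a": (900.0, 500.0), "actor_b": (200.0, 100.0)}
first_target = pick_first_target(targets)  # actor_a is nearer the center
```

Ties (two objects equidistant from the preset coordinate) fall to whichever `min` sees first; the patent does not say how a tie should be broken.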
In a further optional embodiment,
the display module 402 is further configured to sequentially display, from a preset time, feature information of all identified objects that have been displayed before the preset time.
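This replay behavior can be sketched as a filter-and-sort over the display history. The `(identification_time, object_id, feature_info)` record shape is an assumption for illustration; the patent only requires that, from a preset time onward, the feature information of all objects shown before that time is displayed again in sequence.

```python
def replay_feature_info(records: list, preset_time: float) -> list:
    """Return, in their original display order, the feature-information
    records of all identified objects that were displayed before
    `preset_time`.  `records` is an assumed list of
    (identification_time, object_id, feature_info) tuples accumulated
    while the video played."""
    shown_before = [r for r in records if r[0] < preset_time]
    return sorted(shown_before, key=lambda r: r[0])

history = [(5.0, "a", "hat"), (12.0, "b", "coat"), (8.0, "a", "scarf")]
to_replay = replay_feature_info(history, preset_time=10.0)
```

With a preset time of 10.0, only the two records identified at 5.0 and 8.0 are replayed, in that order.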
Referring to fig. 5, the present application provides a terminal 500 capable of implementing the method for displaying video identification information in the embodiment or the alternative embodiment shown in fig. 1. One embodiment of the terminal 500 includes:
an input unit 501, a processor 502, a display unit 503, and a memory 504; the input unit 501, the processor 502, the display unit 503, and the memory 504 are connected by a bus;
the input unit 501 is used to receive input from a user. The input unit may be a mouse, a keyboard, a touch screen device or a sensing device, etc.
The processor 502 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor or any conventional processor.
The display unit 503 is used to display information. The display unit may be a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display device, a Cathode Ray Tube (CRT) display device, a projector (projector), or the like.
The memory 504 may be, or include, volatile and non-volatile memory. Non-volatile memory may be Read-Only Memory (ROM), Programmable ROM (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), or flash memory. Volatile memory may be Random Access Memory (RAM), which acts as an external cache. The memory described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
The block diagram given in this embodiment shows only a simplified design of the terminal 500. In practical applications, the terminal 500 may include any number of input units 501, processors 502, display units 503, memories 504, and the like to implement the functions or operations performed by the terminal 500 in the embodiments of the present application; all devices capable of implementing the present application fall within its protection scope. Although not shown, the terminal 500 may further include an audio unit, a power supply, a sensor, a Bluetooth unit, a WiFi unit, a camera, and the like.
In this embodiment, the memory 504 is used to store programs. In particular, the program may include program code comprising computer operating instructions. The processor 502 executes the program code stored in the memory 504 to perform the steps in the embodiment shown in fig. 1 or an alternative embodiment.
The display apparatus 400 or the terminal 500 of the present application may be a smart tv, a tv box, or a projector, or may be a component of the smart tv, the tv box, or the projector for executing the display method of the video identification information of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into units is only one logical division, and other divisions are possible in practice; multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through interfaces, devices, or units, and may be electrical, mechanical, or of another form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part of it that contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (11)

1. A method for displaying video identification information, comprising:
acquiring an identification information list corresponding to a video, wherein the identification information list comprises identification information of at least one identified object, the identification information of each identified object comprises at least one identification moment and identification range information corresponding to each identification moment of the identified object, and the identification moment represents the moment when each identified object is identified in the video;
acquiring playing time according to the playing progress of the video;
when the playing time is equal to the target identification time, acquiring identification range information corresponding to a target identified object from the identification information list according to the target identification time, and displaying the identification range of the target identified object according to the identification range information corresponding to the target identified object;
pausing the playing of the video according to a pause instruction;
if the paused video picture comprises a plurality of identified objects, calculating the distance between the positioning coordinate of the identification range corresponding to each identified object and a preset coordinate, determining a first target identified object according to the calculated minimum distance, and displaying M pieces of characteristic information of the first target identified object on the paused video picture, wherein M is a positive integer greater than N;
wherein N is a positive integer defined as follows: the identification information list further comprises feature information of the identified object; when the playing time is equal to the target identification time, N pieces of feature information of the target identified object are acquired from the identification information list according to the target identification time, and the N pieces of feature information of the target identified object are displayed.
2. The method of claim 1,
the target identified object comprises a first identified object and a second identified object;
the displaying the N pieces of feature information of the target identified object comprises: setting a first feature information area and a second feature information area arranged in parallel at an edge of the video picture, displaying the N pieces of feature information of the first identified object in the first feature information area, and displaying the N pieces of feature information of the second identified object in the second feature information area.
3. The method of claim 2, further comprising:
setting a background color of the first feature information area and a border line color of the identification range of the first identified object to a first color;
setting a background color of the second feature information area and a border line color of the identification range of the second identified object to a second color, wherein the first color is different from the second color.
4. The method of claim 1, further comprising: determining a first separation line and a second separation line that perpendicularly intersect at a preset separation point; determining a left region and a right region separated by the first separation line in a video picture of the video; determining a first video region and a third video region separated by the second separation line in the left region, and determining a second video region and a fourth video region separated by the second separation line in the right region;
the displaying the N pieces of feature information of the target recognized object comprises: determining a target video area corresponding to the positioning coordinates of the identification range; determining a relative position relation corresponding to the target video area according to a preset corresponding relation between the video area and the relative position relation, wherein the relative position relation represents the position of the characteristic information area relative to the identification range; and setting a characteristic information area according to the relative position relation corresponding to the target video area, and displaying N pieces of characteristic information of the target identified object in the characteristic information area.
5. The method according to any one of claims 2 to 4, further comprising:
sequentially displaying, from a preset time, the feature information of all identified objects displayed before the preset time.
6. A display device, comprising:
the processing module is used for acquiring an identification information list corresponding to a video, wherein the identification information list comprises identification information of at least one identified object, the identification information of each identified object comprises at least one identification moment and identification range information of the identified object at each identification moment, and the identification moment represents the moment when each identified object is identified in the video;
the processing module is further used for acquiring playing time according to the playing progress of the video;
the display module is used for acquiring the identification range information of the target identified object from the identification information list according to the target identification time when the playing time is equal to the target identification time, and displaying the identification range of the target identified object according to the identification range information of the target identified object;
the processing module is further used for pausing the playing of the video according to a pause instruction;
the display module is further configured to calculate a distance between a positioning coordinate of an identification range corresponding to each identified object and a preset coordinate if the paused video picture includes a plurality of identified objects, determine a first target identified object according to the calculated minimum distance, and display M pieces of feature information of the first target identified object on the paused video picture, where M is a positive integer greater than N;
wherein N is a positive integer defined as follows: the identification information list further comprises feature information of the identified object; when the playing time is equal to the target identification time, N pieces of feature information of the target identified object are acquired from the identification information list according to the target identification time, and the N pieces of feature information of the target identified object are displayed.
7. The display device according to claim 6,
the display module is specifically configured to set a first feature information area and a second feature information area, which are arranged in parallel, at an edge of a video picture, display N pieces of feature information of a first identified object in the first feature information area, and display N pieces of feature information of a second identified object in the second feature information area.
8. The display device according to claim 7,
the display module is further configured to set a background color of the first feature information area and a border line color of the identification range of the first identified object to a first color, and to set a background color of the second feature information area and a border line color of the identification range of the second identified object to a second color, the first color being different from the second color.
9. The display device according to claim 6, wherein the processing module is further configured to determine a first separation line and a second separation line that perpendicularly intersect at a predetermined separation point; determining a left region and a right region separated by the first separation line in a video picture of the video; determining a first video region and a third video region separated by the second separation line in the left region and determining a second video region and a fourth video region separated by the second separation line in the right region;
the display module is specifically used for determining a target video area corresponding to the positioning coordinates of the identification range; determining a relative position relation corresponding to the target video area according to a preset corresponding relation between the video area and the relative position relation, wherein the relative position relation represents the position of the characteristic information area relative to the identification range; and setting a characteristic information area according to the relative position relation corresponding to the target video area, and displaying N pieces of characteristic information of the target identified object in the characteristic information area.
10. The display device according to any one of claims 6 to 9,
the display module is further configured to sequentially display, from a preset time, the feature information of all the identified objects displayed before the preset time.
11. A computer storage medium comprising instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1 to 5.
CN201910691301.6A 2019-07-29 2019-07-29 Display method of video identification information and related equipment Active CN110392310B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910691301.6A CN110392310B (en) 2019-07-29 2019-07-29 Display method of video identification information and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910691301.6A CN110392310B (en) 2019-07-29 2019-07-29 Display method of video identification information and related equipment

Publications (2)

Publication Number Publication Date
CN110392310A CN110392310A (en) 2019-10-29
CN110392310B true CN110392310B (en) 2022-06-03

Family

ID=68287649

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910691301.6A Active CN110392310B (en) 2019-07-29 2019-07-29 Display method of video identification information and related equipment

Country Status (1)

Country Link
CN (1) CN110392310B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114979741A (en) * 2021-02-20 2022-08-30 腾讯科技(北京)有限公司 Method and device for playing video, computer equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103686413A (en) * 2013-12-19 2014-03-26 宇龙计算机通信科技(深圳)有限公司 Auxiliary display method and device
CN104486680A (en) * 2014-12-19 2015-04-01 珠海全志科技股份有限公司 Video-based advertisement pushing method and system
CN107122093A (en) * 2017-02-24 2017-09-01 北京悉见科技有限公司 Message box display methods and device
CN108156504A (en) * 2017-12-20 2018-06-12 浙江大华技术股份有限公司 A kind of image display method and device
CN108174270A (en) * 2017-12-28 2018-06-15 广东欧珀移动通信有限公司 Data processing method, device, storage medium and electronic equipment
CN109391848A (en) * 2017-08-03 2019-02-26 掌游天下(北京)信息技术股份有限公司 A kind of interactive advertisement system
CN109495780A (en) * 2018-10-16 2019-03-19 深圳壹账通智能科技有限公司 A kind of Products Show method, terminal device and computer readable storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10674230B2 (en) * 2010-07-30 2020-06-02 Grab Vision Group LLC Interactive advertising and marketing system
JP5857450B2 (en) * 2011-05-30 2016-02-10 ソニー株式会社 Information processing apparatus, information processing method, and program
CN103702226A (en) * 2013-12-31 2014-04-02 广州华多网络科技有限公司 Displaying method and device of direct broadcasting client end channel information
CN108471551A (en) * 2018-03-23 2018-08-31 上海哔哩哔哩科技有限公司 Video main information display methods, device, system and medium based on main body identification

Also Published As

Publication number Publication date
CN110392310A (en) 2019-10-29

Similar Documents

Publication Publication Date Title
US10368123B2 (en) Information pushing method, terminal and server
US9674582B2 (en) Comment-provided video generating apparatus and comment-provided video generating method
US20160366466A1 (en) Method for displaying bullet screen of video, and electronic device
US20170223430A1 (en) Methods and apparatus for content interaction
US20230154008A1 (en) Methods and Systems for Scoreboard Region Detection
US20240137584A1 (en) Methods and Systems for Extracting Sport-Related Information from Digital Video Frames
US20230298275A1 (en) Method, apparatus and system for facilitating navigation in an extended scene
EP3425483B1 (en) Intelligent object recognizer
US11830261B2 (en) Methods and systems for determining accuracy of sport-related information extracted from digital video frames
US20130198766A1 (en) Method for providing user interface and video receiving apparatus thereof
US12010359B2 (en) Methods and systems for scoreboard text region detection
US11798279B2 (en) Methods and systems for sport data extraction
CN111683267A (en) Method, system, device and storage medium for processing media information
CN111954045A (en) Augmented reality device and method
CN110881134A (en) Data processing method and device, electronic equipment and storage medium
JP2014041433A (en) Display device, display method, television receiver, and display control device
CN110392310B (en) Display method of video identification information and related equipment
CN114564131B (en) Content publishing method, device, computer equipment and storage medium
JP2016090727A (en) Content output apparatus and program
CN112954437A (en) Video resource processing method and device, computer equipment and storage medium
CN107578306A (en) Commodity in track identification video image and the method and apparatus for showing merchandise news
US11523171B2 (en) Information processing device
CN115734014A (en) Video playing method, processing method, device, equipment and storage medium
JP2018010431A (en) Image processing device, image processing method, and program
US8687012B2 (en) Information processing apparatus, storage medium, information processing method and information processing system for adjusting a position of an object to be imaged on a display

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant