CN109587572B - Method and device for displaying product, storage medium and electronic equipment - Google Patents

Method and device for displaying product, storage medium and electronic equipment

Info

Publication number
CN109587572B
CN109587572B (application CN201811518879.3A)
Authority
CN
China
Prior art keywords
video frame
video
playing
display
operation instruction
Prior art date
Legal status
Active
Application number
CN201811518879.3A
Other languages
Chinese (zh)
Other versions
CN109587572A (en)
Inventor
王诸宏伟
李作静
Current Assignee
Jinguazi Technology Development Beijing Co ltd
Original Assignee
Jinguazi Technology Development Beijing Co ltd
Priority date
Filing date
Publication date
Application filed by Jinguazi Technology Development Beijing Co., Ltd.
Priority to CN201811518879.3A
Publication of CN109587572A
Application granted
Publication of CN109587572B
Status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682 Vibration or motion blur correction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a method, an apparatus, a storage medium and an electronic device for displaying a product. The method comprises the following steps: when a display video of a target product is played, if a first operation instruction for controlling the display video to pause playing is received, pausing the playing of the display video and generating a video frame group; receiving a second operation instruction, where the second operation instruction comprises a first gesture used for controlling continuous playing of the video frames in the video frame group and also used for indicating the playing order of the video frames in the video frame group; and, in response to the second operation instruction, continuously playing the video frames in the video frame group in the playing order indicated by the first gesture. With the method, apparatus, storage medium and electronic device provided by the embodiments of the invention, the individual characteristics of each product can be displayed, the user can conveniently view product details, and the display effect is more precise and accurate.

Description

Method and device for displaying product, storage medium and electronic equipment
Technical Field
The invention relates to the technical field of product display, in particular to a method and a device for product display, a storage medium and electronic equipment.
Background
With the development of internet technology, online transactions are becoming more and more common. In an online transaction, the seller first needs to display the product online for the buyer to browse. The usual display mode is a set of pictures; for example, when the product is a car, a group of photos of the exterior and interior taken at certain angles is shown, each photo presenting the vehicle condition at the corresponding angle. Since the angles shown by pictures are limited, this approach cannot smoothly present the vehicle condition from every angle.
In order to display the product, the following two ways can be generally adopted:
1. and (5) video display mode. An omnidirectional video is recorded to show the product. This approach, while allowing for more angular display of the product, does not allow for detailed display of product details.
2. And 3D rendering display mode. And (3) exporting a plurality of pictures (for example, 24 pictures) after rendering by using 3D software, and then synthesizing to generate and display a 3D rendering image of the product. Although the mode can show product details, the rendering cost is high, the mode is only suitable for showing batch products, for example, all new cars of a certain model are shown by the same 3D rendering graph, the individuality characteristics of the products cannot be embodied, for example, the appearance defects of second-hand cars (scratch, collision dent, paint surface problems and the like) cannot be embodied.
Disclosure of Invention
In order to solve the above problems, embodiments of the present invention provide a method, an apparatus, a storage medium, and an electronic device for displaying a product.
In a first aspect, an embodiment of the present invention provides a method for displaying a product, including:
when a display video of a target product is played, if a first operation instruction for controlling the display video to pause playing is received, pausing the playing of the display video and generating a video frame group, wherein the video frame group comprises: a current video frame corresponding to the pause playing action, at least one first video frame played before the pause playing action, and at least one second video frame played after the pause playing action;
receiving a second operation instruction, wherein the second operation instruction comprises: a first gesture for controlling continuous playing of video frames in the group of video frames, the first gesture further being for indicating a playing order of the video frames in the group of video frames;
and responding to the second operation instruction, and continuously playing the video frames in the video frame group according to the playing sequence indicated by the first gesture.
In a second aspect, an embodiment of the present invention further provides an apparatus for displaying a product, including:
a generating module, configured to, when a display video of a target product is played, pause playing of the display video if a first operation instruction for controlling the display video to pause playing is received, and generate a video frame group, where the video frame group includes: a current video frame corresponding to the pause playing action, at least one first video frame played before the pause playing action, and at least one second video frame played after the pause playing action;
the first operation module is used for receiving a second operation instruction, and the second operation instruction comprises: a first gesture for controlling continuous playing of video frames in the group of video frames, the first gesture further being for indicating a playing order of the video frames in the group of video frames;
and the display module is used for responding to the second operation instruction and continuously playing the video frames in the video frame group according to the playing sequence indicated by the first gesture.
In a third aspect, an embodiment of the present invention further provides a storage medium, where the storage medium stores computer-executable instructions for performing any one of the above methods for displaying a product.
In a fourth aspect, an embodiment of the present invention further provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any one of the foregoing methods of displaying a product.
In the solution provided by the foregoing first aspect of the embodiments of the present invention, after the current video frame is determined when the user pauses playback, video frames are extracted from the display video and a video frame group of the target product is generated; the user selects the position of interest by operating the video frame group, and the video frame image at that position is displayed so that the user can view the product details. With this method, no 3D rendering is needed, the cost of obtaining the display video is low, a video frame group can be generated conveniently and quickly for each product, and the individual characteristics of each product can be displayed; showing the product to the user by displaying video frames makes it convenient for the user to inspect product details, and the display effect is more precise and accurate.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention; those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 illustrates a flow chart of a method of displaying a product provided by an embodiment of the present invention;
fig. 2 is a flowchart illustrating a specific method for generating a video frame group in a method for displaying a product according to an embodiment of the present invention;
FIG. 3a is a schematic diagram of a captured presentation video according to an embodiment of the present invention;
FIG. 3b is another schematic diagram of capturing a presentation video according to an embodiment of the present invention;
fig. 4a is a schematic diagram illustrating imaging principles of the imaging apparatus provided by the embodiment of the present invention;
fig. 4b is a schematic diagram illustrating imaging principles when the image pickup apparatus provided by the embodiment of the present invention shakes;
FIG. 5 illustrates a flow chart of another method of displaying a product provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of an apparatus for displaying products according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of another apparatus for displaying products according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of another apparatus for displaying a product according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an electronic device for executing a method for displaying a product according to an embodiment of the present invention.
Detailed Description
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", and the like, indicate orientations and positional relationships based on those shown in the drawings, and are used only for convenience of description and simplicity of description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be considered as limiting the present invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In the present invention, unless otherwise expressly specified or limited, the terms "mounted," "connected," "secured," and the like are to be construed broadly and can, for example, be a fixed connection, a detachable connection, or an integral connection; a mechanical or an electrical connection; a direct connection or an indirect connection through an intermediate medium, or an internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific situation.
According to the method for displaying the product, provided by the embodiment of the invention, the product is displayed by extracting the video frame in the product display video, so that the cost is low, and the product details can be displayed. Referring to fig. 1, the method includes:
step 101: when the display video of the target product is played, if a first operation instruction for controlling the display video to pause playing is received, the display video is paused to be played, and a video frame group is generated, wherein the video frame group comprises: the video playing method comprises the steps of playing a current video frame corresponding to a pause playing action, playing at least one first video frame before the pause playing action, and playing at least one second video frame after the pause playing action.
In the embodiment of the invention, the target product is a product to be displayed, for example a product to be sold in an online shopping mall. The display video of the target product can be acquired with a camera device, which can be a camera, a single-lens reflex camera, a video recorder, a smartphone with a camera function, and the like. Specifically, when the display video of the target product is collected, the display video is shot along a preset path.
In the embodiment of the present invention, the entire display video of the target product may be shot along a preset path, or multiple display videos of the target product may be shot along multiple preset paths, for example, a first display video is shot along a first preset path, and then a second display video is shot along a second preset path, and the multiple display videos are combined into the total display video of the target product. The preset path refers to a relative movement path between the target product and the camera device when the display video is shot. Before the "playing the display video of the target product" in step 101, the display video of the target product needs to be acquired, and the process of acquiring the display video may include:
when the target product moves along the first shooting path, a camera device at a fixed position collects a display video of the target product; or a camera device moving along the second shooting path collects a display video of the target product, which remains at a fixed position. In the embodiment of the invention, the first shooting path and the second shooting path are the paths of the camera device or the target product relative to other stationary reference objects.
For example, the target product is placed and then stands still, and the image pickup device can acquire a display video of the target product along a horizontal circumferential path by rotating for a circle (360 °) in the horizontal direction; or the camera shooting equipment is still, the target product is placed on the rotating platform to rotate for a circle, and the display video on the circular path can also be acquired. By shooting the first display video of the target product along the first preset path, images of the target product at different positions or at different angles can be acquired.
In the embodiment of the invention, after the display video is acquired, it can be played in various ways, for example in a user's client or in a web page. While the display video is playing, the user can input a first operation instruction to pause playback and thereby select the current video frame of interest. After the current video frame is determined, a plurality of video frames can be extracted from the display video, including at least one first video frame before the current video frame and at least one second video frame after the current video frame; different first video frames (or second video frames) correspond to different positions or different angles of the target product. The video frame group of the target product is then generated from all the extracted video frames (including the first video frames and the second video frames) and the current video frame selected by the user. The first video frames and the second video frames can be extracted uniformly from the display video according to a preset rule. For example, if the display video was captured by the camera device rotating around the target product in a horizontal plane, each frame in the display video corresponds to one display angle of the target product. In this case the video can be divided uniformly into 360 segments and the first frame of each segment used as an extracted first video frame or second video frame; or a video frame can be extracted at regular intervals (e.g., every 1 second) as a first video frame or second video frame.
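For illustration, the following Python sketch (using OpenCV) pulls evenly spaced frames from a display video in the way just described; the frame count of 360, the function name and the file name are assumptions made for illustration rather than values fixed by the embodiment.

    import cv2

    def extract_uniform_frames(video_path, num_frames=360):
        """Extract num_frames evenly spaced frames from a display video.

        Each extracted frame stands for one viewing angle of the target
        product (roughly 1 degree per frame for a full horizontal circle).
        """
        cap = cv2.VideoCapture(video_path)
        total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
        step = max(total // num_frames, 1)
        frames = []
        for index in range(0, total, step):
            cap.set(cv2.CAP_PROP_POS_FRAMES, index)
            ok, frame = cap.read()
            if ok:
                frames.append((index, frame))
        cap.release()
        return frames

    # Usage (hypothetical file name):
    # group = extract_uniform_frames("display_video.mp4", num_frames=360)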
Alternatively, after the current video frame is determined, a plurality of video frames can be extracted from the display video according to a preset selection rule, with the current video frame as the reference. For example, if the preset selection rule is to extract one video frame every 10 frames and the current video frame is the 12th frame, then the 2nd frame, the 22nd frame, the 32nd frame, and so on of the first display video are extracted as the first video frames and second video frames. The different extracted video frames show real images of the target product at different angles.
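A minimal sketch of such a preset selection rule is shown below: indices are chosen at a fixed spacing on either side of the paused frame. The spacing of 10 frames and the number of frames per side are illustrative assumptions.

    def select_frame_indices(current_index, total_frames, spacing=10, per_side=5):
        """Indices for the video frame group: first video frames before the
        paused frame and second video frames after it, every spacing frames."""
        before = [current_index - spacing * i
                  for i in range(per_side, 0, -1)
                  if current_index - spacing * i >= 0]
        after = [current_index + spacing * i
                 for i in range(1, per_side + 1)
                 if current_index + spacing * i < total_frames]
        return before + [current_index] + after

    # E.g. a current frame of 12 with spacing 10 yields 2, 12, 22, 32, ...
    # print(select_frame_indices(12, 100))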
Step 102: receiving a second operation instruction, wherein the second operation instruction comprises: the first gesture is used for controlling the video frames in the video frame group to continuously play, and the first gesture is also used for indicating the playing sequence of the video frames in the video frame group.
In the embodiment of the present invention, after a video frame group is generated, a video frame in the video frame group may be displayed, for example, the video frame group is displayed in a web page or an APP, and the video frame group is operable, a user may input a second operation instruction to form a first gesture for controlling continuous playing of the video frame group, where the first gesture may specifically be a horizontal gesture, a vertical gesture, or a gesture in other oblique directions, and a playing order of video frames in the video frame group may be determined according to a direction corresponding to the first gesture. For example, the video frame group includes video frames of a target product shot by the camera device at multiple positions in a stereoscopic space, and if the first gesture is a horizontal rightward sliding gesture, a video frame corresponding to the sliding direction of the first gesture is selected from the video frame group, and the selected video frame is played in a rightward sliding sequence, so that stereoscopic display of the target product is achieved.
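As an illustration of how the first gesture could drive continuous playback, the sketch below lets the horizontal sign of the swipe choose a forward or reverse playing order within the frame group; the function and parameter names are hypothetical and the mapping is only one possible choice.

    def frames_to_play(frame_group, current_pos, swipe_dx):
        """Pick the frames to play continuously from the paused position.

        frame_group : list of frames ordered by shooting position/angle
        current_pos : index of the paused (current) video frame in the group
        swipe_dx    : horizontal displacement of the first gesture in pixels
                      (positive = swipe right, negative = swipe left)
        """
        if swipe_dx >= 0:
            # play the second video frames in increasing order
            return frame_group[current_pos:]
        # play the first video frames in decreasing order
        return frame_group[current_pos::-1]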
Optionally, the second operation instruction may specifically be a movement instruction, a rotation instruction, and the like. After receiving the second operation instruction, the image displayed by the video frame group changes, and the pose parameters corresponding to the video frame group change, where the pose parameters may include position coordinates and angles. For example, the display video is a video shot around the used car for one circle, 360 video frames in the display video are uniformly extracted, and each frame corresponds to one angle; if the angle corresponding to the current second operation instruction slides from 10 ° to 30 °, it can be determined that the pose parameter (angle) when the video frame group stops playing at this time is 30 °. Or, the pose parameter may be a serial number, and one serial number corresponds to one video frame in the video frame group, that is, it is only necessary to ensure that the pose parameter and the video frame are in a one-to-one correspondence relationship.
In addition, the second operation instruction may also be a zoom instruction. Since a zoom instruction does not change the pose parameters of the video frame group, the pose parameters corresponding to the second operation instruction are in that case the pose parameters of the current video frame, and the current video frame is zoomed so that the user can conveniently view its details. Optionally, while the display video is being played in step 101, the user may also directly input a zoom instruction (the first operation instruction and the second operation instruction then being the same instruction), in which case the current video frame is zoomed directly; that is, with a single zoom instruction the user completes both the selection of the current video frame and the zooming of the current video frame, while the video frame group is generated in the background without the user perceiving it.
Step 103: and responding to the second operation instruction, and continuously playing the video frames in the video frame group according to the playing sequence indicated by the first gesture.
In the embodiment of the present invention, after the video frames corresponding to the second operation instruction are determined, the video frames in the video frame group can be played continuously in the playing order corresponding to the first gesture.
Optionally, the pose parameter of the video frame at which continuous playing ends is determined and can be used as the pose parameter of the video frame group after responding to the second operation instruction; the video frame corresponding to this pose parameter is referred to as the target video frame in this embodiment. Specifically, when the display video is collected, a corresponding pose parameter can be set for each frame; or when the first video frames (or second video frames) are extracted from the display video, a corresponding pose parameter can be set for each first video frame (or second video frame). For example, if 36 video frames are extracted from a full-circle (360°) display video, the pose parameters of the video frames may be 0°, 10°, ..., 350° in sequence. After the target video frame is determined, the details of the target product can be shown to the user by displaying the target video frame.
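A sketch of this pose-parameter bookkeeping is given below, assuming 36 frames spaced 10° apart as in the example; the helper names are not taken from the patent, and wrap-around at 360° is ignored for brevity.

    def assign_pose_angles(frame_group, full_circle=360.0):
        """Attach an angle pose parameter to each frame, e.g. 0, 10, ..., 350
        degrees for 36 frames extracted from a full-circle display video."""
        step = full_circle / len(frame_group)
        return [(i * step, frame) for i, frame in enumerate(frame_group)]

    def target_frame_for_angle(posed_frames, stop_angle):
        """Return the frame whose pose angle is closest to where playback stopped."""
        return min(posed_frames, key=lambda pf: abs(pf[0] - stop_angle))[1]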
Specifically, after generating the video frame group, the method may further include: receiving a third operation instruction, wherein the third operation instruction comprises: a second gesture for controlling one of the video frames in the group of video frames to zoom in or out; and responding to the third operation instruction, and enlarging or reducing one video frame according to the second gesture.
The video frame controlled by the third operation instruction may be one of the video frames played continuously in step 103, or may be the last of them, that is, the target video frame. The target video frame is an image, and the user can inspect finer characteristics of the product through a third operation instruction, which can be a zoom instruction, a rotation instruction, and the like; for example, product details can be viewed by enlarging the target video frame. Different video frames can be located by inputting different second operation instructions, so the product can be viewed from multiple angles and the user can conveniently examine the part of the product of interest; still finer product characteristics can be viewed by inputting a third operation instruction. In addition, when a video frame is extracted from the display video, de-shake processing can be applied to it using adjacent frames, improving the sharpness and picture quality of the video frame.
According to the method for displaying a product provided by the embodiment of the invention, after the current video frame is determined when the user pauses playback, video frames in the display video are extracted and the video frame group of the target product is generated; the user selects the position of interest by operating the video frame group, and the video frame image at that position is displayed so that the user can view the product details. With this method, no 3D rendering is needed, the cost of obtaining the display video is low, a video frame group can be generated conveniently and quickly for each product, and the individual characteristics of each product can be displayed; showing the product to the user by displaying video frames makes it convenient for the user to inspect product details, and the display effect is more precise and accurate.
In the above embodiment, the display video includes a first display video captured along a first preset path and a second display video captured along a second preset path, where the first preset path and the second preset path both refer to a relative movement path between the target product and the image capture device when the display video is captured, and the first preset path and the second preset path are different. Meanwhile, the first display video is marked with key frames. At this time, referring to fig. 2, the step 101 "generating a video frame group" specifically includes:
step 1011: and taking the key frames in the first display video as partial or all effective video frames extracted from the first display video, and adding the determined effective video frames to the video frame group.
In the embodiment of the present invention, the effective video frames may include the first video frames and the second video frames. A key frame may specifically be a frame that highlights the individual characteristics of the target product, for example a scratch on a used vehicle. One or more frames of the first display video can be marked as key frames when the video is collected, and these key frames can then be used as extracted effective video frames; if the number of key frames is small, other frames can additionally be extracted as effective video frames. If no key frames were marked in the first display video, the key frames are marked while the effective video frames are being extracted from the first display video, and the key frames are taken as effective video frames; and if the current video frame selected by the user is itself a key frame, the current video frame is treated as a key frame in the subsequent processing. The extracted effective video frames are then added to the video frame group, and the video frame group is generated from the current video frame and the effective video frames (including the key frames). By marking key frames, the individual characteristics of the product can be made more prominent when the video frame group of the product is displayed.
On the basis of the foregoing embodiment, the video frame groups of the product may be generated based on multiple groups of display videos, and specifically, referring to fig. 2, the step 101 "generating a video frame group" further includes:
step 1012: and acquiring a second display video of the target product, wherein the second display video is a video shot along a second preset path and comprises at least one key frame in the first display video.
Step 1013: extracting a plurality of effective video frames from the second display video, and adding the extracted effective video frames to the video frame group.
In the embodiment of the invention, a plurality of display videos of the target product can be collected, and an intersection, namely a key frame, exists between two display videos. Specifically, when the first display video and the second display video are captured, a certain frame may be set as a common key frame. For example, referring to fig. 3a, when a display video of a used vehicle is acquired by using a camera, a first display video is acquired by rotating the camera for one circle in a horizontal direction, and a motion track of the camera passes through a point a; and then taking the point A as a starting point (or passing the point A), and acquiring a second display video by rotating for half a cycle in the vertical direction, namely, a frame of image corresponding to the point A is a common key frame of the two. In fig. 3a, arrow lines in the left-right direction indicate a first preset trajectory, and arrow lines in the up-down direction indicate a second preset trajectory. Or after the first display video and the second display video are obtained, determining the same frame of image in the two display videos based on an image recognition technology, and then using the same frame of image as a key frame. After the key frame is determined, the effective video frames in the second display video can be extracted, and the relative pose between the effective video frames in the two display videos is determined by taking the key frame as a reference.
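Where the common key frame is recovered by image recognition rather than marked at capture time, one simple possibility is to compare compact frame signatures across the two display videos; the average-hash comparison below is a sketch under that assumption and is not a technique prescribed by the patent.

    import cv2
    import numpy as np

    def average_hash(frame, size=8):
        """Compact binary signature of a frame (average hash)."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        small = cv2.resize(gray, (size, size))
        return (small > small.mean()).flatten()

    def find_common_key_frame(frames_a, frames_b, max_distance=5):
        """Return the index pair (i, j) of the most similar frames across the
        two display videos, or None if nothing is close enough."""
        hashes_b = [average_hash(f) for f in frames_b]
        best = None
        for i, fa in enumerate(frames_a):
            ha = average_hash(fa)
            for j, hb in enumerate(hashes_b):
                d = int(np.count_nonzero(ha != hb))  # Hamming distance
                if best is None or d < best[0]:
                    best = (d, i, j)
        if best is not None and best[0] <= max_distance:
            return best[1], best[2]
        return None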
In the embodiment of the invention, the video frame group can be displayed in multiple directions through a plurality of display videos, so that more display angles are further provided for users, and the display effect is further improved; and the relative pose among the effective video frames in the multiple display videos can be determined by taking the key frames as the reference, and the video frames concerned by the user can be accurately determined when the video frame group is displayed.
On the basis of the above embodiments, the method provided by the embodiment of the present invention can also display the video frames in the video frame group in multiple directions. Specifically, after generating the video frame group, the method further includes: determining a three-dimensional position parameter corresponding to each video frame in the video frame group, wherein the three-dimensional position parameter is a three-dimensional coordinate parameter or a three-dimensional polar coordinate parameter; and generating a plurality of preset playing paths according to the three-dimensional position parameters of all the video frames, wherein each preset playing path corresponds to the three-dimensional position parameters of the plurality of video frames, and each preset playing path is a two-dimensional path in a plane.
In the embodiment of the invention, the three-dimensional position parameter of each video frame in the display video can be determined when the display video is acquired. Specifically, the three-dimensional position parameter of the video frame is used for representing the viewing position of the display target product; for example, when the image capturing device captures a display video of a target product, the target product may be used as an origin of three-dimensional coordinates, and a position where the image capturing device captures a video frame may be used as a three-dimensional position parameter of the video frame.
After the video frame group is determined, a plurality of three-dimensional position parameters can be determined, and the three-dimensional position parameters can be determined to be coplanar by using a mathematical principle, namely, the three-dimensional position parameters are positioned in the same plane, so that a preset playing path is generated. The three-dimensional position parameters corresponding to the preset playing path are coplanar, so that the preset playing path is a two-dimensional path. After receiving a second operation instruction input by the user, the first gesture corresponding to the second operation instruction is also a two-dimensional gesture, so that the preset playing path corresponding to the first gesture can be conveniently determined. In the embodiment of the present invention, the preset playing path may include a horizontal playing path, a vertical playing path, and an inclined playing path, so as to adapt to different playing requirements of users.
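One way to carry out the coplanarity test described above (assuming Cartesian three-dimensional position parameters) is a least-squares view of the capture positions: subtract their centroid and inspect the smallest singular value of the resulting point cloud. The tolerance below is an assumed threshold, and the function name is hypothetical.

    import numpy as np

    def frames_are_coplanar(positions, tol=1e-3):
        """Check whether the 3D capture positions (an N x 3 array) lie in one
        plane; if so, the corresponding frames can form one two-dimensional
        preset playing path."""
        pts = np.asarray(positions, dtype=float)
        centered = pts - pts.mean(axis=0)
        singular_values = np.linalg.svd(centered, compute_uv=False)
        # a near-zero smallest singular value means negligible out-of-plane spread
        return singular_values[-1] < tol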
Specifically, in step 103 "responding to the second operation instruction", the step specifically includes: determining a matched preset playing path according to the operation direction of the first gesture in the second operation instruction, and determining a video frame in a video frame group corresponding to the matched preset playing path; and then continuously playing the video frames in the video frame group corresponding to the matched preset playing path according to the playing sequence indicated by the first gesture.
For example, the operation direction of the first gesture is a vertical direction, and at this time, a preset playing path corresponding to the first gesture can be selected from preset playing paths in the vertical direction, so as to play the video frames in the corresponding video frame group along the vertical direction. By determining the preset playing path corresponding to the first gesture, the user can control the display of the video frame group in the horizontal direction, the vertical direction and the inclined direction, that is, when the user needs to operate the video frame group, the user can operate the video frame group in the transverse direction or the longitudinal direction, and can also operate the video frame group along paths in other directions (such as inclined operation and the like), so that the user can conveniently and quickly locate the concerned position. In the embodiment of the invention, the preset playing path is utilized, so that the three-dimensional model of the target product can be simulated more truly, and a user can conveniently and quickly locate the concerned position.
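A sketch of matching the first gesture to a preset playing path follows: each path is assumed to carry a 2D on-screen direction, and the path most aligned with the swipe vector is chosen. The data layout is an assumption made for illustration.

    import math

    def match_playing_path(paths, swipe_dx, swipe_dy):
        """paths: list of (direction, frame_indices) pairs, where direction is a
        unit 2D vector on screen, e.g. (1, 0) horizontal or (0, 1) vertical.
        Returns the frame indices of the path best aligned with the swipe."""
        norm = math.hypot(swipe_dx, swipe_dy)
        if norm == 0:
            return None
        ux, uy = swipe_dx / norm, swipe_dy / norm
        # absolute cosine, so swiping either way along a path still matches it
        best = max(paths, key=lambda p: abs(p[0][0] * ux + p[0][1] * uy))
        return best[1]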
On the basis of the above embodiments, de-shake processing is also applied to the video frames when the video frame group is generated, so as to reduce the effect of camera shake during movement. Specifically, as shown in fig. 3b, when the camera device (the camera in fig. 3b) shoots the target product (illustrated as a vehicle in fig. 3b), an indicator is placed at a position far from the camera device; the indicator is denoted by [ ] in fig. 3b. Because fig. 3b illustrates a moving camera device and a stationary target product, several indicators need to be provided. As shown in fig. 3b, when the camera device shoots the target product R, the captured indicator S is far from the camera device. That is, in the world coordinate system, the distance between the indicator and the camera device that collects the first display video is not less than a preset distance value, and this preset distance value is greater than the distance between the camera device and the target product, so the indicator is farther from the camera device than the product is. The indicator may be a planar figure, such as a cross, an X shape, or a circle; it only needs to be recognizable in the subsequent processing. The world coordinate system is used to describe the positional relationship between the camera device and objects in the real world.
In an embodiment of the present invention, the process of generating the video frame group specifically includes:
step A1: an original video frame is extracted from the presentation video and a reference video frame is determined, the reference video frame being a video frame adjacent to the original video frame in the presentation video frame.
In the embodiment of the present invention, when extracting an original video frame as an effective video frame (including a first video frame and a second video frame), an adjacent reference video frame is also extracted. For example, if the original video frame is the 10 th frame of the display video, the reference video frame may be the 9 th frame or the 11 th frame as long as the two are adjacent.
Step A2: determining a first coordinate value of the indicator in the original video frame and a second coordinate value of the indicator in the reference video frame, and determining a shake parameter of the image pickup apparatus according to a variation value between the first coordinate value and the second coordinate value.
In the embodiment of the present invention, because the indicator is preset (for example, preset shape, color, etc.), after the original video frame is acquired, the position of the indicator in the original video frame can be determined, and further, a coordinate value of the indicator in the original video frame, that is, a first coordinate value is determined; likewise, the coordinate value of the pointer in the reference video frame, i.e. the second coordinate value, may also be determined. In the embodiment of the invention, the variation value between the first coordinate value and the second coordinate value is used for representing the shake parameter of the image pickup device.
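Because the indicator has a known, pre-agreed appearance, it can be located in each frame with ordinary template matching; the sketch below returns the coordinate change of the indicator between the two frames. The availability of a template image of the indicator is an assumption.

    import cv2

    def indicator_shift(original_frame, reference_frame, indicator_template):
        """Locate the indicator in both frames and return the coordinate change
        (dx, dy) from the reference video frame to the original video frame."""
        def locate(frame):
            result = cv2.matchTemplate(frame, indicator_template,
                                       cv2.TM_CCOEFF_NORMED)
            _, _, _, max_loc = cv2.minMaxLoc(result)
            return max_loc  # (x, y) of the best match

        x1, y1 = locate(original_frame)   # first coordinate value
        x2, y2 = locate(reference_frame)  # second coordinate value
        return x1 - x2, y1 - y2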
Specifically, the imaging principle of the camera device is shown in fig. 4a: point O' is the optical center of the camera device, n1 is the central normal of its imaging plane, point O1 is the center point of the imaging plane, R denotes the position of the target product, and S denotes the position of the distant indicator. The target product R and the indicator S each have a corresponding imaging point on the imaging plane; for convenience, in this embodiment both imaging points are the point M1. The distance between the target product R and the imaging plane is l1, the distance between the indicator S and the imaging plane is l2, and the distance between the center point O1 and the imaging point M1 is r1. Each position may be expressed in, or converted to, the world coordinate system.
As shown in fig. 4b, when the camera device moves, the normal n1 of the original imaging plane and the positions of the target product R and the indicator S remain unchanged; the normal of the moved camera device is n2, the center point of its imaging plane is O2, the imaging point of the target product R on this imaging plane is M2, and the imaging point of the indicator S is M3. Meanwhile, if the indicator were moved by the same amount as the camera device, to S', the imaging point corresponding to S' would be M1. As can be seen from fig. 4b, when the camera device moves, the imaging position of the target product R changes from M1 to M2, a large change, while the imaging position of the indicator S changes only from M1 to M3, a much smaller change; if the distance l2 between the indicator S and the camera device is sufficiently large, the point M3 can be considered to substantially coincide with the point M1, i.e. the imaging position of the indicator S is essentially unchanged.
In the embodiment of the invention, when the camera device shoots normally (without shaking), its position changes very little between two adjacent frames (the original video frame and the reference video frame) because the time interval is short, and the imaging position of the indicator S does not change because the indicator S is far from the camera device. If the camera device shakes, its position changes more, which changes the imaging position of the indicator S; this change (i.e. the variation value between the first coordinate value and the second coordinate value) characterizes the shake of the camera device between the two frames. For example, if the coordinates of the indicator shift a mm to the left, it can be roughly considered that the camera device shook b mm to the right, where the value of b is determined from a and the intrinsic parameters (such as the focal length) of the camera device.
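Under a simple pinhole-camera assumption, the indicator's apparent shift can be converted into a rough estimate of the camera's lateral shake; the proportionality used below (image shift scaled by the indicator distance over the focal length, with the sign flipped) is an illustrative approximation, not a formula stated in the patent.

    def estimate_shake_mm(indicator_shift_px, pixel_size_mm, focal_length_mm,
                          indicator_distance_mm):
        """Rough lateral shake of the camera between two frames.

        indicator_shift_px    : apparent shift of the indicator in the image (pixels)
        pixel_size_mm         : physical size of one sensor pixel
        focal_length_mm       : camera focal length (intrinsic parameter)
        indicator_distance_mm : distance from the camera to the indicator (l2)
        """
        shift_on_sensor_mm = indicator_shift_px * pixel_size_mm
        # the camera is assumed to have translated opposite to the apparent shift
        return -shift_on_sensor_mm * indicator_distance_mm / focal_length_mm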
Step A3: de-shake processing is performed on the original video frame according to the shake parameter of the camera device, and the video frame generated after de-shake processing is added to the video frame group, where the video frames generated after de-shake processing comprise the current video frame, the first video frame, or the second video frame.
In the embodiment of the invention, after the shake parameter of the camera device is determined, the coordinate values of the pixels in the original video frame can be adjusted based on the shake parameter, thereby removing the shake from the image. That is, an extracted video frame can be de-shaken to obtain a first video frame or a second video frame, and the current video frame corresponding to the first operation instruction can also be de-shaken so that the de-shaken current video frame is added to the video frame group. Once the shake parameter is determined, any mature image stabilization algorithm may be used; this embodiment does not limit the choice. By generating the video frame group from de-shaken video frames, the shake introduced by the camera device while collecting the display video can be effectively reduced, and the video frames in the video frame group are clearer.
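As a minimal example of adjusting the pixel coordinates of the original video frame by the shake parameter, the frame can be shifted back by the estimated image-space offset with an affine warp; treating the shake as a pure translation is a simplifying assumption, and any mature stabilization algorithm could be substituted.

    import cv2
    import numpy as np

    def deshake_frame(frame, dx, dy):
        """Shift the frame back by the estimated shake offset (dx, dy) in pixels,
        cancelling a purely translational shake between adjacent frames."""
        height, width = frame.shape[:2]
        shift = np.float32([[1, 0, -dx],
                            [0, 1, -dy]])
        return cv2.warpAffine(frame, shift, (width, height))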
The method flow of the product display is described in detail below by way of an example.
In the embodiment of the invention, when a seller wants to sell a used vehicle, the seller shoots several display videos along different paths using local equipment or equipment provided by the platform and uploads them to the server of the online shopping mall; the server generates a corresponding video frame group based on the display videos for buyers to view online. Referring to fig. 5, the method includes:
step 501: the seller shoots a plurality of display videos of the used vehicle and uploads all the display videos to the server.
In the embodiment of the invention, the number of display videos is not limited; it is only required that each display video shares at least one common key frame with at least one other display video. For ease of illustration, the embodiment of the present invention takes two display videos as an example, namely a first display video and a second display video, between which a common key frame exists, such as the key frame captured at point A in fig. 3a.
Step 502: and the server extracts a plurality of effective video frames in the first display video and a plurality of effective video frames in the second display video by taking the key frames as a reference.
Step 503: and all the effective video frames form a video frame group, and the pose parameter of each video frame in the video frame group is determined.
Step 504: based on the video frame group, a display of the used vehicle is generated, which can be shown in a web page or APP.
Step 505: the buyer inputs an operation instruction for operating the video frame group.
The buyer can view the video frame group in the web page or APP and input a corresponding operation instruction (i.e. the second operation instruction) by sliding left and right or sliding up and down, so as to view different angles of the used vehicle.
Step 506: and determining the pose parameter of the current positioning according to the operation instruction, and determining a corresponding target video frame.
After the buyer inputs the operation instruction, the video frame group moves or rotates, and the pose after the movement or rotation stops can be determined, so that the corresponding video frame, namely the target video frame, is determined. The target video frame may be a first video frame or a second video frame, depending on the operation instruction.
Step 507: and displaying the target video frame, receiving another operation instruction input by the buyer, and performing corresponding processing on the target video frame based on the other operation instruction.
In the embodiment of the invention, after the target video frame is displayed, the buyer can amplify the target video frame by inputting another operation instruction (i.e. the third operation instruction) so as to check the detail characteristics of the used vehicle, such as whether scratches exist or not.
According to the method for displaying a product provided by the embodiment of the invention, after the current video frame is determined when the user pauses playback, video frames in the display video are extracted and the video frame group of the target product is generated; the user selects the position of interest by operating the video frame group, and the video frame image at that position is displayed so that the user can view the product details. With this method, no 3D rendering is needed, the cost of obtaining the display video is low, a video frame group can be generated conveniently and quickly for each product, and the individual characteristics of each product can be displayed; showing the product to the user by displaying video frames makes it convenient for the user to inspect product details, and the display effect is more precise and accurate. Through multiple display videos, the video frame group can be displayed in multiple directions, providing the user with more display angles and further improving the display effect; and the relative pose between video frames in different display videos can be determined with the key frame as the reference, so the corresponding video frames can be extracted accurately when the video frame group is displayed. By using the preset playing paths, the three-dimensional model of the target product can be simulated more realistically, making it convenient for the user to quickly locate the position of interest.
The above describes in detail the process flow of the method for displaying a product, which can also be implemented by a corresponding device, the structure and function of which are described in detail below.
Based on the same inventive concept, an embodiment of the present invention provides an apparatus for displaying a product, as shown in fig. 6, including:
a generating module 61, configured to, when a display video of a target product is played, pause playing of the display video if a first operation instruction for controlling the display video to pause playing is received, and generate a video frame group, where the video frame group includes: a current video frame corresponding to the pause playing action, at least one first video frame played before the pause playing action, and at least one second video frame played after the pause playing action;
a first operation module 62, configured to receive a second operation instruction, where the second operation instruction includes: a first gesture for controlling continuous playing of video frames in the group of video frames, the first gesture further being for indicating a playing order of the video frames in the group of video frames;
and the display module 63 is configured to respond to the second operation instruction, and continuously play the video frames in the video frame group according to the play order indicated by the first gesture.
On the basis of the above embodiment, referring to fig. 7, the apparatus further includes a second operation module 64;
after the generating module 61 generates the video frame group, the second operating module 64 is configured to: receiving a third operation instruction, wherein the third operation instruction comprises: a second gesture for controlling one of the group of video frames to zoom in or out; and responding to the third operation instruction, and enlarging or reducing the one video frame according to the second gesture.
On the basis of the above embodiment, referring to fig. 8, the apparatus further includes: an acquisition module 65;
before the generation module 61 plays the display video of the target product, the obtaining module 65 is configured to:
when the target product moves along a first shooting path, a camera device at a fixed position collects the display video of the target product; or
capturing, by a camera device moving along a second shooting path, the display video of the target product, which remains at a fixed position.
On the basis of the above embodiment, the video frame of the display video includes a pixel point of an indicator, and in a world coordinate system, a distance between the indicator and a camera device that collects the display video is not less than a preset distance value, and the preset distance value is greater than a distance between the camera device and the target product;
the process of generating the video frame group by the generating module 61 includes:
extracting an original video frame from the display video and determining a reference video frame, wherein the reference video frame is a video frame adjacent to the original video frame in the display video;
determining a first coordinate value of the indicator in the original video frame and a second coordinate value of the indicator in the reference video frame, and determining a shake parameter of the camera according to a variation value between the first coordinate value and the second coordinate value;
and performing de-shake processing on the original video frame according to the shake parameter of the camera device, and adding the video frame generated after the de-shake processing to the video frame group, where the video frames generated after the de-shake processing comprise the current video frame, the first video frame, or the second video frame.
According to the apparatus for displaying a product provided by the embodiment of the invention, after the current video frame is determined when the user pauses playback, video frames in the display video are extracted and the video frame group of the target product is generated; the user selects the position of interest by operating the video frame group, and the video frame image at that position is displayed so that the user can view the product details. With this apparatus, no 3D rendering is needed, the cost of obtaining the display video is low, a video frame group can be generated conveniently and quickly for each product, and the individual characteristics of each product can be displayed; showing the product to the user by displaying video frames makes it convenient for the user to inspect product details, and the display effect is more precise and accurate. Through multiple display videos, the video frame group can be displayed in multiple directions, providing the user with more display angles and further improving the display effect; and the relative pose between video frames in different display videos can be determined with the key frame as the reference, so the corresponding video frames can be extracted accurately when the video frame group is displayed. By using the preset playing paths, the three-dimensional model of the target product can be simulated more realistically, making it convenient for the user to quickly locate the position of interest.
Embodiments of the present invention also provide a storage medium, where the storage medium stores computer-executable instructions, which include a program for executing the method for displaying a product described above, and the computer-executable instructions may execute the method in any of the above method embodiments.
The storage medium may be any available medium or data storage device that can be accessed by a computer, including but not limited to magnetic memory (e.g., floppy disks, hard disks, magnetic tape, magneto-optical disks (MOs), etc.), optical memory (e.g., CDs, DVDs, BDs, HVDs, etc.), and semiconductor memory (e.g., ROMs, EPROMs, EEPROMs, nonvolatile memory (NAND FLASH), Solid State Disks (SSDs)), etc.
Fig. 9 shows a block diagram of an electronic device according to another embodiment of the present invention. The electronic device 1100 may be a host server with computing capabilities, a personal computer PC, or a portable computer or terminal that is portable, or the like. The specific embodiment of the present invention does not limit the specific implementation of the electronic device.
The electronic device 1100 includes at least one processor 1110, a communications interface 1120, a memory 1130, and a bus 1140. The processor 1110, the communication interface 1120, and the memory 1130 communicate with each other via the bus 1140.
The communication interface 1120 is used for communicating with network elements including, for example, virtual machine management centers, shared storage, etc.
Processor 1110 is configured to execute programs. Processor 1110 may be a central processing unit CPU, or an Application Specific Integrated Circuit (ASIC), or one or more Integrated circuits configured to implement embodiments of the present invention.
The memory 1130 is used to store executable instructions. The memory 1130 may comprise high-speed RAM memory and may also include non-volatile memory, such as at least one disk memory. The memory 1130 may also be a memory array. The memory 1130 may also be partitioned, and the blocks may be combined into virtual volumes according to certain rules. The instructions stored in the memory 1130 are executable by the processor 1110 to enable the processor 1110 to perform the method of displaying a product in any of the method embodiments described above.
The above description covers only specific embodiments of the present invention, but the scope of the present invention is not limited thereto; any changes or substitutions that a person skilled in the art could readily conceive of within the technical scope disclosed herein shall be covered by the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (9)

1. A method of displaying a product, comprising:
when a display video of a target product is played, if a first operation instruction for controlling the display video to pause playing is received, pausing the playing of the display video and generating a video frame group, wherein the video frame group comprises: a current video frame corresponding to the playing pause action, at least one first video frame played before the playing pause action, and at least one second video frame to be played after the playing pause action;
receiving a second operation instruction, wherein the second operation instruction comprises: a first gesture for controlling continuous playing of video frames in the group of video frames, the first gesture further being for indicating a playing order of the video frames in the group of video frames;
responding to the second operation instruction, and continuously playing the video frames in the video frame group according to the playing sequence indicated by the first gesture;
the video frames of the display video comprise pixel points of an indicator; in a world coordinate system, the distance between the indicator and a camera device that collects the display video is not less than a preset distance value, and the preset distance value is greater than the distance between the camera device and the target product;
the generating of the video frame group comprises:
extracting an original video frame from the display video and determining a reference video frame, wherein the reference video frame is a video frame adjacent to the original video frame in the display video;
determining a first coordinate value of the indicator in the original video frame and a second coordinate value of the indicator in the reference video frame, and determining a shake parameter of the camera device according to a variation value between the first coordinate value and the second coordinate value;
and performing de-shake processing on the original video frame according to the shake parameter of the camera device, and adding the video frame generated by the de-shake processing to the video frame group, wherein the video frame generated by the de-shake processing comprises the current video frame, the first video frame, or the second video frame.
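A minimal sketch of the de-shake step recited in claim 1, under the assumptions that frames are OpenCV/NumPy images, that camera shake is a pure translation, and that the indicator position is found by a hypothetical detect_indicator() helper (the actual detection method is not specified here):

    from typing import Tuple

    import cv2
    import numpy as np


    def detect_indicator(frame: np.ndarray) -> Tuple[float, float]:
        """Hypothetical helper: return the (x, y) pixel coordinates of the indicator.

        In practice this could be a colour threshold or template match on the
        marker placed far behind the target product; a fixed point is returned
        here only to keep the sketch self-contained.
        """
        h, w = frame.shape[:2]
        return (w * 0.9, h * 0.1)


    def deshake(original: np.ndarray, reference: np.ndarray) -> np.ndarray:
        """Compensate the original frame by the indicator shift (the shake parameter)."""
        x1, y1 = detect_indicator(original)    # first coordinate value
        x2, y2 = detect_indicator(reference)   # second coordinate value
        dx, dy = x2 - x1, y2 - y1              # variation value used as the shake parameter

        h, w = original.shape[:2]
        shift = np.float32([[1, 0, dx], [0, 1, dy]])  # pure-translation compensation
        return cv2.warpAffine(original, shift, (w, h))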
2. The method of claim 1, wherein after generating the set of video frames, the method further comprises:
receiving a third operation instruction, wherein the third operation instruction comprises: a second gesture for controlling one of the group of video frames to zoom in or out;
and responding to the third operation instruction, and enlarging or reducing the one video frame according to the second gesture.
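A minimal sketch of the zoom behaviour of claim 2, assuming the second gesture is a pinch that yields a scale factor and that frames are NumPy images (the helper name and clamp range are illustrative):

    import cv2
    import numpy as np


    def zoom_frame(frame: np.ndarray, scale: float) -> np.ndarray:
        """Enlarge (scale > 1) or reduce (scale < 1) one video frame of the group."""
        scale = max(0.25, min(scale, 4.0))   # clamp so a single gesture stays in a sensible range
        h, w = frame.shape[:2]
        return cv2.resize(frame, (int(w * scale), int(h * scale)),
                          interpolation=cv2.INTER_LINEAR)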
3. The method of claim 1, wherein prior to playing the display video of the target product, the method further comprises:
collecting, by a camera device at a fixed position, the display video of the target product while the target product moves along a first shooting path; or
collecting, by a camera device moving along a second shooting path, the display video of the target product while the target product is held at a fixed position.
4. The method of claim 1, further comprising, after the generating of the video frame group: determining a three-dimensional position parameter corresponding to each video frame in the video frame group, wherein the three-dimensional position parameter is a three-dimensional coordinate parameter or a three-dimensional polar coordinate parameter;
generating a plurality of preset playing paths according to the three-dimensional position parameters of all the video frames, wherein each preset playing path corresponds to the three-dimensional position parameters of the plurality of video frames, and each preset playing path is a two-dimensional path in a plane;
the responding to the second operation instruction comprises:
determining a matched preset playing path according to the operation direction of the first gesture in the second operation instruction, and determining a video frame in a video frame group corresponding to the matched preset playing path; and then continuously playing the video frames in the video frame group corresponding to the matched preset playing path according to the playing sequence indicated by the first gesture.
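A minimal sketch of matching the operation direction of the first gesture to a preset playing path, as recited in claim 4, assuming each path stores an in-plane direction vector and the indices of the video frames lying on it (the dictionary layout is an assumption, not the patented data format):

    from typing import Dict, List, Tuple

    import numpy as np


    def match_preset_path(gesture: Tuple[float, float],
                          preset_paths: List[Dict]) -> Tuple[Dict, List[int]]:
        """Pick the preset playing path whose planar direction best matches the gesture.

        Each entry of preset_paths is assumed to look like
            {"direction": (dx, dy), "frame_indices": [...]}
        i.e. a two-dimensional path in a plane plus the frames that lie on it.
        """
        g = np.asarray(gesture, dtype=float)
        g /= (np.linalg.norm(g) + 1e-9)

        def alignment(path: Dict) -> float:
            d = np.asarray(path["direction"], dtype=float)
            d /= (np.linalg.norm(d) + 1e-9)
            return abs(float(np.dot(g, d)))        # compare directions regardless of sign

        best = max(preset_paths, key=alignment)

        # The sign of the projection decides whether to play the path forward or backward.
        forward = float(np.dot(g, np.asarray(best["direction"], dtype=float))) >= 0
        order = best["frame_indices"] if forward else list(reversed(best["frame_indices"]))
        return best, order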
5. An apparatus for displaying a product, comprising:
a generating module, configured to, when a display video of a target product is played, pause playing of the display video if a first operation instruction for controlling the display video to pause playing is received, and generate a video frame group, wherein the video frame group comprises: a current video frame corresponding to the playing pause action, at least one first video frame played before the playing pause action, and at least one second video frame to be played after the playing pause action;
the first operation module is used for receiving a second operation instruction, and the second operation instruction comprises: a first gesture for controlling continuous playing of video frames in the group of video frames, the first gesture further being for indicating a playing order of the video frames in the group of video frames;
the display module is used for responding to the second operation instruction and continuously playing the video frames in the video frame group according to the playing sequence indicated by the first gesture;
the video frames of the display video comprise pixel points of an indicator; in a world coordinate system, the distance between the indicator and a camera device that collects the display video is not less than a preset distance value, and the preset distance value is greater than the distance between the camera device and the target product;
the process of generating the video frame group by the generating module comprises:
extracting an original video frame from the display video and determining a reference video frame, wherein the reference video frame is a video frame adjacent to the original video frame in the display video;
determining a first coordinate value of the indicator in the original video frame and a second coordinate value of the indicator in the reference video frame, and determining a shake parameter of the camera device according to a variation value between the first coordinate value and the second coordinate value;
and performing de-shake processing on the original video frame according to the shake parameter of the camera device, and adding the video frame generated by the de-shake processing to the video frame group, wherein the video frame generated by the de-shake processing comprises the current video frame, the first video frame, or the second video frame.
6. The apparatus of claim 5, further comprising a second operation module;
after the generating module generates the video frame group, the second operation module is configured to: receive a third operation instruction, wherein the third operation instruction comprises a second gesture for controlling one video frame of the video frame group to be enlarged or reduced; and, in response to the third operation instruction, enlarge or reduce the one video frame according to the second gesture.
7. The apparatus of claim 6, further comprising: an acquisition module;
before the generating module plays the display video of the target product, the acquisition module is configured to:
collect, with a camera device at a fixed position, the display video of the target product while the target product moves along a first shooting path; or
collect, with a camera device moving along a second shooting path, the display video of the target product while the target product is held at a fixed position.
8. A storage medium having stored thereon computer-executable instructions for performing the method of displaying a product of any of claims 1-4.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of displaying a product of any of claims 1-4.
CN201811518879.3A 2018-12-12 2018-12-12 Method and device for displaying product, storage medium and electronic equipment Active CN109587572B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811518879.3A CN109587572B (en) 2018-12-12 2018-12-12 Method and device for displaying product, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811518879.3A CN109587572B (en) 2018-12-12 2018-12-12 Method and device for displaying product, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN109587572A CN109587572A (en) 2019-04-05
CN109587572B true CN109587572B (en) 2021-03-23

Family

ID=65928229

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811518879.3A Active CN109587572B (en) 2018-12-12 2018-12-12 Method and device for displaying product, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN109587572B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110290385B (en) * 2019-06-11 2021-12-14 观博云标(北京)文化科技有限公司 High-spatial-temporal-resolution skynet video processing method and device
CN111541907B (en) * 2020-04-23 2023-09-22 腾讯科技(深圳)有限公司 Article display method, apparatus, device and storage medium
CN114141181A (en) * 2021-12-03 2022-03-04 圣风多媒体科技(上海)有限公司 Product display method and system based on 5G Internet of things

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103941998A (en) * 2014-03-25 2014-07-23 惠州Tcl移动通信有限公司 Method and system for controlling music playing of mobile terminal through screen tapping or gesture recognizing
CN104735544A (en) * 2015-03-31 2015-06-24 上海摩软通讯技术有限公司 Video guidance method for mobile terminal
CN105357585A (en) * 2015-08-29 2016-02-24 华为技术有限公司 Method and device for playing video content at any position and time
CN106485779A (en) * 2016-03-22 2017-03-08 智合新天(北京)传媒广告股份有限公司 A kind of 3D virtual interacting display platform and the method for showing 3D animation

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2544208A (en) * 2014-06-18 2017-05-10 Google Inc Methods, systems and media for controlling playback of video using a touchscreen

Also Published As

Publication number Publication date
CN109587572A (en) 2019-04-05

Similar Documents

Publication Publication Date Title
CN109887003B (en) Method and equipment for carrying out three-dimensional tracking initialization
CN109587572B (en) Method and device for displaying product, storage medium and electronic equipment
WO2018059034A1 (en) Method and device for playing 360-degree video
CN107404615B (en) Image recording method and electronic equipment
CN111737518A (en) Image display method and device based on three-dimensional scene model and electronic equipment
CN112543343B (en) Live broadcast picture processing method and device based on live broadcast with wheat
EP3093822B1 (en) Displaying a target object imaged in a moving picture
WO2017173933A1 (en) Object image display method, device, and system
EP2946274B1 (en) Methods and systems for creating swivel views from a handheld device
US11373329B2 (en) Method of generating 3-dimensional model data
CN113223130A (en) Path roaming method, terminal equipment and computer storage medium
CN112270702A (en) Volume measurement method and device, computer readable medium and electronic equipment
CN111429518A (en) Labeling method, labeling device, computing equipment and storage medium
JP7467780B2 (en) Image processing method, apparatus, device and medium
CN114531553B (en) Method, device, electronic equipment and storage medium for generating special effect video
CN112995491B (en) Video generation method and device, electronic equipment and computer storage medium
CN114581611A (en) Virtual scene construction method and device
CN113225480A (en) Image acquisition method, image acquisition device, electronic equipment and medium
JP2013097773A (en) Information processing apparatus, information processing method, and program
CN114095780A (en) Panoramic video editing method, device, storage medium and equipment
CN111292234B (en) Panoramic image generation method and device
CN117058343A (en) VR (virtual reality) viewing method and system based on NERF (network-based radio frequency identification), electronic equipment and storage medium
CN109931923B (en) Navigation guidance diagram generation method and device
CN115589532A (en) Anti-shake processing method and device, electronic equipment and readable storage medium
CN114900742A (en) Scene rotation transition method and system based on video plug flow

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant