CN108961314B - Moving image generation method, moving image generation device, electronic device, and computer-readable storage medium


Info

Publication number
CN108961314B
CN108961314B
Authority
CN
China
Prior art keywords
distance coefficient
target object
moving image
coefficient
distance
Prior art date
Legal status
Active
Application number
CN201810699053.5A
Other languages
Chinese (zh)
Other versions
CN108961314A (en)
Inventor
李旭刚
冯宇飞
柳杨光
Current Assignee
Beijing Microlive Vision Technology Co Ltd
Original Assignee
Beijing Microlive Vision Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Microlive Vision Technology Co Ltd
Priority to CN201810699053.5A
Publication of CN108961314A
Priority to PCT/CN2019/073077 (published as WO2020001016A1)
Application granted
Publication of CN108961314B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/20 - Analysis of motion
    • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10016 - Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present disclosure provide a moving image generation method and apparatus, an electronic device, and a computer-readable storage medium. The moving image generation method comprises the following steps: detecting a target object using an image sensor; identifying an outer frame of the target object; calculating a distance coefficient between the target object and the image sensor using one or more side lengths of the outer frame; and generating a moving image of the target object using the distance coefficient. This technical solution eliminates the jumping that occurs in the prior art when the motion trajectory of the target object is determined from the image area, making the moving image smoother.

Description

Moving image generation method, moving image generation device, electronic device, and computer-readable storage medium
Technical Field
The present disclosure relates to the field of image processing, and in particular, to a method and an apparatus for generating a moving image, an electronic device, and a computer-readable storage medium.
Background
With the development of computer technology, the range of applications for smart terminals has expanded greatly: they can be used, for example, to listen to music, play games, chat online, and take photographs. Smart-terminal cameras now exceed ten million pixels, offering high definition and a photographic effect comparable to that of a professional camera.
Current mobile terminals can recognize and capture a user's motion and form a moving image from it.
Disclosure of Invention
However, the inventors found that current methods for recording an object's motion trajectory generally track some feature of the object. For example, to compute an object's speed as it moves from far to near or from near to far, a contour range of the object is first identified, typically represented by a polygon or a circle, and the object's distance is then represented by the area of that shape: the larger the area, the closer the object; the smaller the area, the farther away. Recording motion this way causes the object's motion trajectory or speed to jump, so the image looks unsmooth.
Therefore, a method for generating a moving image in which the object's motion trajectory and speed do not jump would greatly improve both the quality of the captured image and the user experience.
In view of this, the embodiments of the present disclosure provide a moving image generation method that smooths the motion trajectory or speed of an object in an image, so that the image looks smoother.
In a first aspect, an embodiment of the present disclosure provides a moving image generation method, including: detecting a target object using an image sensor; identifying an outer frame of the target object; calculating a distance coefficient between the target object and the image sensor by using one or more side lengths of the outer frame; generating a moving image of the target object using the distance coefficient.
Optionally, the step of calculating the distance coefficient includes: calculating a current distance coefficient between the target object and the image sensor using one or more side lengths of the outer frame, and calculating the distance coefficient from the current distance coefficient and a historical distance coefficient.
Optionally, the historical distance coefficient is a distance coefficient between a target object in a previous image frame and the image sensor; or the historical distance coefficient is a distance coefficient between the target object and the image sensor at the last moment; or the historical distance coefficient is an average value of a plurality of distance coefficients between the target object and the image sensor at a plurality of moments or in a plurality of frames of images before the current moment.
Optionally, the calculating a distance coefficient by using the current distance coefficient and the historical distance coefficient includes: multiplying the current distance coefficient by a first weight coefficient to obtain a first weight distance coefficient; multiplying the historical distance coefficient by a second weight coefficient to obtain a second weight distance coefficient; and adding the first weight distance coefficient and the second weight distance coefficient to obtain a distance coefficient.
Optionally, the generating a moving image of the target object by using the distance coefficient includes: generating a moving image of the target object using a plurality of distance coefficients.
Optionally, the target object is a palm, and the outer frame is a minimum rectangle covering the palm.
Optionally, the length of the long side of the rectangle is L, the length of the wide side is W, and the distance coefficient is:

f(x) = A·x^a

where x = L + W, A is a real number greater than 0, and 0 < a < 1.
Optionally, calculating the distance coefficient by using the current distance coefficient and the historical distance coefficient includes:
f′(x) = α·f(x_n) + β·f(x_{n-1})

where α > 0, β > 0, α + β = 1, and n ≥ 2; f′(x) denotes the distance coefficient, f(x_n) the current distance coefficient, and f(x_{n-1}) the distance coefficient at the previous moment.
Optionally, the detecting a target object by using an image sensor includes: acquiring color information of an image and position information of the color information using an image sensor; comparing the color information with preset palm color information; identifying first color information, wherein the error between the first color information and the preset palm color information is smaller than a first threshold value; and forming the outline of the palm by using the position information of the first color information.
In a second aspect, an embodiment of the present disclosure provides a moving image generation apparatus, including: a detection module for detecting a target object using an image sensor; the identification module is used for identifying the outer frame of the target object; the distance coefficient calculation module is used for calculating a distance coefficient between the target object and the image sensor by utilizing one or more side lengths of the outer frame; and the image generation module is used for generating a moving image of the target object by using the distance coefficient.
Optionally, the distance coefficient calculating module includes: the first distance coefficient calculation module is used for calculating a current distance coefficient between a target object and the image sensor by utilizing one or more side lengths of the outer frame; and the second distance coefficient calculation module is used for calculating the distance coefficient by using the current distance coefficient and the historical distance coefficient.
Optionally, the historical distance coefficient is a distance coefficient between a target object in a previous image frame and the image sensor; or the historical distance coefficient is a distance coefficient between the target object and the image sensor at the last moment; or the historical distance coefficient is an average value of a plurality of distance coefficients between the target object and the image sensor at a plurality of moments or in a plurality of frames of images before the current moment.
Optionally, the second distance coefficient calculating module includes: the first weight distance coefficient calculation module is used for multiplying the current distance coefficient by the first weight coefficient to obtain a first weight distance coefficient; the second weight distance coefficient calculation module is used for multiplying the historical distance coefficient by a second weight coefficient to obtain a second weight distance coefficient; and the third distance coefficient calculation module is used for adding the first weight distance coefficient and the second weight distance coefficient to obtain a distance coefficient.
Optionally, the image generation module is configured to generate a moving image of the target object using a plurality of distance coefficients.
Optionally, the target object is a palm, and the outer frame is a minimum rectangle covering the palm.
Optionally, the length of the long side of the rectangle is L, the length of the wide side is W, and the distance coefficient is:

f(x) = A·x^a

where x = L + W, A is a real number greater than 0, and 0 < a < 1.
Optionally, calculating the distance coefficient by using the current distance coefficient and the historical distance coefficient includes:
f′(x) = α·f(x_n) + β·f(x_{n-1})

where α > 0, β > 0, α + β = 1, and n ≥ 2; f′(x) denotes the distance coefficient, f(x_n) the current distance coefficient, and f(x_{n-1}) the distance coefficient at the previous moment.
Optionally, the detection module includes: the information acquisition module is used for acquiring color information of the image and position information of the color information by using the image sensor; the contrast module is used for comparing the color information with preset palm color information; the identification module is used for identifying first color information, and the error between the first color information and the preset palm color information is smaller than a first threshold value; and the outline forming module is used for forming the outline of the palm by utilizing the position information of the first color information.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the moving image generation method of any of the preceding first aspects.
In a fourth aspect, the present disclosure provides a non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium stores computer instructions for causing a computer to execute the moving image generation method in any one of the foregoing first aspects.
The embodiments of the present disclosure provide a moving image generation method and apparatus, an electronic device, and a computer-readable storage medium. The moving image generation method comprises the following steps: detecting a target object using an image sensor; identifying an outer frame of the target object; calculating a distance coefficient between the target object and the image sensor using one or more side lengths of the outer frame; and generating a moving image of the target object using the distance coefficient. This technical solution eliminates the jumping that occurs in the prior art when the motion trajectory of the target object is determined from the image area, making the moving image smoother.
The foregoing is a summary of the present disclosure. To make the technical means of the present disclosure clearer, and to allow the above and other objects, features, and advantages to be more readily understood, preferred embodiments are described in detail below in conjunction with the accompanying drawings.
Drawings
In order to illustrate the embodiments of the present disclosure or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present disclosure; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a first embodiment of a motion image generation method provided in an embodiment of the present disclosure;
fig. 2 is a flowchart of a second embodiment of a motion image generation method provided in the embodiment of the present disclosure;
fig. 3 is a flowchart of a third embodiment of a motion image generation method provided in the embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a first embodiment, a second embodiment and a third embodiment of a moving image generation apparatus provided in the embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an electronic device provided according to an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of a computer-readable storage medium provided in accordance with an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a moving image generation terminal provided according to an embodiment of the present disclosure.
Detailed Description
The embodiments of the present disclosure are described below through specific examples, and other advantages and effects of the present disclosure will be readily apparent to those skilled in the art from this specification. The described embodiments are clearly only some, not all, of the embodiments of the disclosure. The disclosure may also be implemented or applied in other specific embodiments, and the details in this specification may be modified or changed in various ways without departing from the spirit of the disclosure. It should be noted that, absent conflict, the features in the following embodiments and examples may be combined with one another. All other embodiments obtained by those of ordinary skill in the art from the disclosed embodiments without creative effort fall within the protection scope of the present disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should be noted that the drawings provided in the following embodiments only illustrate the basic idea of the present disclosure. They show only the components related to the disclosure, not the number, shape, and size of the components in an actual implementation; in practice the type, number, and proportions of the components may vary arbitrarily, and their layout may be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
Fig. 1 is a flowchart of the first embodiment of a moving image generation method provided in the embodiments of the present disclosure. The moving image generation method provided in this embodiment may be executed by a moving image generation apparatus, which may be implemented as software or as a combination of software and hardware, and may be integrated in a device in an image processing system, such as an image processing server or an image processing terminal device. The core idea of the embodiment is to use the side length of the target object's outer frame to represent the distance between the target object and the image sensor, and to use this distance to generate a moving image of the target object.
As shown in fig. 1, the method comprises the steps of:
s101, detecting a target object by using an image sensor;
in this embodiment, the target object may be any object that can be recognized by the image sensor, such as a tree, an animal, a person, or a part of the whole object, such as a human face, a human hand, or the like.
Detecting the target object involves locating its feature points and identifying it. Feature points are points in the image with distinctive characteristics that effectively reflect the essential features of the image and can identify the target object. If the target object is a human face, the key points of the face must be acquired; if the target image is a house, the key points of the house are acquired instead. Taking a human face as an example: the facial contour mainly comprises five parts (eyebrows, eyes, nose, mouth, and cheeks) and sometimes also includes the pupils and nostrils. A complete description of the facial contour typically requires about 60 key points. If only the basic structure is described, without detailing each part or describing the cheeks, the number of key points can be reduced accordingly; if the pupils, the nostrils, or finer details of the facial features must be described, the number can be increased. Extracting face key points on an image means finding the position coordinates of each facial contour key point in the face image, that is, key point positioning. This process relies on the image features corresponding to each key point: once features that clearly identify a key point have been obtained, the image is searched and compared against those features to locate the key point's position precisely. Since feature points occupy only a very small area in an image (usually only a few to a few dozen pixels), the regions occupied by their corresponding image features are likewise limited and local. Two kinds of feature extraction are currently used: (1) extracting one-dimensional range image features perpendicular to the contour; and (2) extracting two-dimensional range image features in a square neighborhood of the feature point. There are many ways to implement these, such as ASM and AAM methods, statistical energy function methods, regression analysis methods, deep learning methods, classifier methods, and batch extraction methods; they differ in the number of key points used, accuracy, and speed, and suit different application scenarios. The same principles can be used to identify other target objects.
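As an illustration of keypoint positioning, the following minimal sketch uses the open-source dlib library's pretrained 68-point face landmark model, which matches the roughly 60-point contour description above. The library choice, the model file name, and the image file name are assumptions for illustration; the patent does not prescribe a specific keypoint extraction method.

    import cv2
    import dlib

    # Illustrative only: dlib's 68-point landmark model is one common way to
    # locate facial contour keypoints; the model file name is an assumption.
    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    img = cv2.imread("face.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    for rect in detector(gray):
        shape = predictor(gray, rect)
        # Each landmark is an (x, y) coordinate located on the image.
        points = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]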
In this embodiment, the position of the target object is found in the image acquired by the image sensor, and the target object is segmented from the background. The position can be located using color, roughly matching the target object by its color; feature extraction and recognition are then performed on the located and segmented target-object image.
S102, identifying an outer frame of the target object;
after the target object is detected, a polygon, which is a polygon in this embodiment, may be defined outside the outer contour of the target object, and may be actually any shape such as a circle, but it is preferable that the shape is a shape in which the area is easily calculated or the side length or the circumference is easily calculated. One implementation of calculating the longest position and the widest position of the target object is to extract boundary feature points of the target object, calculate the difference between the X coordinates of two boundary feature points with the farthest X coordinate distance as the length of the rectangle width, and calculate the difference between the Y coordinates of two boundary feature points with the farthest Y coordinate distance as the length of the rectangle length. If the target object is a fist, the frame may be set to the smallest circle that covers the fist, so that the side length of the frame may be the radius or circumference of the circle.
S103, calculating a distance coefficient between the target object and the image sensor by using one or more side lengths of the outer frame;
and S104, generating a moving image of the target object by using the distance coefficient.
In this embodiment, a distance coefficient between the target object and the image sensor, indicating how far the target object is from the sensor, is calculated using one or more side lengths of the outer frame described above. Specifically, when the outer frame is a rectangle, the distance coefficient can be calculated from the rectangle's wide side or long side, or from the sum of the two. Since the side length changes linearly, the distance coefficient is also calculated linearly and no jump occurs. The side length, or the sum of side lengths, can directly represent the distance between the target object and the image sensor, or it can participate in the distance calculation as a coefficient; that is, the distance is some function of the frame's side length. The specific function can be defined by the user or chosen from functions preset in the system, and each functional relationship produces a different moving image effect.
With the technical solution of this embodiment, the outer frame of the target object is identified, and the distance coefficient between the target object and the image sensor is calculated on the basis of the frame's side length. Because the side length varies linearly with distance, it reflects the object's motion trajectory better and avoids the jumps that occur in the prior art when the frame's area is used to represent distance, which make the image jump.
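As a quick numeric check of this argument (the side lengths and jitter value below are made-up examples), the same relative jitter in the detected frame is amplified when distance is represented by area but not when it is represented by side length:

    # Example values only. A 10% jitter in each detected side length stays a
    # 10% jitter in the side-length sum, but becomes a 21% jitter in the area.
    l, w = 100.0, 60.0
    jitter = 1.10
    sum_change = ((l * jitter) + (w * jitter)) / (l + w)   # 1.10
    area_change = (l * jitter) * (w * jitter) / (l * w)    # 1.21
    print(sum_change, area_change)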
Fig. 2 is a flowchart of a second embodiment of a moving image generation method provided in the embodiment of the present disclosure, and as shown in fig. 2, the method may include the following steps:
s201, detecting a target object by using an image sensor;
s202, identifying an outer frame of the target object;
s203, calculating a current distance coefficient between the target object and the image sensor by using one or more side lengths of the outer frame, and calculating a distance coefficient by using the current distance coefficient and a historical distance coefficient;
s204, a moving image of the target object is generated using the distance coefficient.
In this embodiment, to make the motion in the image smoother, a historical distance coefficient is introduced and combined with the current distance coefficient to calculate the distance coefficient actually used. For this purpose, the system sets up a buffer that holds at least one historical distance coefficient: as soon as a distance coefficient is calculated, it is written to the corresponding buffer for use in later calculations.
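A minimal sketch of such a buffer, assuming a fixed capacity of five coefficients:

    from collections import deque

    # Hypothetical buffer: keeps the most recent distance coefficients so a
    # historical coefficient is always available for the next calculation.
    history = deque(maxlen=5)

    def push_coefficient(coefficient):
        history.append(coefficient)  # the oldest value is evicted automatically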
In one implementation, the historical distance coefficient is the distance coefficient between the target object and the image sensor in the previous image frame, with the distance calculated once per frame. Alternatively, it is the distance coefficient between the target object and the image sensor at the previous moment, where the previous moment may be the last calculation time or a user-defined time, such as one second ago; this is configured before the method runs, so the system knows which distance coefficients to store. Alternatively, the historical distance coefficient is an average value of several distance coefficients before the current moment. The average can be a plain arithmetic mean or a weighted mean. One weighted-mean implementation works as follows: given 5 historical distance coefficients, arrange them in time order into a historical distance coefficient vector; set a smoothing matrix, which is a vector of time weight coefficients; then convolve the two vectors to obtain the average value of the distance coefficients. In this average, the closer a historical distance coefficient is to the current moment, the higher its weight, so the computed historical distance coefficient is smoother and closer to the true value.
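Since a single smoothed output is produced, the convolution of the two five-element vectors reduces to a dot product. A sketch with example values, where the weights are assumptions that sum to 1 and grow toward the present:

    import numpy as np

    # Five historical distance coefficients, oldest first (example values).
    hist = np.array([0.80, 0.82, 0.85, 0.90, 0.94])
    # Hypothetical time weight coefficients: they sum to 1, and more recent
    # coefficients receive higher weight.
    weights = np.array([0.10, 0.15, 0.20, 0.25, 0.30])
    historical_coefficient = float(np.dot(hist, weights))  # weighted average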
In one implementation, calculating the distance coefficient from the current distance coefficient and the historical distance coefficient proceeds as follows: multiply the current distance coefficient by a first weight coefficient α to obtain a first weighted distance coefficient; multiply the historical distance coefficient by a second weight coefficient β to obtain a second weighted distance coefficient; then add the two weighted distance coefficients to obtain the distance coefficient, where α + β = 1, α > 0, and β > 0. The weight coefficients may be preset or user-defined. Preset coefficient combinations can realize predetermined motion effects. Alternatively, the user can adjust the weight coefficients freely with a slider: when the slider is at the far left, the first weight coefficient is 1 and the second is 0; at the center, both are 0.5; at the far right, the first weight coefficient is 0 and the second is 1. While adjusting, the user can view a moving image generated from a standard image, making it easy to see the effect the chosen weights produce.
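A sketch of the slider mapping described above; the linear mapping from slider position to weights is an assumption consistent with the three anchor points (left, center, right):

    def blended_coefficient(current, historical, slider_pos):
        """Blend the current and historical distance coefficients.

        slider_pos is in [0, 1]: 0 is the far left (alpha=1, beta=0),
        0.5 is the center (alpha=beta=0.5), 1 is the far right (alpha=0, beta=1).
        """
        alpha = 1.0 - slider_pos
        beta = slider_pos
        return alpha * current + beta * historical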
After the distance coefficients are obtained, the distances from the target object to the image sensor at multiple moments, or in multiple image frames, are calculated from the plurality of distance coefficients, and a continuous moving image of the target object is generated.
In this embodiment, calculating the distance coefficient between the target object and the image sensor using one or more side lengths of the outer frame includes: calculating a current distance coefficient between the target object and the image sensor using one or more side lengths of the outer frame, and calculating the distance coefficient from the current distance coefficient and a historical distance coefficient. The weights given to the current and historical distance coefficients when calculating the distance coefficient, or the way the historical distance coefficient is computed, can be adjusted to achieve different moving image effects.
Fig. 3 is a flowchart of a third embodiment of a moving image generation method provided in the embodiment of the present disclosure, and as shown in fig. 3, the method may include the following steps:
s301, detecting a palm using an image sensor;
s302, identifying a minimum rectangle covering the palm;
s303, calculating a distance coefficient between the palm and the image sensor by using one or more side lengths of the minimum rectangle;
and S304, generating a moving image of the target object by using the distance coefficient.
In this embodiment, the application scenario is defined as the user moving the palm back and forth in front of the camera, toward and away from the lens.
When identifying the palm, color features can be used to locate its position and segment it from the background, after which feature extraction and recognition are performed on the located and segmented palm image. Specifically: acquire the color information of the image, and the position information of that color information, using the image sensor; compare the color information with preset palm color information; identify first color information whose error relative to the preset palm color information is smaller than a first threshold; and form the contour of the palm from the position information of the first color information. Preferably, to prevent ambient brightness from interfering with the color information, the RGB color-space image data collected by the image sensor can be mapped into the HSV color space and the HSV information used for comparison; the hue value in HSV is preferred as the color information, since hue is affected least by brightness and filters out brightness interference well. The palm contour gives the rough position of the palm, and feature points are then extracted from it; the feature-point method may be the one described in the first embodiment or any suitable feature-point extraction method in the prior art, which is not limited here.
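A minimal sketch of this color comparison using OpenCV, where the hue range plays the role of the preset palm color information and its bounds the role of the first threshold; the specific numeric range and the file name are assumptions:

    import cv2
    import numpy as np

    frame = cv2.imread("frame.jpg")  # or a frame grabbed from the camera

    # Map the sensor's BGR data into HSV so the hue channel, which is least
    # affected by brightness, can be compared with the preset palm color.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # Hypothetical preset palm color range (hue, saturation, value).
    lower = np.array([0, 40, 60], dtype=np.uint8)
    upper = np.array([25, 255, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)  # pixels within the first threshold

    # The positions of the matching pixels outline the palm region.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)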
After extracting the palm image feature points, the boundary feature points of the palm are obtained; the difference between the X coordinates of the two boundary feature points farthest apart in X is taken as the length of the rectangle's wide side, and the difference between the Y coordinates of the two boundary feature points farthest apart in Y as the length of its long side. The smallest rectangle covering the palm is identified using this method.
Assuming the long side of the rectangle has length L and the wide side length W, the distance coefficient can be calculated with the following function:

f(x) = A·x^a

where x = L + W, A is a real number greater than 0, and 0 < a < 1.
The function is a power function with an exponent less than 1, so its value grows with x, but the rate of growth keeps decreasing. For the image of the moving palm this produces the following effect: the closer the palm is to the lens, the faster it moves; the farther from the lens, the slower. The value of a determines the rate of change with x and can be user-defined, for example with the slider mechanism of the second embodiment, which is not repeated here. It should be noted that the function above is only an example; in practice, other suitable functions can be used instead. In one implementation, multiple functions and their corresponding parameters are preset in the system, and the user selects a function and adjusts its parameters to achieve different moving image effects.
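A sketch of the power-function distance coefficient; the default values of A and a are examples only, since the patent leaves them user-tunable:

    def distance_coefficient(L, W, A=1.0, a=0.5):
        """f(x) = A * x**a with x = L + W, A > 0, and 0 < a < 1.

        With a < 1 the coefficient still grows with x, but ever more slowly,
        so the palm appears to move faster near the lens and slower far away.
        """
        x = L + W
        return A * x ** a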
In one implementation, the distance coefficient is calculated from the current distance coefficient and the historical distance coefficient:

f′(x) = α·f(x_n) + β·f(x_{n-1})

where α > 0, β > 0, α + β = 1, and n ≥ 2; f′(x) denotes the distance coefficient, f(x_n) the current distance coefficient, and f(x_{n-1}) the distance coefficient at the previous moment.
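Putting the two formulas together, a per-frame sketch under the assumptions above (the side lengths and alpha are example values):

    def distance_coefficient(L, W, A=1.0, a=0.5):
        return A * (L + W) ** a          # f(x) = A * x**a, as sketched above

    def smoothed_coefficient(current, previous, alpha=0.6):
        return alpha * current + (1.0 - alpha) * previous   # f'(x), beta = 1 - alpha

    prev = None
    for L, W in [(120, 70), (130, 76), (90, 52)]:   # example per-frame side lengths
        current = distance_coefficient(L, W)
        prev = current if prev is None else smoothed_coefficient(current, prev)
        # prev now drives the palm's rendered position for this frame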
In this embodiment, the target object is a palm, applied to a scene in which the user moves the palm back and forth in front of the lens, so different motion effects can be attached to the user's palm movement. Although the technical solution is described here using the palm, it is understood that the target object may be any other object, and a moving image can be generated with the same technique.
The moving image generation apparatus of one or more embodiments of the present disclosure is described in detail below. Those skilled in the art will appreciate that each of these moving image generation apparatuses can be built from commercially available hardware components configured through the steps taught in this solution.
Fig. 4 is a schematic structural diagram of the first embodiment of a moving image generation apparatus provided in the embodiments of the present disclosure. As shown in fig. 4, the apparatus includes: a detection module 41, an identification module 42, a distance coefficient calculation module 43, and an image generation module 44.
A detection module 41 for detecting a target object using an image sensor;
an identification module 42 for identifying the outer frame of the target object;
a distance coefficient calculation module 43, configured to calculate a distance coefficient between the target object and the image sensor using one or more side lengths of the outer frame;
an image generation module 44, configured to generate a moving image of the target object using the distance coefficient.
The apparatus shown in fig. 4 can perform the method of the embodiment shown in fig. 1, and reference may be made to the related description of the embodiment shown in fig. 1 for a part of this embodiment that is not described in detail. The implementation process and technical effect of the technical solution refer to the description in the embodiment shown in fig. 1, and are not described herein again.
In a second embodiment of the moving image generation apparatus provided in the embodiments of the present disclosure, based on the embodiment shown in fig. 4, the modules perform the following steps:
a detection module 41 for detecting a target object using an image sensor;
an identification module 42 for identifying the outer frame of the target object;
a distance coefficient calculation module 43, configured to calculate a current distance coefficient between the target object and the image sensor using one or more side lengths of the outer frame, and to calculate the distance coefficient from the current distance coefficient and the historical distance coefficient;
an image generation module 44, configured to generate a moving image of the target object using the distance coefficient.
The distance coefficient calculation module 43 includes:
the first distance coefficient calculation module is used for calculating a current distance coefficient between the target object and the image sensor by utilizing one or more side lengths of the outer frame;
and the second distance coefficient calculation module is used for calculating the distance coefficient by using the current distance coefficient and the historical distance coefficient.
The second distance coefficient calculation module includes:
the first weight distance coefficient calculation module is used for multiplying the current distance coefficient by the first weight coefficient to obtain a first weight distance coefficient;
the second weight distance coefficient calculation module is used for multiplying the historical distance coefficient by a second weight coefficient to obtain a second weight distance coefficient;
and the third distance coefficient calculation module is used for adding the first weight distance coefficient and the second weight distance coefficient to obtain a distance coefficient.
The apparatus in embodiment 2 may perform the method in the embodiment shown in fig. 2, and reference may be made to the related description of the embodiment shown in fig. 2 for a part not described in detail in this embodiment. The implementation process and technical effect of the technical solution refer to the description in the embodiment shown in fig. 2, and are not described herein again.
In a third embodiment of the moving image generation apparatus provided in the embodiments of the present disclosure, based on the embodiment shown in fig. 4, the modules perform the following steps:
a detection module 41 for detecting a palm using an image sensor;
an identification module 42 for identifying a smallest rectangle covering the palm;
a distance coefficient calculation module 43, configured to calculate a distance coefficient between the palm and the image sensor using one or more side lengths of the minimum rectangle;
an image generation module 44, configured to generate a moving image of the target object using the distance coefficient.
The detection module 41 includes:
the information acquisition module is used for acquiring color information of the image and position information of the color information by using the image sensor;
the contrast module is used for comparing the color information with preset palm color information;
the identification module is used for identifying first color information, and the error between the first color information and the preset palm color information is smaller than a first threshold value;
and the outline forming module is used for forming the outline of the palm by utilizing the position information of the first color information.
The moving image generation apparatus in this embodiment may execute the method of the embodiment shown in fig. 3, and for a part not described in detail in this embodiment, reference may be made to the related description of the embodiment shown in fig. 3. The implementation process and technical effect of the technical solution refer to the description in the embodiment shown in fig. 3, and are not described herein again.
Fig. 5 is a hardware block diagram illustrating an electronic device according to an embodiment of the present disclosure. As shown in fig. 5, an electronic device 50 according to an embodiment of the present disclosure includes a memory 51 and a processor 52.
The memory 51 is used to store non-transitory computer readable instructions. In particular, memory 51 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc.
The processor 52 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 50 to perform desired functions. In one embodiment of the present disclosure, the processor 52 is configured to execute the computer readable instructions stored in the memory 51, so that the electronic device 50 performs all or part of the steps of the moving image generation method of the embodiments of the present disclosure.
Those skilled in the art should understand that, in order to solve the technical problem of how to obtain a good user experience, the present embodiment may also include well-known structures such as a communication bus, an interface, and the like, and these well-known structures should also be included in the protection scope of the present disclosure.
For the detailed description of the present embodiment, reference may be made to the corresponding descriptions in the foregoing embodiments, which are not repeated herein.
Fig. 6 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure. As shown in fig. 6, a computer-readable storage medium 60 according to an embodiment of the present disclosure has non-transitory computer-readable instructions 61 stored thereon. When executed by a processor, the non-transitory computer-readable instructions 61 perform all or part of the steps of the moving image generation method of the embodiments of the present disclosure described above.
The computer-readable storage medium 60 includes, but is not limited to: optical storage media (e.g., CD-ROMs and DVDs), magneto-optical storage media (e.g., MOs), magnetic storage media (e.g., magnetic tapes or removable disks), media with built-in rewritable nonvolatile memory (e.g., memory cards), and media with built-in ROMs (e.g., ROM boxes).
For the detailed description of the present embodiment, reference may be made to the corresponding descriptions in the foregoing embodiments, which are not repeated herein.
Fig. 7 is a diagram illustrating the hardware structure of a terminal device according to an embodiment of the present disclosure. As shown in fig. 7, the moving image generation terminal 70 includes the moving image generation apparatus of the embodiments described above.
The terminal device may be implemented in various forms, and the terminal device in the present disclosure may include, but is not limited to, mobile terminal devices such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a navigation apparatus, a vehicle-mounted terminal device, a vehicle-mounted display terminal, a vehicle-mounted electronic rear view mirror, and the like, and fixed terminal devices such as a digital TV, a desktop computer, and the like.
The terminal may also include other components as equivalent alternatives. As shown in fig. 7, the moving image generation terminal 70 may include a power supply unit 71, a wireless communication unit 72, an A/V (audio/video) input unit 73, a user input unit 74, a sensing unit 75, an interface unit 76, a controller 77, an output unit 78, a memory 79, and the like. Fig. 7 shows a terminal having various components, but it is to be understood that not all of the illustrated components are required; more or fewer components may alternatively be implemented.
The wireless communication unit 72 allows radio communication between the terminal 70 and a wireless communication system or network. The A/V input unit 73 receives audio or video signals. The user input unit 74 generates key input data from user commands to control various operations of the terminal device. The sensing unit 75 detects the current state of the terminal 70, its position, the presence or absence of the user's touch input, its orientation, and its acceleration or deceleration and direction, and generates commands or signals for controlling the operation of the terminal 70. The interface unit 76 serves as an interface through which at least one external device can connect to the terminal 70. The output unit 78 provides output signals in a visual, audio, and/or tactile manner. The memory 79 may store software programs for the processing and control operations performed by the controller 77, or temporarily store data that has been or will be output; it may include at least one type of storage medium, and the terminal 70 may also cooperate with a network storage device that performs the storage function of the memory 79 over a network connection. The controller 77 generally controls the overall operation of the terminal device; it may include a multimedia module for reproducing or playing back multimedia data, and may perform pattern recognition to recognize handwriting or drawing input on the touch screen as characters or images. The power supply unit 71 receives external or internal power and, under the control of the controller 77, supplies the power required to operate the various elements and components.
Various embodiments of the moving image generation method presented in this disclosure may be implemented using computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments may be realized using at least one of an application specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases the embodiments may be implemented in the controller 77. For a software implementation, the embodiments may be realized as separate software modules, each allowing at least one function or operation to be performed. The software code may be implemented as a software application (or program) written in any suitable programming language, stored in the memory 79, and executed by the controller 77.
For the detailed description of the present embodiment, reference may be made to the corresponding descriptions in the foregoing embodiments, which are not repeated herein.
The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present disclosure are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present disclosure. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the disclosure is not intended to be limited to the specific details so described.
The block diagrams of devices, apparatuses, and systems referred to in this disclosure are given only as illustrative examples and are not intended to require or imply that connections, arrangements, or configurations must be made in the manner shown. These devices, apparatuses, and systems may be connected, arranged, or configured in any manner, as will be appreciated by those skilled in the art. Words such as "including", "comprising", and "having" are open-ended, mean "including, but not limited to", and may be used interchangeably. The words "or" and "and" are used herein to mean, and are used interchangeably with, the word "and/or", unless the context clearly dictates otherwise. The word "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to".
Also, as used herein, "or" in a list of items beginning with "at least one of" indicates a disjunctive list, so that, for example, a list of "at least one of A, B, or C" means A or B or C, or AB or AC or BC, or ABC (i.e., A and B and C). Furthermore, the word "exemplary" does not mean that the described example is preferred or better than other examples.
It is also noted that in the systems and methods of the present disclosure, components or steps may be decomposed and/or re-combined. These decompositions and/or recombinations are to be considered equivalents of the present disclosure.
Various changes, substitutions and alterations to the techniques described herein may be made without departing from the techniques of the teachings as defined by the appended claims. Moreover, the scope of the claims of the present disclosure is not limited to the particular aspects of the process, machine, manufacture, composition of matter, means, methods and acts described above. Processes, machines, manufacture, compositions of matter, means, methods, or acts, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding aspects described herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or acts.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (12)

1. A moving image generation method characterized by comprising:
detecting a target object using an image sensor;
identifying an outer frame of the target object;
calculating a distance coefficient between the target object and the image sensor by using one or more side lengths of the outer frame; wherein the distance coefficient is represented by a function related to the side length, the value of the function varying non-linearly with the value of the side length;
calculating the distances from the target object to the image sensor at a plurality of moments, or in a plurality of image frames, using a plurality of distance coefficients, to generate a continuous moving image of the target object; wherein distance coefficients represented by different functions generate different moving image effects.
2. The moving image generation method as set forth in claim 1, wherein the calculating of the distance coefficient between the target object and the image sensor using one or more side lengths of the outer frame includes:
a current distance coefficient between the target object and the image sensor is calculated using one or more side lengths of the outer frame, and the distance coefficient is calculated using the current distance coefficient and the historical distance coefficient.
3. The moving image generation method according to claim 2, characterized in that:
the historical distance coefficient is a distance coefficient between the target object and the image sensor in the previous image frame; or,
the historical distance coefficient is a distance coefficient between the target object and the image sensor at the previous moment; or,
the historical distance coefficient is an average value of a plurality of distance coefficients between the target object and the image sensor at a plurality of moments, or in a plurality of frames of images, before the current moment.
4. The moving image generation method according to claim 2 or 3,
the calculating the distance coefficient using the current distance coefficient and the historical distance coefficient includes:
multiplying the current distance coefficient by a first weight coefficient to obtain a first weight distance coefficient;
multiplying the historical distance coefficient by a second weight coefficient to obtain a second weight distance coefficient;
and adding the first weight distance coefficient and the second weight distance coefficient to obtain a distance coefficient.
5. The moving image generation method according to claim 4, wherein the generating of the moving image of the target object using the distance coefficient includes:
generating a moving image of the target object using a plurality of distance coefficients.
6. The moving image generation method according to claim 1 or 2, characterized in that:
the target object is a palm, and the outer frame is a minimum rectangle covering the palm.
7. The moving image generation method according to claim 6, characterized in that:
the length of the long side of the rectangle is L, the length of the wide side of the rectangle is W, and the distance coefficient is:
f(x) = A·x^a
where x = L + W, A is a real number greater than 0, and 0 < a < 1.
8. The moving image generation method according to claim 6, wherein calculating the distance coefficient using the current distance coefficient and the historical distance coefficient includes:
f′(x) = α·f(xₙ) + β·f(xₙ₋₁)
where α > 0, β > 0, α + β = 1, and n ≥ 2; f′(x) represents the distance coefficient, f(xₙ) represents the current distance coefficient, and f(xₙ₋₁) represents the distance coefficient at the previous moment.
9. The moving image generation method according to claim 6, wherein the detecting a target object using an image sensor includes:
acquiring color information of an image and position information of the color information using an image sensor;
comparing the color information with preset palm color information;
identifying first color information, wherein the error between the first color information and the preset palm color information is smaller than a first threshold value;
and forming the outline of the palm by using the position information of the first color information.
10. A moving image generation device, comprising:
a detection module for detecting a target object using an image sensor;
the identification module is used for identifying the outer frame of the target object;
the distance coefficient calculation module is used for calculating a distance coefficient between the target object and the image sensor by utilizing one or more side lengths of the outer frame; wherein the distance coefficient is represented by a function related to the side length, the value of the function varying non-linearly with the value of the side length;
the image generating module is used for calculating the distances from the target object to the image sensor at a plurality of moments or in a plurality of image frames by using a plurality of distance coefficients, and generating a continuous moving image of the target object; wherein distance coefficients represented by different functions produce different moving image effects.
11. An electronic device, characterized in that the electronic device comprises:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the moving image generation method of any one of claims 1 to 9.
12. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the moving image generation method according to any one of claims 1 to 9.
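
For readers approaching the claims from an implementation angle, the following Python sketch shows how the computations recited in claims 1, 7 and 8 fit together. It is illustrative only and not part of the patent text: the palm detector is elided (each frame is reduced to the side lengths of the outer frame), and the constants A = 1.0, a = 0.5, α = 0.7 and β = 0.3, along with all function names, are assumptions made for this sketch; the claims only require A > 0, 0 < a < 1 and α + β = 1.

```python
# Illustrative sketch only -- not part of the patent text. The palm
# detector is elided: each frame is reduced to the (L, W) side lengths
# of the palm's outer frame. A, a, alpha, beta are example values; the
# claims only constrain them to A > 0, 0 < a < 1, alpha + beta = 1.
from typing import List, Tuple


def distance_coefficient(long_side: float, wide_side: float,
                         A: float = 1.0, a: float = 0.5) -> float:
    """Claim 7: f(x) = A * x**a with x = L + W and 0 < a < 1."""
    x = long_side + wide_side
    return A * x ** a


def smoothed_coefficient(current: float, previous: float,
                         alpha: float = 0.7, beta: float = 0.3) -> float:
    """Claim 8: f'(x) = alpha*f(x_n) + beta*f(x_{n-1}), alpha + beta = 1."""
    return alpha * current + beta * previous


def track_distance_coefficients(boxes: List[Tuple[float, float]]) -> List[float]:
    """Claim 1: one smoothed distance coefficient per frame.

    `boxes` holds the (L, W) side lengths of the palm's outer frame in
    consecutive image frames; the returned series is what drives the
    continuous moving image of the target object.
    """
    coefficients: List[float] = []
    previous = None
    for long_side, wide_side in boxes:
        current = distance_coefficient(long_side, wide_side)
        if previous is not None:  # n >= 2: blend with the historical value
            current = smoothed_coefficient(current, previous)
        coefficients.append(current)
        previous = current
    return coefficients


if __name__ == "__main__":
    # A palm moving toward the sensor: the outer frame grows from frame
    # to frame, so the distance coefficient rises smoothly despite the
    # small detection jitter in the last frame.
    frames = [(80.0, 60.0), (90.0, 66.0), (120.0, 90.0), (118.0, 92.0)]
    print(track_distance_coefficients(frames))
```

Because 0 < a < 1, the coefficient grows sub-linearly with the box size, and because α + β = 1, the smoothed coefficient stays on the same scale as the raw one; raising β trades responsiveness for stability. Claim 3 also permits the historical value to be an average over several earlier frames rather than the single previous coefficient used here.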
CN201810699053.5A 2018-06-29 2018-06-29 Moving image generation method, moving image generation device, electronic device, and computer-readable storage medium Active CN108961314B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810699053.5A CN108961314B (en) 2018-06-29 2018-06-29 Moving image generation method, moving image generation device, electronic device, and computer-readable storage medium
PCT/CN2019/073077 WO2020001016A1 (en) 2018-06-29 2019-01-25 Moving image generation method and apparatus, and electronic device and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810699053.5A CN108961314B (en) 2018-06-29 2018-06-29 Moving image generation method, moving image generation device, electronic device, and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN108961314A (en) 2018-12-07
CN108961314B (en) 2021-09-17

Family

ID=64484574

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810699053.5A Active CN108961314B (en) 2018-06-29 2018-06-29 Moving image generation method, moving image generation device, electronic device, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN108961314B (en)
WO (1) WO2020001016A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108961314B (en) * 2018-06-29 2021-09-17 北京微播视界科技有限公司 Moving image generation method, moving image generation device, electronic device, and computer-readable storage medium
CN112001937B (en) * 2020-09-07 2023-05-23 中国人民解放军国防科技大学 Group chase and escape method and device based on visual field perception
CN113838118A (en) * 2021-09-08 2021-12-24 杭州逗酷软件科技有限公司 Distance measuring method and device and electronic equipment


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8866821B2 (en) * 2009-01-30 2014-10-21 Microsoft Corporation Depth map movement tracking via optical flow and velocity prediction
CN101929836B (en) * 2009-06-25 2012-11-28 深圳泰山在线科技有限公司 Object dimensional positioning method and camera
CN105427361B (en) * 2015-11-13 2018-06-08 中国电子科技集团公司第二十八研究所 The display methods of moving-target track in a kind of three-dimensional scenic
CN108961314B (en) * 2018-06-29 2021-09-17 北京微播视界科技有限公司 Moving image generation method, moving image generation device, electronic device, and computer-readable storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101872414A (en) * 2010-02-10 2010-10-27 杭州海康威视软件有限公司 People flow rate statistical method and system capable of removing false targets
CN102999152A (en) * 2011-09-09 2013-03-27 康佳集团股份有限公司 Method and system for gesture recognition
CN103345301A (en) * 2013-06-18 2013-10-09 华为技术有限公司 Depth information acquisition method and device
CN105488815A (en) * 2015-11-26 2016-04-13 北京航空航天大学 Real-time object tracking method capable of supporting target size change
CN105427371A (en) * 2015-12-22 2016-03-23 中国电子科技集团公司第二十八研究所 Method for keeping graphic object equal-pixel area display in three-dimensional perspective projection scene
CN106446926A (en) * 2016-07-12 2017-02-22 重庆大学 Transformer station worker helmet wear detection method based on video analysis

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Tracking Method for Moving Object Using Depth Picture";Kwon S K等;《Journal of Korea Multimedia Society》;20160430;第19卷(第4期);全文 *
"基于Kinect的人体目标检测与跟踪";杨林;《中国优秀硕士学位论文全文数据库·信息科技辑》;20130915;第2013年卷(第9期);全文 *

Also Published As

Publication number Publication date
WO2020001016A1 (en) 2020-01-02
CN108961314A (en) 2018-12-07

Similar Documents

Publication Publication Date Title
CN109684920B (en) Object key point positioning method, image processing method, device and storage medium
CN108121986B (en) Object detection method and device, computer device and computer readable storage medium
US20170192500A1 (en) Method and electronic device for controlling terminal according to eye action
CN108830892B (en) Face image processing method and device, electronic equipment and computer readable storage medium
CN108986016B (en) Image beautifying method and device and electronic equipment
CN110956691B (en) Three-dimensional face reconstruction method, device, equipment and storage medium
WO2017206400A1 (en) Image processing method, apparatus, and electronic device
CN109583509B (en) Data generation method and device and electronic equipment
CN108961314B (en) Moving image generation method, moving image generation device, electronic device, and computer-readable storage medium
CN110072046B (en) Image synthesis method and device
CN108875931B (en) Neural network training and image processing method, device and system
CN112419170A (en) Method for training occlusion detection model and method for beautifying face image
WO2019228316A1 (en) Action recognition method and apparatus
CN109064387A (en) Image special effect generation method, device and electronic equipment
CN107944381B (en) Face tracking method, face tracking device, terminal and storage medium
CN113487709A (en) Special effect display method and device, computer equipment and storage medium
CN111553838A (en) Model parameter updating method, device, equipment and storage medium
CN109218615A (en) Image taking householder method, device, terminal and storage medium
WO2020244160A1 (en) Terminal device control method and apparatus, computer device, and readable storage medium
WO2020037924A1 (en) Animation generation method and apparatus
CN112149599B (en) Expression tracking method and device, storage medium and electronic equipment
CN110765926B (en) Picture book identification method, device, electronic equipment and storage medium
KR102160955B1 (en) Method and apparatus of generating 3d data based on deep learning
US11361467B2 (en) Pose selection and animation of characters using video data and training techniques
CN113222841A (en) Image processing method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant