CN114399425A - Image processing method, video processing method, device, equipment and medium

Info

Publication number
CN114399425A
Authority
CN
China
Prior art keywords
light beam
image
color data
color
data
Legal status (assumed; not a legal conclusion)
Pending
Application number
CN202111592448.3A
Other languages
Chinese (zh)
Inventor
沈怀烨
吕烨华
Current Assignee (the listing may be inaccurate)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority application: CN202111592448.3A
Publication: CN114399425A

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 — Geometric image transformations in the plane of the image
    • G06T 3/04 — Context-preserving transformations, e.g. by using an importance map

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the disclosure disclose an image processing method, a video processing method, an apparatus, a device and a medium. The image processing method includes: acquiring an initial image, and extracting at least one group of color data from the initial image; respectively forming corresponding light beams based on each of the at least one group of color data; superposing the light beams to form a light beam effect graph corresponding to the initial image; and superposing the light beam effect graph onto the initial image to form a light beam effect image. The light beams in the light beam effect graph are light beams emitted by a virtual light source simulating holographic projection. By adding simulated light beams to the image, the effect of adding a simulated holographic projection to the image is achieved.

Description

Image processing method, video processing method, device, equipment and medium
Technical Field
The embodiments of the present disclosure relate to the field of image processing technologies, and in particular, to an image processing method, a video processing method, an apparatus, a device, and a medium.
Background
With the growing demand for image and video presentation, adding special effects to images or to image frames in videos has become a common processing method. However, the special effect processing modes currently available are still few in kind and are insufficient to meet user requirements.
Disclosure of Invention
The embodiment of the disclosure provides an image processing method, a video processing method, a device, equipment and a medium, so as to add a projection beam effect on the basis of an image.
In a first aspect, an embodiment of the present disclosure provides an image processing method, including:
acquiring an initial image, and extracting at least one group of color data in the initial image;
forming corresponding light beams based on any one of the at least one set of color data respectively;
superposing the light beams to form a light beam effect graph corresponding to the initial image;
and superposing the light beam effect image into the initial image to form a light beam effect image.
In a second aspect, an embodiment of the present disclosure further provides a video processing method, including:
acquiring video data to be projected, and adjusting the tone of the video data into a projection tone;
for an image frame in the tone-adjusted video data, extracting at least one group of color data from the image frame, respectively forming corresponding light beams based on any one group of color data in the at least one group of color data, superposing the light beams to form a light beam effect graph corresponding to the image frame, and superposing the light beam effect graph onto the image frame to obtain special effect video data.
In a third aspect, an embodiment of the present disclosure further provides an image processing apparatus, including:
the color data extraction module is used for acquiring an initial image and extracting at least one group of color data in the initial image;
the light beam generation module is used for respectively forming corresponding light beams based on any one group of color data in the at least one group of color data;
the light beam effect generation module is used for superposing the light beams to form a light beam effect graph corresponding to the initial image;
and the light beam effect image generation module is used for superposing the light beam effect image into the initial image to form a light beam effect image.
In a fourth aspect, an embodiment of the present disclosure further provides a video processing apparatus, including:
the video data acquisition module is used for acquiring video data to be projected;
the tone adjusting module is used for adjusting the tone of the video data into a projection tone;
the video processing module is used for extracting at least one group of color data from the image frames in the tone-adjusted video data, respectively forming corresponding light beams based on any one group of color data in the at least one group of color data, superposing the light beams to form a light beam effect graph corresponding to each image frame, and superposing the light beam effect graph onto the image frame to obtain special effect video data.
In a fifth aspect, an embodiment of the present disclosure further provides an electronic device, where the electronic device includes:
one or more processors;
a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the image processing method or the video processing method according to any embodiment of the present disclosure.
In a sixth aspect, the present disclosure also provides a storage medium containing computer-executable instructions, which when executed by a computer processor, are used for executing the image processing method or the video processing method according to any one of the embodiments of the present disclosure.
According to the technical scheme of the embodiments of the disclosure, at least one group of color data is extracted from an obtained initial image, a corresponding light beam is formed based on each group of color data, and the light beams generated from the groups of color data are superposed to form a light beam effect graph corresponding to the initial image. The light beams in the light beam effect graph are light beams emitted by a virtual light source simulating holographic projection, and the light beam effect graph is superposed onto the initial image to form a light beam effect image. By adding simulated light beams to the image, the effect of adding a projection simulation special effect to the image is realized.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the disclosure;
FIG. 2 is a schematic view of an initial beam provided by embodiments of the present disclosure;
FIG. 3 is a schematic diagram of a converging light beam provided by an embodiment of the present disclosure;
FIG. 4 is a schematic illustration of a target beam provided by embodiments of the present disclosure;
FIG. 5 is a schematic diagram of the beam effect of an initial image provided by an embodiment of the present disclosure;
fig. 6 is a schematic flow chart of a video processing method according to an embodiment of the disclosure;
FIG. 7 is a schematic view of a transparent template provided by embodiments of the present disclosure;
fig. 8 is a schematic structural diagram of an image processing apparatus provided in an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an" and "the" in this disclosure are intended to be illustrative rather than limiting; those skilled in the art will understand that they mean "one or more" unless the context clearly indicates otherwise.
Fig. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present disclosure. This embodiment is suitable for situations where a light beam effect simulating holographic projection is to be set in an image or a video. The method may be executed by an image processing apparatus provided by an embodiment of the present disclosure; the apparatus may be implemented in software and/or hardware and, optionally, by an electronic device such as a mobile terminal or a PC terminal. As shown in fig. 1, the method of the present embodiment includes:
s110, obtaining an initial image, and extracting at least one group of color data in the initial image.
And S120, respectively forming corresponding light beams based on any one group of color data in the at least one group of color data.
And S130, overlapping the light beams to form a light beam effect graph corresponding to the initial image.
And S140, overlapping the light beam effect graph to the initial image to form a light beam effect image.
In this embodiment, the initial image is the image to be processed; a simulated holographic projection light beam effect is added to it to obtain an effect image simulating holographic projection. The initial image may be an image captured in real time or an imported image. In some embodiments, the initial image is some or all of the image frames in a video; for example, the simulated light beam effect of holographic projection is added to each image frame in the video to obtain an effect video simulating holographic projection. In some embodiments, the initial image is an image frame acquired in real time by a camera device; for example, during a live broadcast, image frames are acquired in real time and the light beam effect is added to some or all of them. Illustratively, a special effect processing control may be configured in the display interface of the live broadcast device, and when it is detected that this control has been selected, the light beam effect is set for each acquired image.
The light beam effect in this embodiment is generated based on the color data in the initial image. Accordingly, different initial images yield different light beam effects, and the effect changes as the image changes; this improves the randomness and flexibility of the light beam effect and avoids the poor simulation that results from a fixed effect.
At least one set of color data is extracted from the initial image, and any one of the sets can be used to generate a simulated light beam. In some embodiments, the color data of any pixel row in the initial image may be extracted as a set of color data; the pixel row may be at any position in the initial image and may be randomly determined, and multiple selected pixel rows do not overlap.
In some embodiments, the color data in any pixel column in the initial image may be extracted as a set of color data, the pixel column may be any pixel column in the initial image, and may be randomly determined, and the plurality of pixel columns do not overlap.
In some embodiments, the color data of the pixel points on any extraction line in the initial image may be extracted as a set of color data, where the extraction line may be a straight line at any angle in the initial image, and the extraction line may be a horizontal extraction line, a vertical extraction line, or an oblique line at any inclination angle.
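A minimal sketch of these extraction options, assuming the initial image is a NumPy RGBA array of shape (H, W, 4) with values in 0-1; the function name and the `mode` parameter are illustrative, not from the disclosure:

```python
import numpy as np

def extract_color_data(image, mode="row", index=None, rng=None):
    """Extract one set of color data from an RGBA image of shape (H, W, 4).

    mode: "row" samples a pixel row, "column" a pixel column, and
    "line" samples along a straight line with a random inclination.
    """
    rng = rng if rng is not None else np.random.default_rng()
    h, w = image.shape[:2]
    if mode == "row":
        index = rng.integers(h) if index is None else index
        return image[index, :, :]            # W samples along one row
    if mode == "column":
        index = rng.integers(w) if index is None else index
        return image[:, index, :]            # H samples along one column
    # "line": sample W points along a line between two random row endpoints
    x = np.arange(w)
    y0, y1 = rng.integers(h, size=2)         # random endpoints give a random angle
    y = np.linspace(y0, y1, w).astype(int)
    return image[y, x, :]
```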
In some embodiments, there may be multiple sets of color data for generating multiple light beams; the number of sets is not limited and may be set according to user requirements. For example, the number of sets may be preset, or a parameter input by the user, including the number of sets, may be collected via an input control of the interactive interface. The input control may be an input box, a data selection slider, a data increase/decrease control, or any other control providing a data input function, without limitation.
The color data may be extracted based on one or more of randomly determined pixel rows, pixel columns, and extraction lines in the initial image, wherein the extraction manner of any one set of color data may be randomly determined. Optionally, the sets of color data in the same initial image may be determined based on one extraction method, or may be determined based on different extraction methods. In some embodiments, the extraction manner of the user input may be collected according to a virtual control of the interactive interface, where the extraction manner includes pixel row extraction, pixel column extraction, random line extraction, hybrid extraction, and the like.
In some embodiments, extracting at least one set of color data from the initial image may be performed by detecting a color extraction operation input by the user on the interactive interface and deriving the corresponding color data from that operation. The color extraction operation may be a sliding touch operation: the color data on the corresponding sliding track is extracted, and either the data on one continuous track is taken as a set of color data, or the collected data is divided into groups of a preset size to obtain multiple sets. The color extraction operation may also be a row/column selection touch operation, i.e., a click operation: the position point corresponding to the operation is determined, and the color data of the pixel row or pixel column containing that point is taken as a set of color data.
If the initial image is an image frame in a video or in a live video stream, the same settings may be applied, based on the above configuration, to all images in the video/video stream or to each image after the timestamp of the initial image.
In this embodiment, interaction controls are arranged on the interactive interface to realize interaction between the user and the electronic device, so that the corresponding light beam effect is generated according to the user's settings; this improves the interactivity of the light beam effect generation process.
In some embodiments, after acquiring the initial image, the method further includes: segmenting the initial image to obtain a beam reference object, and forming a beam reference image based on the segmented beam reference object and a preset background. The beam reference object is the object in the initial image for which holographic projection is simulated; for example, it may include, but is not limited to, a person or an animal. The beam reference object may be designated by the user or recognized automatically. For example, with a person as the beam reference object, the person region in each initial image is automatically recognized and segmented to obtain the beam reference object. The beam reference object may also be a specific person, identified in the initial image according to preset person information; segmentation is performed when that specific person appears in the initial image. A preset background is added to the segmented beam reference object to form the beam reference image. The preset background may be a single-color background, such as white or black: replacing the background of the initial image with a single color reduces the color interference of the original background.
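A sketch of forming the beam reference image, assuming a boolean object mask is already available (how the mask is produced, e.g. by portrait segmentation, is outside this snippet):

```python
import numpy as np

def make_beam_reference(image, mask, background=0):
    """Keep the segmented beam reference object and replace everything
    else with a monochrome (here black) background. `mask` is a boolean
    (H, W) array that is True on the object."""
    ref = np.full_like(image, background)
    ref[mask] = image[mask]
    return ref
```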
Correspondingly, the extracting at least one set of color data in the initial image includes: at least one set of color data in the beam reference image is extracted. The manner of extracting each set of color data from the light beam reference image is the same as the manner of extracting the color data from the initial image in the above embodiment, and is not described herein again.
In some embodiments, the at least one set of color data is the color data on extraction lines in the initial image or the beam reference image, where an extraction line is a pixel row or pixel column in that image. Extracting the color data along extraction lines increases the degree of color variation within the same set of color data, avoids a single-color result, and improves the color effect of the light beam.
Each set of color data is processed to obtain a corresponding light beam. Optionally, forming corresponding light beams based on any one of the at least one set of color data includes: for any group of color data in at least one group of color data, respectively forming a color line based on each color value in the color data, and forming an initial light beam corresponding to the color data; and gathering the initial light beam based on a virtual light source, and splitting and extracting the gathered light beam to obtain a target light beam corresponding to the color data.
The direction of the color lines is determined by the simulated projection direction of the holographic projection. Illustratively, if the simulated projection direction is bottom-up or top-down, the color lines are vertical; if it is left-to-right or right-to-left, the color lines are horizontal. In this embodiment, each set of color data is extracted along an extraction line such as a pixel row/column, the color data are ordered by the positions of the pixel points on the extraction line, and a plurality of color lines with the same positional relationship are formed accordingly; the color lines corresponding to all the color data form an initial light beam. Taking extraction from a pixel row as an example: the color data of the first pixel point on the row (e.g., data a) forms the first color line, i.e., a first pixel column in which every pixel has the same color data (data a); the color data of the second pixel point (e.g., data b) forms the second color line, i.e., a second pixel column; and so on until the initial light beam is obtained. Illustratively, referring to fig. 2, fig. 2 is a schematic diagram of an initial light beam provided by an embodiment of the present disclosure.
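As a sketch of this step (same array conventions as above, bottom-up projection assumed), each color value of one set of color data is expanded into a vertical color line:

```python
import numpy as np

def initial_beam(color_data, height):
    """Expand one set of color data of shape (W, 4) into an initial beam:
    column i of the result repeats color_data[i] over `height` rows,
    producing vertical stripes like those in fig. 2."""
    return np.repeat(color_data[np.newaxis, :, :], height, axis=0)
```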
In order to simulate the beam effect of holographic projection, the initial beam is gathered to simulate the effect of the virtual light source on the emitted beam. Optionally, the gathering processing is performed based on a setting position of the virtual light source, where the setting position of the virtual light source may be preset, or may be determined according to a simulated projection direction of the holographic projection. Taking the simulated projection direction as a bottom-up example, the setting position of the virtual light source may be the bottom center position of the initial light beam, and taking the simulated projection direction as a top-down example, the setting position of the virtual light source may be the top center position of the initial light beam.
Optionally, gathering the initial light beam based on a virtual light source includes: determining a light beam range of the gathered light beam based on the virtual light source, and extracting the color data within that range from the initial light beam to obtain the gathered light beam. Referring to fig. 2, fig. 2 contains the initial light beam and a background; the gathering process in this embodiment also removes the background, so as to avoid background interference.
The light beam range of the gathered light beam is a triangular region whose vertex is the setting position of the virtual light source and whose base width is the width of the initial light beam. For example, referring to fig. 3, fig. 3 is a schematic view of the gathered light beam provided by an embodiment of the present disclosure, where the vertex of the beam in fig. 3 is the setting position of the virtual light source. Each pixel point outside the light beam range of the gathered beam is set to a background color, and the color data of each pixel point inside the range is determined from the corresponding pixel point in the initial light beam. Specifically, for any pixel point within the range, the color data corresponding to its pixel coordinate is extracted from the initial light beam, and the pixel point is set based on the extracted color data, thereby forming the gathered light beam.
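The gathering step can be sketched as the following (deliberately unoptimized) per-pixel mapping; in practice this would run in a shader. Row 0 of the array is taken as the light-source side, and the apex abscissa follows the P(0.5, 0) example used later in the text:

```python
import numpy as np

def gather_beam(init_beam, apex_u=0.5, background=0):
    """Converge the initial beam toward a virtual light source: the beam
    widens linearly from the source row until it spans the full width at
    the last row. Pixels outside this triangular beam range are set to
    the background color; pixels inside sample the initial-beam column
    that maps onto them."""
    h, w = init_beam.shape[:2]
    out = np.full_like(init_beam, background)
    for y in range(1, h):
        v = y / (h - 1)                              # 0 at the source, 1 at the base
        for x in range(w):
            u = x / (w - 1)
            if abs(u - apex_u) <= v / 2:             # inside the triangle
                u_src = apex_u + (u - apex_u) / v    # stretch back to base width
                xi = min(w - 1, max(0, int(round(u_src * (w - 1)))))
                out[y, x] = init_beam[y, xi]
    return out
```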
The gathered light beam is a solid triangular beam, which does not match a realistic projection effect. To improve the realism of the holographic projection simulation, the gathered light beam is subjected to beam splitting: it is divided into a plurality of sub-beams, and the distinctive sub-beams are extracted from them to form the target light beam.
Optionally, the splitting and extracting of the gathered light beam to obtain the target light beam corresponding to the color data includes: determining the color difference value of adjacent sub-beams in the gathered light beam based on a preset step length in the width direction of the light beam; and extracting target split beams from the gathered light beam based on the color difference value to form the target light beam. Taking the gathered beam in fig. 3 as an example, the image of the gathered beam may be placed in UV coordinates, with the simulated projection direction bottom-up, i.e., projection along the y direction; correspondingly, the width direction of the beam is the x direction. The gathered light beam is divided into a plurality of sub-beams based on a preset step length, where the division is performed along the widest side of the gathered beam to improve the precision of the division. The preset step length may be set in advance or adjusted according to user requirements; for example, a step length adjustment control may be provided on the interactive interface to collect a step length parameter input by the user. For example, the preset step length may be 0.05 in UV coordinates.
Color data is acquired for each divided sub-beam; the color data of a sub-beam may be the color data of its central color line or the average color of all the color lines in the sub-beam, which is not limited here. The color difference between adjacent sub-beams can then be compared with a judgment threshold to determine how distinctive each sub-beam is, and hence whether it is retained. By discarding sub-beams with small color variation and poor distinctiveness, a target light beam with good distinctiveness is obtained.
In some embodiments, extracting target split beams from the gathered light beam based on the color difference value may be performed as follows: for any sub-beam in the gathered beam, a first color difference between the current sub-beam and a first adjacent sub-beam and a second color difference between the current sub-beam and a second adjacent sub-beam are determined, and the sum of the first and second color differences is compared with a preset threshold. If the sum is greater than or equal to the preset threshold, the color change is large and the current sub-beam is retained; if the sum is less than the threshold, the color change is small and the current sub-beam is discarded. The preset threshold may be set according to user requirements; in some embodiments it may lie in the range 0.5-0.9, for example 0.6.
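A sketch of the splitting and extraction, using the suggested step of 0.05 UV units and threshold of 0.6 (RGB values in 0-1); representing each sub-beam by its averaged central column is one possible choice the text allows:

```python
import numpy as np

def extract_target_beam(gathered, step=0.05, threshold=0.6, background=0):
    """Split the gathered beam into sub-beams of width `step` along its
    widest side and keep only sub-beams whose color differs enough from
    both neighbours; the rest are set to the background color."""
    h, w = gathered.shape[:2]
    sub_w = max(1, int(step * w))
    # one representative color per sub-beam: its central column, averaged over rows
    centers = [gathered[:, min(i + sub_w // 2, w - 1)].mean(axis=0)
               for i in range(0, w, sub_w)]
    out = np.full_like(gathered, background)
    for k, q0 in enumerate(centers):
        q1 = centers[min(k + 1, len(centers) - 1)]   # first adjacent sub-beam
        q2 = centers[max(k - 1, 0)]                  # second adjacent sub-beam
        # sum of the color differences to both neighbours (RGB channels only)
        d = np.abs(q1 - q0)[:3].sum() + np.abs(q2 - q0)[:3].sum()
        if d >= threshold:                           # distinctive enough: keep it
            x0 = k * sub_w
            out[:, x0:x0 + sub_w] = gathered[:, x0:x0 + sub_w]
    return out
```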
Illustratively, referring to fig. 4, fig. 4 is a schematic diagram of a target beam provided by an embodiment of the present disclosure; the target beam is generated from one extracted set of color data. A target beam such as that of fig. 4 can be formed for each set of color data, and the target beams are superposed to obtain the light beam effect graph of the initial image, where the superposition aligns the virtual light source positions of all the beams at the same point. The light beam effect graph is then added to the initial image; for example, the background of the light beam effect graph is removed and the extracted beams are superimposed on the initial image. Optionally, the extracted beams are superimposed according to a holographic projection simulation object in the initial image. Specifically, the position of the virtual light source of the beams is determined from the projection simulation position in the initial image, and the extracted beams are superimposed based on that position. For example, the virtual light source position may be determined from both the simulated projection direction and the projection simulation position: with a bottom-up projection direction, the virtual light source lies at the bottom of the holographic projection simulation object; with a top-down direction, at the top of that object.
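Superposing the light beam effect graph onto the initial image amounts to ordinary alpha compositing; a sketch, assuming both inputs are float RGBA arrays already aligned so the virtual light source sits at the projection simulation position:

```python
import numpy as np

def overlay(base, effect):
    """Alpha-composite the light beam effect graph over the initial image
    ("over" operator on the RGB channels, weighted by the effect's alpha)."""
    a = effect[..., 3:4]
    out = base.copy()
    out[..., :3] = effect[..., :3] * a + base[..., :3] * (1.0 - a)
    return out
```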
In some embodiments, superimposing the beam effect map into the initial image, forming the beam effect image may further be: and adding the beam effect image into the beam reference image to form a simulated image for performing holographic projection simulation on the beam reference object.
On the basis of the above embodiments, after the light beams are formed from the sets of color data and before they are superimposed on the initial image, the method further includes setting the transparency of the light beams, for example for every generated beam or only for some of them, where the transparency increases sequentially along the projection direction of the beam. Since light dissipates as it travels through space, its color gradually becomes lighter along the projection direction. To improve the simulation effect of the holographic projection, the transparency of the obtained light beam is set so as to imitate this change of the beam in space. The transparency may range from 0 to 100%; the larger the value, the more transparent the beam.
In this embodiment, the transparency of the light beam is set to increase sequentially along the projection direction: the transparency at the virtual light source position is 0, the transparency at the end of the beam may be 100%, and the transparency may change uniformly along the projection direction. Illustratively, referring to fig. 5, fig. 5 is a schematic diagram of the beam effect of an initial image provided by an embodiment of the present disclosure; it shows a plurality of superposed light beams with transparency applied. Adding the beams of fig. 5 to the original image achieves the effect of simulating the holographic projection technique in the image.
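A sketch of this transparency setting; the linear fade is one realization of the uniform change described above:

```python
import numpy as np

def apply_transparency(beam, source_at_top=True):
    """Make transparency increase uniformly along the projection direction:
    alpha stays full (transparency 0) at the virtual light source row and
    fades linearly to 0 (transparency 100%) at the far end of the beam."""
    h = beam.shape[0]
    fade = np.linspace(1.0, 0.0, h)      # alpha factor: 1 at the source, 0 at the tip
    if not source_at_top:                # light source at the bottom row instead
        fade = fade[::-1]
    out = beam.astype(float)
    out[..., 3] *= fade[:, np.newaxis]
    return out
```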
For a video comprising a plurality of initial images, or for a video stream acquired in real time during a live broadcast, the light beam effect corresponding to each image is determined in turn in the above manner and added to the corresponding image, yielding a simulated video or a simulated video stream with a projection effect.
According to the technical solution provided by this embodiment, at least one group of color data is extracted from the obtained initial image, a corresponding light beam is formed based on each group of color data, and the beams generated from one or more groups are superposed to form the light beam effect graph corresponding to the initial image; the beams in the graph are beams emitted by a virtual light source simulating holographic projection, and the graph is superposed onto the initial image to form the light beam effect image. By adding simulated light beams to the image, a simulated projection effect is achieved in the image.
On the basis of the foregoing embodiments, the present disclosure further provides a video processing method; see fig. 6, which is a schematic flowchart of the video processing method provided by an embodiment of the present disclosure. This embodiment is suitable for simulating holographic projection by adding a holographic projection light beam effect to a video. The method may be executed by a video processing apparatus provided by an embodiment of the present disclosure; the apparatus may be implemented in software and/or hardware and, optionally, by an electronic device such as a mobile terminal or a PC terminal.
S210, video data to be projected are obtained, and the tone of the video data is adjusted to be a projection tone.
S220, for the image frames in the tone-adjusted video data, extracting at least one group of color data from each image frame, respectively forming corresponding light beams based on any one group of color data in the at least one group of color data, superposing the light beams to form a light beam effect graph corresponding to the image frame, and superposing the light beam effect graph onto the image frame to obtain special effect video data.
The tone of the virtual image formed by the holographic projection technology differs from the standard tone of an ordinary image; to improve the realism of the holographic projection simulation, the tone of the video data is adjusted to the projection tone.
In this embodiment, a conversion relationship between a projection tone and a standard tone is preset, and color data of a pixel point in each frame image in video data is subjected to tone conversion based on the conversion relationship to obtain video data satisfying the projection tone, so as to improve the simulation reality of holographic projection.
It should be noted that the projection tone may be fixed, or multiple projection tones may be provided according to different projection scenes. Accordingly, the conversion relationship between projection tone and standard tone changes with the projection tone, so that conversion to any projection tone can be achieved; for example, the conversion relationships for multiple projection tones may be stored in advance for convenient retrieval.
In some embodiments, the projected color tone comprises a blue color tone. Optionally, the adjusting the color tone of the video data to a projection color tone includes: and converting the color data of the image frame in the video data into color data corresponding to the blue tone according to the conversion relation between the channel color data under the current tone and the channel color data in the blue tone.
The image frame in the video data may be an RGB image, and the color data of the pixel point in the image frame may be (R, G, B, a), where R, G, B is data of three channels of red, green and blue, respectively, and a is the transparency of the pixel point. The tone of the image is a standard tone, and the image is converted to an image in a blue tone. For example, the color data conversion relationship between the current hue and the blue hue may be:
T.r = E.r / 2.5
T.g = E.g / 2.5
T.b = (E.r + E.g + E.b) / 3.0
T.a = E.a
where E.r, E.g, E.b and E.a are, respectively, the red, green and blue channel data and the transparency data at the current tone, and T.r, T.g, T.b and T.a are, respectively, the red, green and blue channel data and the transparency data in the blue tone.
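Applied per frame, the conversion above can be sketched as follows (assuming float RGBA frames with channel values in 0-1):

```python
import numpy as np

def to_blue_tone(frame):
    """Convert an RGBA frame to the projection (blue) tone using the
    formulas given above: R and G are attenuated by 2.5, and the blue
    channel is replaced by the mean of R, G and B."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    out = frame.copy()
    out[..., 0] = r / 2.5             # T.r = E.r / 2.5
    out[..., 1] = g / 2.5             # T.g = E.g / 2.5
    out[..., 2] = (r + g + b) / 3.0   # T.b = (E.r + E.g + E.b) / 3.0
    # T.a = E.a: the transparency channel is left unchanged
    return out
```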
Since a simulated projection exhibits a certain jitter, in order to improve the realism of the simulation, on the basis of the above embodiment, before the tone of the video data is adjusted to the projection tone, the method further includes: performing random jitter processing on the video data. Illustratively, a jitter parameter is generated for each frame, and the color data of the frame is updated based on it, i.e., the jitter parameter is added to the original color data. The random jitter parameter is generated from the timestamps of the different frames so as to increase its randomness.
In some embodiments, the random jitter may be applied to the coordinate system of the video data: a jitter parameter is generated for each moment, the coordinate system is updated based on that parameter, and the color data is sampled from the updated coordinate system, thereby imposing the jitter on the video data.
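A sketch of timestamp-seeded jitter applied to the sampling coordinates; the amplitude value is illustrative, since the text only requires the offset to be very small:

```python
import numpy as np

def jitter_frame(frame, timestamp, amplitude=0.002):
    """Shift the frame by a small random offset whose seed is derived from
    the frame timestamp, so each frame shakes slightly and differently.
    Equivalent to sampling at f(uv) = (uv.x + N.x, uv.y + N.y)."""
    rng = np.random.default_rng(int(timestamp * 1000))
    h, w = frame.shape[:2]
    n = rng.uniform(-amplitude, amplitude, size=2)    # jitter in UV units
    dx, dy = int(n[0] * w), int(n[1] * h)             # convert to pixel offsets
    return np.roll(frame, shift=(dy, dx), axis=(0, 1))
```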
A projection light beam effect, generated by the image processing method provided in the above embodiments, is added to each frame of the tone-converted video. For example, each frame of the video data is taken as an initial image: at least one set of color data is extracted from it, corresponding light beams are formed from each set, some or all of the generated beams are superposed to form the light beam effect graph corresponding to the frame, and the graph is superposed onto the frame to form the light beam effect image. Once the beam effect has been added to every frame, the video data simulating projection, i.e., the processed special effect video data, is obtained.
In some embodiments, the special effect video data is displayed in an augmented reality mode: a virtual image with the projection tone is formed in the display space and the simulated projection light beams are shown, thereby simulating the effect of the holographic projection technology on the basis of the augmented reality technology.
In some embodiments, after the video data is acquired, the portrait in each frame may also be extracted, i.e., the background of each frame is removed and replaced with a monochrome background, such as a black, white, or transparent background. The projection effect simulation is then performed on the background-replaced video data, so that while the special effect video data is displayed, the holographic projection of the person in the video can be simulated without interference from the background data.
According to the technical scheme provided by the embodiment, the video data is adjusted to the projection tone of the holographic projection, and the light beam effect simulating the holographic projection is added in each image, so that the simulation of the holographic projection based on the video data is realized.
On the basis of the above embodiments, the present disclosure also provides a preferred example of the video processing method. Video data to be processed is obtained. Since a simulated projection exhibits a certain jitter, random jitter data N is derived from a time parameter T, where N is very small. N is superposed on the UV coordinates, i.e., f(uv) = (uv.x + N.x, uv.y + N.y), and the color data of the image is sampled at these coordinates, so that a certain jitter is present and the realism of the simulation is improved.
The tone of the video data is converted into a blue tone, increasing the weight of the blue channel, via the conversion formulas T.r = E.r/2.5, T.g = E.g/2.5, T.b = (E.r + E.g + E.b)/3.0 and T.a = E.a.
The beam effect is then added to each frame image. For each frame in the video data, the projection object, such as a portrait, is extracted by matting, and the background of the portrait is set to black. Within a shader used for texture rendering, UV coordinates are used to sample the color information of the image, i.e., each pixel in the image has coordinates (x, y), for example uv = (0.5, 0.5). The color data sampled at uv has the form (R, G, B, A), representing the red, green, blue and transparency channels respectively, each with a value range of 0-1.
The pixel colors on the row y = y0 (i.e., one set of color data) are selected to replace all colors in the y direction, forming the vertical stripes of one beam (the initial light beam), see fig. 2. The light source point of the beam is set to P(0.5, 0); the initial light beam is transformed and converged toward P to obtain the gathered light beam, see fig. 3. The beam of fig. 3 is then divided into sub-beams, and pixels are compared along the x direction: sub-beams with a larger pixel value difference are retained, and those with a smaller difference are discarded. Specifically, for the current UV position (p, q), the sampled pixel is Q0; the pixel at UV position (p + 0.05, q) is taken as Q1 and the pixel at (p − 0.05, q) as Q2. The differences L10 = Q1 − Q0 and L21 = Q2 − Q1 are computed, together with D = L10 + L21. If D < 0.6, the color change at this position is small and the sub-beam is discarded; otherwise it is retained as part of the target light beam, see fig. 4.
After being emitted, a light beam gradually weakens with distance; its strength is controlled through a preset transparency. A transparent template M is provided; referring to fig. 7, fig. 7 is a schematic diagram of a transparent template provided by an embodiment of the disclosure. In the transparency values of fig. 7, white regions are more transparent and black regions less transparent; the projection direction in fig. 7 is bottom-up, and the transparency gradually increases along the projection line. The transparency in the template M is superposed on the obtained target beam to complete one group of beam effects. Corresponding beam effects are formed from the color data of a plurality of rows y0 and superposed to form the light beam effect graph, see fig. 5.
The beam effect in the light beam effect graph corresponding to each frame image is added to the corresponding image to form the special effect video data. The special effect video data can then be placed in an AR scene for display, enabling interactions, such as a person-to-person conversation, between the special effect video data and other video data in the AR scene.
Fig. 8 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure. As shown in fig. 8, the apparatus includes:
a color data extraction module 310, configured to obtain an initial image, and extract at least one set of color data in the initial image;
a light beam generation module 320 for respectively forming corresponding light beams based on any one of the at least one set of color data;
a light beam effect generating module 330, configured to superimpose the light beams to form a light beam effect map corresponding to the initial image;
and the beam effect image generation module 340 is configured to superimpose the beam effect map into the initial image to form a beam effect image.
On the basis of the above embodiment, the apparatus further includes:
the light beam reference image generation module is used for, after the initial image is obtained, segmenting the initial image to obtain a light beam reference object, and forming a light beam reference image based on the segmented light beam reference object and a preset background;
the color data extraction module 310 is configured to: at least one set of color data in the beam reference image is extracted.
On the basis of the above embodiment, the at least one set of color data is color data on an extraction line in the initial image or the beam reference image, and the extraction line is a pixel row or a pixel column in the initial image or the beam reference image.
On the basis of the above embodiment, the light beam generation module 320 includes:
the initial light beam forming unit is used for, for any group of color data in the at least one group of color data, forming a color line based on each color value in the group of color data, so as to form an initial light beam corresponding to the color data;
the light beam gathering unit is used for gathering the initial light beam based on the virtual light source;
and the target light beam generating unit is used for splitting and extracting the gathered light beams to obtain target light beams corresponding to the color data.
Optionally, the beam-focusing unit is configured to:
and determining a light beam range of the gathered light beam based on the virtual light source, and extracting color data in the light beam range from the initial light beam to obtain the gathered light beam.
Optionally, the object beam generating unit is configured to:
determining the color difference value of adjacent light beams in the gathered light beams based on a preset step length in the width direction of the light beams;
and extracting target split beams from the gathered light beams based on the color difference value to form target light beams.
On the basis of the above embodiment, the apparatus further includes:
and the transparency setting module is used for setting the transparency of the light beams after the corresponding light beams are respectively formed on the basis of any one group of color data in the at least one group of color data, wherein the transparency is sequentially increased along the projection direction of the light beams.
On the basis of the above embodiment, the initial image is an image frame in a video, or the initial image is an image frame acquired by a camera in real time;
correspondingly, the light beam effect is added to each image frame in the video to form a light beam effect video, or to each image frame collected in real time to form a real-time light beam effect video stream.
The apparatus provided by the embodiment of the present disclosure can execute the method provided by any embodiment of the present disclosure, and has functional modules and beneficial effects corresponding to the executed method.
It should be noted that the units and modules included in the apparatus are divided merely according to functional logic; other divisions are possible as long as the corresponding functions can be implemented. In addition, the specific names of the functional units are only used to distinguish them from one another and do not limit the protection scope of the embodiments of the present disclosure.
Fig. 9 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present disclosure. As shown in fig. 9, the apparatus includes:
a video data obtaining module 410, configured to obtain video data to be projected;
a tone adjustment module 420 for adjusting the tone of the video data to a projected tone;
the video processing module 430 is configured to extract at least one group of color data in the image frame of the video data after the color adjustment, respectively form corresponding light beams based on any one group of color data in the at least one group of color data, overlap the light beams, form a light beam effect map corresponding to the image frame, and overlap the light beam effect map into the image frame to obtain special effect video data.
On the basis of the above embodiment, the apparatus further includes:
and the dithering processing module is used for carrying out random dithering processing on the video data before adjusting the tone of the video data into the projection tone.
On the basis of the above embodiment, the projected tones include blue tones;
the tone adjustment module 420 is configured to: and converting the color data of the image frame in the video data into color data corresponding to the blue tone according to the conversion relation between the channel color data under the current tone and the channel color data in the blue tone.
On the basis of the above embodiment, the apparatus further includes:
and the special effect video data display module is used for displaying the special effect video data based on an augmented reality mode.
The apparatus provided by the embodiment of the present disclosure can execute the method provided by any embodiment of the present disclosure, and has functional modules and beneficial effects corresponding to the executed method.
It should be noted that the units and modules included in the apparatus are divided merely according to functional logic; other divisions are possible as long as the corresponding functions can be implemented. In addition, the specific names of the functional units are only used to distinguish them from one another and do not limit the protection scope of the embodiments of the present disclosure.
Referring now to fig. 10, a schematic structural diagram of an electronic device 400 (e.g., a terminal device or a server) suitable for implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), or a vehicle terminal (e.g., a car navigation terminal), and stationary terminals such as a digital TV or a desktop computer. The electronic device shown in fig. 10 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in fig. 10, the electronic device 400 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 401 that may perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 402 or a program loaded from a storage means 408 into a random access memory (RAM) 403. In the RAM 403, various programs and data necessary for the operation of the electronic device 400 are also stored. The processing device 401, the ROM 402, and the RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 408 including, for example, tape, hard disk, etc.; and a communication device 409. The communication means 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. While fig. 10 illustrates an electronic device 400 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 409, or from the storage device 408, or from the ROM 402. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing device 401.
The electronic device provided by the embodiment of the present disclosure is the same as the image processing method or the video processing method provided by the above embodiment, and the technical details that are not described in detail in the embodiment can be referred to the above embodiment, and the embodiment has the same beneficial effects as the above embodiment.
The disclosed embodiments provide a computer storage medium on which a computer program is stored, which when executed by a processor implements the image processing method or video processing method provided by the above embodiments.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
acquiring an initial image, and extracting at least one set of color data in the initial image;
forming corresponding light beams based on any one set of color data in the at least one set of color data, respectively;
superposing the light beams to form a light beam effect graph corresponding to the initial image;
and superposing the light beam effect graph onto the initial image to form a light beam effect image.
Alternatively:
acquiring video data to be projected, and adjusting the tone of the video data to a projection tone;
and setting a projection light beam effect for each frame image in the video data after the tone adjustment to form special effect video data.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages or any combination thereof, including object-oriented programming languages such as Java, Smalltalk, or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. In some cases, the name of a unit or module does not constitute a limitation of the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, [ example one ] there is provided an image processing method, including:
acquiring an initial image, and extracting at least one set of color data in the initial image;
forming corresponding light beams based on any one set of color data in the at least one set of color data, respectively;
superposing the light beams to form a light beam effect graph corresponding to the initial image;
and superposing the light beam effect graph onto the initial image to form a light beam effect image.
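By way of illustration only, the following Python sketch shows one way such a pipeline might be assembled with NumPy; every function name, parameter, and blending choice here is hypothetical and not taken from the disclosure.

```python
import numpy as np

def extract_color_lines(image, step=8):
    """Take every `step`-th pixel row of an H x W x 3 image as one set of color data."""
    return [image[y] for y in range(0, image.shape[0], step)]

def beam_from_colors(color_line, height):
    """Stretch one (W, 3) color line into a beam strip that fades toward the beam tip."""
    fade = np.linspace(0.0, 1.0, height)[:, None, None]  # transparent at the top, opaque at the source
    return color_line[None, :, :].astype(np.float32) * fade

def beam_effect_map(image, step=8):
    """Superpose the beams of all extracted color lines into one effect map."""
    height = image.shape[0]
    beams = [beam_from_colors(line, height) for line in extract_color_lines(image, step)]
    return np.clip(sum(beams) / len(beams), 0.0, 255.0)

def beam_effect_image(image, strength=0.6):
    """Superpose the beam effect map onto the initial image."""
    blended = image.astype(np.float32) + strength * beam_effect_map(image)
    return np.clip(blended, 0.0, 255.0).astype(np.uint8)
```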
According to one or more embodiments of the present disclosure, [ example two ] there is provided an image processing method, further comprising:
after the initial image is acquired, the method further comprises: segmenting the initial image to obtain a light beam reference object, and forming a light beam reference image based on the segmented light beam reference object and a preset background;
the extracting of at least one set of color data in the initial image comprises: extracting at least one set of color data in the light beam reference image.
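A minimal compositing sketch, assuming a segmentation mask is already available from some model (obtaining the mask is outside the scope of this sketch):

```python
import numpy as np

def beam_reference_image(image, mask, background=(0, 0, 0)):
    """Composite the segmented beam reference object onto a preset background.

    `mask` is a hypothetical H x W array in [0, 1]: 1 on the reference object.
    """
    bg = np.zeros_like(image)
    bg[:] = background                       # fill the preset background color
    m = mask.astype(np.float32)[..., None]   # broadcast over the color channels
    return (image * m + bg * (1.0 - m)).astype(np.uint8)
```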
According to one or more embodiments of the present disclosure, [ example three ] there is provided an image processing method, further comprising:
the at least one set of color data is color data on an extraction line in the initial image or the light beam reference image, and the extraction line is a pixel row or a pixel column in the initial image or the light beam reference image.
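In array terms, an extraction line is simply one row or one column of the pixel array; the indices below are arbitrary examples:

```python
row_colors = image[120]     # pixel row 120 as one set of color data, shape (W, 3)
col_colors = image[:, 64]   # pixel column 64 instead, shape (H, 3)
```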
According to one or more embodiments of the present disclosure, [ example four ] there is provided an image processing method, further comprising:
the forming of corresponding light beams based on any one set of color data in the at least one set of color data respectively includes: for any one set of color data in the at least one set of color data, forming a color line based on the color values in the color data to form an initial light beam corresponding to the color data; and gathering the initial light beam based on a virtual light source, and performing beam-splitting extraction on the gathered light beam to obtain a target light beam corresponding to the color data.
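Repeating the color line along the projection direction gives one simple form of the initial light beam, sketched here under the assumption of a vertical projection:

```python
import numpy as np

def initial_beam(color_line, height):
    """Repeat a (W, 3) color line vertically so each column becomes one ray."""
    return np.tile(color_line[None, :, :], (height, 1, 1))
```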
According to one or more embodiments of the present disclosure, [ example five ] there is provided an image processing method, further comprising:
the gathering of the initial light beam based on the virtual light source comprises: determining a light beam range of the gathered light beam based on the virtual light source, and extracting color data within the light beam range from the initial light beam to obtain the gathered light beam.
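One plausible reading of the gathering step, assuming the virtual light source sits at column `source_x` on the bottom row and the beam range narrows linearly toward it (the geometry is an assumption, not the disclosed construction):

```python
import numpy as np

def gather_beam(beam, source_x):
    """Keep only the color data inside a beam range that converges on the source."""
    h, w, _ = beam.shape
    gathered = np.zeros_like(beam)
    xs = np.arange(w)
    for y in range(h - 1):                    # the bottom row collapses to a point
        t = y / (h - 1)                       # 0 at the beam tip, -> 1 near the source
        u = (xs - source_x * t) / (1.0 - t)   # invert the squeeze toward source_x
        inside = (u >= 0) & (u <= w - 1)      # the beam range at this row
        gathered[y, inside] = beam[y, u[inside].astype(int)]
    return gathered
```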
According to one or more embodiments of the present disclosure, [ example six ] there is provided an image processing method, further comprising:
the performing of beam-splitting extraction on the gathered light beam to obtain the target light beam corresponding to the color data includes: determining the color difference value of adjacent light beams in the gathered light beam based on a preset step length in the width direction of the light beam; and extracting target split beams from the gathered light beam based on the color difference value to form the target light beam.
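The beam-splitting extraction might then be read as sampling the gathered beam every few pixels across its width and keeping only the strips whose color departs noticeably from their neighbor; `step` and `threshold` below are illustrative values, not values from the disclosure:

```python
import numpy as np

def split_beams(gathered, step=4, threshold=30.0):
    """Extract target split beams wherever the adjacent color difference is large."""
    h, w, _ = gathered.shape
    target = np.zeros_like(gathered)
    prev = gathered[:, 0].astype(np.float32)
    for x in range(step, w, step):
        cur = gathered[:, x].astype(np.float32)
        if np.abs(cur - prev).mean() > threshold:   # color difference of adjacent strips
            target[:, x:x + step] = gathered[:, x:x + step]
        prev = cur
    return target
```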
According to one or more embodiments of the present disclosure, [ example seven ] there is provided an image processing method, further comprising:
after forming the corresponding light beams based on any one set of color data in the at least one set of color data respectively, the method further comprises: setting the transparency of the light beams, wherein the transparency increases progressively along the projection direction of the light beams.
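For the transparency setting, a linear alpha ramp is the simplest choice; the sketch assumes the projection runs from the bottom row (the source) toward the top:

```python
import numpy as np

def beam_with_alpha(beam_rgb):
    """Attach an alpha channel that grows more transparent along the projection."""
    h, w, _ = beam_rgb.shape
    alpha = np.broadcast_to(np.linspace(0.0, 1.0, h)[:, None], (h, w))  # 0 at the tip, 1 at the source
    return np.dstack([beam_rgb.astype(np.float32), alpha * 255.0]).astype(np.uint8)
```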
According to one or more embodiments of the present disclosure, [ example eight ] there is provided an image processing method, further comprising:
the initial image is an image frame in a video, or the initial image is a frame image acquired by a camera device in real time;
and respectively adding a light beam effect to image frames in the video to form a light beam effect video, or respectively adding a light beam effect to image frames acquired in real time to form a real-time light beam effect video stream.
According to one or more embodiments of the present disclosure, [ example nine ] there is provided a video processing method comprising:
acquiring video data to be projected, and adjusting the tone of the video data to a projection tone;
for each image frame in the video data after the tone adjustment, extracting at least one set of color data in the image frame, respectively forming corresponding light beams based on any one set of color data in the at least one set of color data, superposing the light beams to form a light beam effect graph corresponding to the image frame, and superposing the light beam effect graph onto the image frame to obtain special effect video data.
According to one or more embodiments of the present disclosure, [ example ten ] there is provided a video processing method, further comprising:
prior to adjusting the tone of the video data to a projection tone, the method further comprises: carrying out random dithering on the video data.
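A minimal sketch of the random dithering, on the assumption that its purpose is to break up banding before the tone is remapped (the noise amplitude is an arbitrary choice):

```python
import numpy as np

def random_dither(frame, amplitude=4.0, rng=None):
    """Add small uniform noise to every channel of an 8-bit frame."""
    rng = rng or np.random.default_rng()
    noise = rng.uniform(-amplitude, amplitude, frame.shape)
    return np.clip(frame.astype(np.float32) + noise, 0, 255).astype(np.uint8)
```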
According to one or more embodiments of the present disclosure, [ example eleven ] there is provided a video processing method, further comprising:
the projection tone comprises a blue tone;
the adjusting of the tone of the video data to a projection tone includes: converting the color data of image frames in the video data into color data corresponding to the blue tone according to the conversion relation between the channel color data under the current tone and the channel color data under the blue tone.
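The conversion relation is not spelled out in this summary; one hypothetical form is a fixed 3 x 3 channel-mixing matrix that suppresses red and green while favoring blue:

```python
import numpy as np

# Hypothetical mixing matrix from the current RGB channels to a blue-tinted RGB.
BLUE_TONE = np.array([[0.10, 0.10, 0.10],   # output R
                      [0.25, 0.45, 0.25],   # output G
                      [0.35, 0.35, 0.50]],  # output B
                     dtype=np.float32)

def to_projection_tone(frame):
    """Convert each pixel's channel color data into the blue projection tone."""
    mixed = frame.astype(np.float32) @ BLUE_TONE.T
    return np.clip(mixed, 0, 255).astype(np.uint8)
```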
According to one or more embodiments of the present disclosure, [ example twelve ] there is provided a video processing method, further comprising:
the method further comprises: displaying the special effect video data in an augmented reality mode.
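Putting examples nine through twelve together, a per-frame driver might look like the sketch below, reusing the hypothetical helpers above; how frames are read, displayed, or fed into an augmented reality view is left to the host pipeline:

```python
def special_effect_frame(frame):
    """One frame of the special effect video: dither, retone, then add beams."""
    frame = random_dither(frame)        # example ten
    frame = to_projection_tone(frame)   # example eleven
    return beam_effect_image(frame)     # beam effect from example one
```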
According to one or more embodiments of the present disclosure, [ example thirteen ] there is provided an image processing apparatus including:
the color data extraction module is used for acquiring an initial image and extracting at least one set of color data in the initial image;
the light beam generation module is used for respectively forming corresponding light beams based on any one set of color data in the at least one set of color data;
the light beam effect generation module is used for superposing the light beams to form a light beam effect graph corresponding to the initial image;
and the light beam effect image generation module is used for superposing the light beam effect graph onto the initial image to form a light beam effect image.
According to one or more embodiments of the present disclosure, [ example fourteen ] there is provided a video processing apparatus comprising:
the video data acquisition module is used for acquiring video data to be projected;
the tone adjusting module is used for adjusting the tone of the video data to a projection tone;
the video processing module is used for extracting at least one set of color data in the image frames of the video data after the tone adjustment, respectively forming corresponding light beams based on any one set of color data in the at least one set of color data, superposing the light beams to form light beam effect graphs corresponding to the image frames, and superposing the light beam effect graphs onto the image frames to obtain special effect video data.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to technical solutions formed by the particular combination of features described above, but also encompasses other technical solutions formed by any combination of the features described above or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by replacing the features described above with (but not limited to) features having similar functions disclosed in this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (16)

1. An image processing method, comprising:
acquiring an initial image, and extracting at least one set of color data in the initial image;
forming corresponding light beams based on any one set of color data in the at least one set of color data, respectively;
superposing the light beams to form a light beam effect graph corresponding to the initial image;
and superposing the light beam effect graph onto the initial image to form a light beam effect image.
2. The method of claim 1, after acquiring the initial image, further comprising:
segmenting the initial image to obtain a light beam reference object, and forming a light beam reference image based on the segmented light beam reference object and a preset background;
the extracting at least one set of color data in the initial image comprises:
extracting at least one set of color data in the light beam reference image.
3. The method of claim 1 or 2, wherein the at least one set of color data is color data on an extraction line in the initial image or the light beam reference image, the extraction line being a pixel row or a pixel column in the initial image or the light beam reference image.
4. The method of claim 1, wherein said forming respective light beams based on any one of said at least one set of color data comprises:
for any one set of color data in the at least one set of color data, forming a color line based on color values in the color data, and forming an initial light beam corresponding to the color data;
and gathering the initial light beam based on a virtual light source, and performing beam-splitting extraction on the gathered light beam to obtain a target light beam corresponding to the color data.
5. The method of claim 4, wherein the gathering of the initial light beam based on the virtual light source comprises:
determining a light beam range of the gathered light beam based on the virtual light source, and extracting color data within the light beam range from the initial light beam to obtain the gathered light beam.
6. The method of claim 4, wherein performing beam-splitting extraction on the gathered light beam to obtain the target light beam corresponding to the color data comprises:
determining the color difference value of adjacent light beams in the gathered light beams based on a preset step length in the width direction of the light beams;
and extracting target split beams from the gathered light beams based on the color difference value to form target light beams.
7. The method of claim 1, wherein after forming the corresponding light beams based on any one of the at least one set of color data, respectively, the method further comprises:
setting the transparency of the light beams, wherein the transparency increases progressively along the projection direction of the light beams.
8. The method according to claim 1, wherein the initial image is an image frame in a video, or the initial image is a frame image acquired by a camera in real time;
and respectively adding a light beam effect to image frames in the video to form a light beam effect video, or respectively adding a light beam effect to image frames acquired in real time to form a real-time light beam effect video stream.
9. A video processing method, comprising:
acquiring video data to be projected, and adjusting the tone of the video data to a projection tone;
for each image frame in the video data after the tone adjustment, extracting at least one set of color data in the image frame, respectively forming corresponding light beams based on any one set of color data in the at least one set of color data, superposing the light beams to form a light beam effect graph corresponding to the image frame, and superposing the light beam effect graph onto the image frame to obtain special effect video data.
10. The method of claim 9, wherein prior to adjusting the tone of the video data to the projection tone, the method further comprises:
carrying out random dithering on the video data.
11. The method of claim 9, wherein the projection tone comprises a blue tone;
the adjusting of the tone of the video data to the projection tone includes:
converting the color data of image frames in the video data into color data corresponding to the blue tone according to the conversion relation between the channel color data under the current tone and the channel color data under the blue tone.
12. The method of claim 9, further comprising:
displaying the special effect video data in an augmented reality mode.
13. An image processing apparatus characterized by comprising:
the color data extraction module is used for acquiring an initial image and extracting at least one set of color data in the initial image;
the light beam generation module is used for respectively forming corresponding light beams based on any one set of color data in the at least one set of color data;
the light beam effect generation module is used for superposing the light beams to form a light beam effect graph corresponding to the initial image;
and the light beam effect image generation module is used for superposing the light beam effect graph onto the initial image to form a light beam effect image.
14. A video processing apparatus, comprising:
the video data acquisition module is used for acquiring video data to be projected;
the tone adjusting module is used for adjusting the tone of the video data to a projection tone;
the video processing module is used for extracting at least one set of color data in the image frames of the video data after the tone adjustment, respectively forming corresponding light beams based on any one set of color data in the at least one set of color data, superposing the light beams to form light beam effect graphs corresponding to the image frames, and superposing the light beam effect graphs onto the image frames to obtain special effect video data.
15. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image processing method of any one of claims 1-8, or the video processing method of any one of claims 9-12.
16. A storage medium containing computer-executable instructions for performing the image processing method of any one of claims 1-8, or the video processing method of any one of claims 9-12 when executed by a computer processor.
CN202111592448.3A 2021-12-23 2021-12-23 Image processing method, video processing method, device, equipment and medium Pending CN114399425A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111592448.3A CN114399425A (en) 2021-12-23 2021-12-23 Image processing method, video processing method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111592448.3A CN114399425A (en) 2021-12-23 2021-12-23 Image processing method, video processing method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN114399425A (en) 2022-04-26

Family

ID=81226895

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111592448.3A Pending CN114399425A (en) 2021-12-23 2021-12-23 Image processing method, video processing method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN114399425A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150262427A1 (en) * 2014-03-17 2015-09-17 Fujifilm Corporation Augmented reality provision system, method, and non-transitory computer readable medium
CN107743263A (en) * 2017-09-20 2018-02-27 北京奇虎科技有限公司 Video data real-time processing method and device, computing device
CN110503725A (en) * 2019-08-27 2019-11-26 百度在线网络技术(北京)有限公司 Method, apparatus, electronic equipment and the computer readable storage medium of image procossing
CN111260766A (en) * 2020-01-17 2020-06-09 网易(杭州)网络有限公司 Virtual light source processing method, device, medium and electronic equipment
CN112138378A (en) * 2020-09-22 2020-12-29 网易(杭州)网络有限公司 Method, device and equipment for realizing flashing effect in 2D game and storage medium
CN112562056A (en) * 2020-12-03 2021-03-26 广州博冠信息科技有限公司 Control method, device, medium and equipment for virtual light in virtual studio

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BU, XIANGLEI et al.: "GPU-based fast volume rendering algorithm for medical images", Chinese Journal of Medical Physics, vol. 26, no. 03, pages 1167-1171 *

Similar Documents

Publication Publication Date Title
CN112989904B (en) Method for generating style image, method, device, equipment and medium for training model
CN111242881B (en) Method, device, storage medium and electronic equipment for displaying special effects
CN110062176B (en) Method and device for generating video, electronic equipment and computer readable storage medium
CN112241714B (en) Method and device for identifying designated area in image, readable medium and electronic equipment
CN111243049B (en) Face image processing method and device, readable medium and electronic equipment
CN114331820A (en) Image processing method, image processing device, electronic equipment and storage medium
CN113989173A (en) Video fusion method and device, electronic equipment and storage medium
WO2023071707A1 (en) Video image processing method and apparatus, electronic device, and storage medium
CN112380378B (en) Lyric special effect display method and device, electronic equipment and computer readable medium
CN110070495B (en) Image processing method and device and electronic equipment
CN112712487A (en) Scene video fusion method and system, electronic equipment and storage medium
CN113742025A (en) Page generation method, device, equipment and storage medium
WO2024016930A1 (en) Special effect processing method and apparatus, electronic device, and storage medium
CN114782659A (en) Image processing method, device, equipment and storage medium
CN115358919A (en) Image processing method, device, equipment and storage medium
CN111369431A (en) Image processing method and device, readable medium and electronic equipment
CN114332323A (en) Particle effect rendering method, device, equipment and medium
WO2023138441A1 (en) Video generation method and apparatus, and device and storage medium
CN114399425A (en) Image processing method, video processing method, device, equipment and medium
CN115953597B (en) Image processing method, device, equipment and medium
CN110555799A (en) Method and apparatus for processing video
CN115953504A (en) Special effect processing method and device, electronic equipment and storage medium
CN109889765A (en) Method for processing video frequency, video process apparatus and conference system
CN114866706A (en) Image processing method, image processing device, electronic equipment and storage medium
CN110148077B (en) Method for accelerating ELBP-IP core and MR intelligent glasses

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination