CN113766258A - Live broadcast room virtual gift presentation processing method, equipment and storage medium


Info

Publication number
CN113766258A
Authority
CN
China
Prior art keywords
pattern
filled
image
identification
virtual gift
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110852951.1A
Other languages
Chinese (zh)
Other versions
CN113766258B (en)
Inventor
Zhou Ping (周平)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Cubesili Information Technology Co Ltd
Original Assignee
Guangzhou Cubesili Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Cubesili Information Technology Co Ltd filed Critical Guangzhou Cubesili Information Technology Co Ltd
Priority to CN202110852951.1A
Publication of CN113766258A
Application granted
Publication of CN113766258B
Legal status: Active (anticipated expiration not listed)

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4784Supplemental services, e.g. displaying phone caller identification, shopping application receiving rewards

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a method, device, and storage medium for processing virtual gift presentation in a live broadcast room. The method comprises the following steps: acquiring an original image in response to a gift-sending instruction; determining, in the original image, a filling area having a desired contour; and acquiring identification patterns of the virtual gifts to be presented, and combining and arranging the identification patterns, according to the desired contour, into a target pattern matching the filling area. In this way, the user experience can be improved.

Description

Live broadcast room virtual gift presentation processing method, equipment and storage medium
Technical Field
The present application relates to the field of live broadcast technologies, and in particular, to a method, device, and storage medium for processing presentation of a virtual gift in a live broadcast room.
Background
With the rapid development of the mobile internet, the live broadcast economy has become a hot topic. Users rewarding anchors is one of its cores, and the main form of reward is the giving of virtual gifts. Live broadcast platforms therefore provide a variety of gifts, so that users can select their favorite gifts to present to the anchor.
However, if a user presents several gifts at once, existing processing methods display them one at a time and cannot present multiple gifts simultaneously, so the user does not perceive the gifts as being given together, which degrades the user experience.
Therefore, improving the way virtual gifts are presented is of great significance for improving the user experience.
Disclosure of Invention
The technical problem mainly solved by the present application is to provide a method, device, and storage medium for processing virtual gift presentation in a live broadcast room, so that the identification patterns of a plurality of virtual gifts can be displayed simultaneously, improving the user experience.
To solve the above technical problem, one technical solution adopted by the present application is to provide a live broadcast room virtual gift presentation processing method, comprising: acquiring an original image in response to a gift-sending instruction; determining, in the original image, a filling area having a desired contour; and acquiring identification patterns of the virtual gifts to be presented, and combining and arranging the identification patterns, according to the desired contour, into a target pattern matching the filling area.
The beneficial effect of the present application is that, unlike the prior art, a filling area having a desired contour is determined in the original image, and the identification patterns of the virtual gifts are combined and arranged, according to the desired contour, into a target pattern matching the filling area, so that multiple virtual gifts are displayed simultaneously within the desired contour, improving the user experience.
To solve the above technical problem, another technical solution adopted by the present application is to provide an electronic device comprising a processor, a memory, and a communication circuit, the communication circuit and the memory each coupled to the processor, the memory storing a computer program, and the processor being configured to execute the computer program to implement the virtual gift presentation method described above.
To solve the above technical problem, yet another technical solution adopted by the present application is to provide a computer-readable storage medium storing a computer program executable by a processor to implement the virtual gift presentation method described above.
The beneficial effect of these solutions is the same: the identification patterns of multiple virtual gifts are displayed simultaneously within the desired contour, improving the user experience.
Drawings
FIG. 1 is a first flowchart of an embodiment of the live broadcast room virtual gift presentation processing method of the present application;
FIG. 2 is a schematic diagram of related images in an embodiment of the live broadcast room virtual gift presentation processing method of the present application;
FIG. 3 is a second flowchart of an embodiment of the live broadcast room virtual gift presentation processing method of the present application;
FIG. 4 is a schematic diagram of filling identification patterns in an embodiment of the live broadcast room virtual gift presentation processing method of the present application;
FIG. 5 is a schematic diagram of a target pattern in an embodiment of the live broadcast room virtual gift presentation processing method of the present application;
FIG. 6 is a block diagram of an embodiment of the live broadcast room virtual gift presentation processing apparatus of the present application;
FIG. 7 is a block diagram of an embodiment of an electronic device of the present application;
FIG. 8 is a block diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the present application, the device executing the live broadcast room virtual gift presentation processing method may be a network server or a mobile terminal, where the mobile terminal is, for example, a mobile phone, a tablet computer, or a notebook computer.
Referring to fig. 1, fig. 1 is a first flowchart of an embodiment of the live broadcast room virtual gift presentation processing method of the present application. In this embodiment, the method includes the following steps:
Step S11: acquiring an original image in response to a gift-sending instruction.
A live broadcast room is configured with an option for presenting virtual gifts, so a user can present virtual gifts in the live broadcast room. For example, a viewing user may present virtual gifts to the anchor, anchors may present virtual gifts to each other, or the anchor may present virtual gifts to users. When a user performs the operation of presenting a virtual gift, the user can be considered to have triggered the gift-sending instruction. Correspondingly, the execution body of the live broadcast room virtual gift presentation processing method may, in response to the gift-sending instruction triggered by the user, perform the operation of acquiring the original image.
The original image may be uploaded by the user or determined by the network server; it can be understood that the manner of obtaining the original image is not limited.
Step S12: determining a filling area having a desired contour in the original image.
After the original image is acquired, a filling area having a desired contour may be determined in the original image for filling with the identification patterns of the virtual gifts. The shape of the desired contour may be set as needed and is not limited here.
In one embodiment, the original image may be processed to obtain a contour pattern of a target object in the original image. In this case, the contour displayed by the contour pattern may serve as the desired contour, and the area where the contour pattern is located may serve as the filling area. The target object may be located in the original image by any general image detection and segmentation method. Likewise, the image processing may be any general method; for example, the contour pattern of the target object may be determined by generating a mask, which is not described again here. Thus, by determining the contour pattern of the target object in the original image, a filling area whose desired contour is the contour of that pattern can be obtained.
In one embodiment, the color of the target object may be processed into a first color, and the color of the region outside the target object may be processed into a second color different from the first color, to obtain the contour pattern of the target object. The first color is, for example, white, and the second color is, for example, black. The color adjustment may use a general image processing method such as image binarization, which is not described again here. By adjusting the colors in this way, the target object is distinguished from the other regions by color, so the contour pattern of the target object can be obtained, and the desired contour is then determined from that contour pattern.
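By way of illustration only and not as part of the disclosure, the following Python sketch shows one way this binarization step might look, assuming NumPy is available and a segmentation mask for the target object has already been obtained by some general method (the function and parameter names are illustrative):

    import numpy as np

    def contour_pattern(object_mask):
        # Paint the target object the first color (white) and the region
        # outside it the second color (black) to obtain the contour pattern.
        # object_mask is assumed to be a uint8 array, nonzero on the object.
        pattern = np.zeros(object_mask.shape, dtype=np.uint8)
        pattern[object_mask > 0] = 255  # first color: white
        return pattern                  # remainder stays 0: second color, black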
Step S13: acquiring identification patterns of the virtual gifts to be presented, and combining and arranging the plurality of identification patterns, according to the desired contour, into a target pattern matching the filling area.
The identification pattern of a virtual gift may be a pattern displayed on a display device; for example, it may be a rocket, a shoe-shaped gold ingot, or the like. The virtual gifts to be presented may be virtual gifts the user has selected to present to the anchor, or may be determined in other ways.
The plurality of identification patterns may correspond to different kinds of virtual gifts, or to several virtual gifts of the same kind. For example, they may be a rocket, a shoe-shaped gold ingot, and a flower, or several rockets.
The plurality of identification patterns may be combined and arranged according to the desired contour in a certain direction, or the system may arrange them randomly according to a random algorithm. It can be understood that the manner of combination and arrangement is not limited and may be set as needed.
In one embodiment, the number of gifts to be presented and their identification patterns may be obtained, and the corresponding number of identification patterns may then be combined and arranged, according to the desired contour, into a target pattern matching the filling area.
Referring to fig. 2, fig. 2 is a schematic diagram of related images in an embodiment of the live broadcast room virtual gift presentation processing method of the present application. The image 21 is an original image, in which the region 211 is the filling area and the contour of the region 211 is the desired contour. The image 22 is an image in which the target pattern 221 has been filled into the filling area; the target pattern 221 is formed by combining and arranging a plurality of identification patterns, according to the desired contour, to match the filling area.
In this way, a filling area having a desired contour is determined in the original image, and the identification patterns of the virtual gifts are combined and arranged, according to the desired contour, into a target pattern matching the filling area, so that multiple virtual gifts are displayed simultaneously within the desired contour, improving the user experience.
Referring to fig. 3, fig. 3 is a second flowchart of an embodiment of the live broadcast room virtual gift presentation processing method of the present application. In this embodiment, the step of combining and arranging the plurality of identification patterns, according to the desired contour, into a target pattern matching the filling area may specifically include steps S131 to S133.
Step S131: generating a pixel matrix corresponding to the pixels of the original image and used for identifying the filling area.
The pixel matrix may be a matrix obtained by processing the pixel values of the pixels in the original image. In one embodiment, the values in the pixel matrix are grayscale values. In other embodiments, they may be values obtained by processing the original pixel values in other ways.
In one embodiment, corresponding to the embodiment above in which the color of the target object is processed into a first color and the color of the region outside the target object into a second color, the step of generating the pixel matrix may specifically include steps S1311 and S1312.
Step S1311: identifying the first color as a first value and the second color as a second value.
Identifying the first color as the first value and the second color as the second value may specifically mean adjusting the pixel values of pixels of the first color to the first value, and the pixel values of pixels of the second color to the second value. The first and second values may be chosen as needed; in one embodiment, the first value is 0 and the second value is 1.
Step S1312: generating the corresponding pixel matrix from the first value and the second value according to the arrangement order of the pixels of the processed original image.
The processed original image is the image in which the first color has been identified as the first value and the second color as the second value. The pixel matrix is generated based on the arrangement order of the pixels of this image.
Thus, by identifying the first color as a first value and the second color as a second value, the pixel matrix can be derived from the two values.
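A minimal sketch of this step, continuing the assumptions above (NumPy; white marks the target object), might identify the two colors with the values 0 and 1 as follows; the function name is illustrative only:

    import numpy as np

    def pixel_matrix(pattern):
        # First color (white, 255) -> first value 0; second color (black)
        # -> second value 1, preserving the original pixel order.
        return np.where(pattern > 0, 0, 1)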
Step S132: acquiring positions to be filled in the filling area by using the pixel matrix.
After the pixel matrix is obtained, the positions to be filled can be determined from the pixel values within the filling area. For example, an area inside the filling area that can accommodate the identification pattern of a virtual gift, given the pattern's size, may be determined to be a position to be filled.
In one embodiment, the pixel matrix may be processed with an integral image algorithm to obtain the positions to be filled. In an integral image, the value at any point (x, y) is the sum of the pixel values of all points in the rectangular region from the upper-left corner of the image to that point.
Since the filling area has been identified by the pixel matrix in step S131, the integral value of a candidate area can be calculated by the integral algorithm, and an area whose integral value meets the requirement is taken as a position to be filled. Thus, using the integral algorithm, positions to be filled can be found by computing integral values.
In one embodiment, calculating the pixel matrix through the integral image algorithm to obtain the positions to be filled may specifically include steps S1321 and S1322.
Step S1321: calculating an integral image of the pixel matrix.
Within the integral image, the part corresponding to the filling area is defined as the local integral image. The integral image is computed from the pixel values in the pixel matrix; the computation is a common one in the art and is not described again here.
In a specific embodiment, the pixels corresponding to the filling area in the pixel matrix have value 0 and the pixels corresponding to the non-filling area have value 1; the integral image can then be obtained directly from the pixel matrix.
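As a hedged illustration of this common calculation, an integral image can be obtained from the pixel matrix with two cumulative sums, for example:

    import numpy as np

    def integral_image(matrix):
        # Value at (y, x) is the sum of all matrix entries in the
        # rectangle from (0, 0) to (y, x), inclusive.
        return matrix.cumsum(axis=0).cumsum(axis=1)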
Step S1322: calculating the local integral image corresponding to the filling area based on the integral image, to obtain the positions to be filled.
Calculating the local integral image means computing, from the integral image, the integral value of a candidate area within the filling area, and then judging whether that integral value meets the requirement to decide whether the area is a position to be filled. The size of a position to be filled may be equal to, larger than, or smaller than the original size of the identification pattern.
In one embodiment, the integral value of a certain area may be calculated using formula (1).
integral(x2-x1, y2-y1) = integral(x2, y2) - integral(x1-1, y2) - integral(x2, y1-1) + integral(x1-1, y1-1)    (1)
Wherein, integral (x2-x1, y2-y1) represents the integral value of the (x2-x1, y2-y1) area, integral (x2, y2) represents the integral value of the point (x2, y2), integral (x1-1, y2) represents the integral value of the point (x1-1, y2), integral (x2, y1-1) represents the integral value of the point (x2, y1-1), and integral (x1-1, y1-1) represents the integral value of the point (x1-1, y 1-1).
Therefore, by the above formula (1), the integral value of a certain area in the filling area corresponding to the filling area can be calculated, and then the position to be filled can be determined by judging whether the integral value meets the requirement.
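A sketch of formula (1) in code, assuming the integral_image helper above and treating indices of -1 as contributing 0, could read:

    def region_sum(ii, x1, y1, x2, y2):
        # Integral value of the window with upper-left corner (x1, y1)
        # and lower-right corner (x2, y2), by inclusion-exclusion.
        def at(x, y):
            return int(ii[y, x]) if x >= 0 and y >= 0 else 0
        return at(x2, y2) - at(x1 - 1, y2) - at(x2, y1 - 1) + at(x1 - 1, y1 - 1)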
For the integral image in which the pixels corresponding to the filling area have value 0 and the pixels corresponding to the non-filling area have value 1, a sliding window of a certain size, for example 50 × 50 (the window size may be set as needed), can be used to obtain the positions to be filled. Specifically, this may include the following steps 1 to 4; a code sketch follows step 4.
Step 1: traversing the integral image with the sliding window to obtain the integral value of the area corresponding to each window position.
As the sliding window traverses the integral image, the integral value of the area corresponding to each window position is obtained, and thus the integral value of any candidate area within the filling area. Specifically, each integral value can be calculated using formula (1) above.
Step 2: judging whether the integral value corresponding to the sliding window meets a preset requirement.
The preset requirement is, for example, that the integral value is 0. In other embodiments, the preset requirement may be that the integral value is less than a preset threshold.
It can be understood that if the area corresponding to the sliding window lies within the filling area, the values of all its pixels are 0, so its integral value is also 0. Such an area can accommodate the identification pattern of a virtual gift and can therefore be determined to be a position to be filled.
The mathematical expression of the integral value of the area corresponding to the sliding window is as follows:
integral(x2-x1, y2-y1) = integral(x2, y2) - integral(x1-1, y2) - integral(x2, y1-1) + integral(x1-1, y1-1)
wherein (x1, y1) is the coordinate of the upper-left corner of the sliding window area, (x2, y2) is the coordinate of the lower-right corner, and integral(x2-x1, y2-y1) represents the integral value of the window area, as in formula (1).
Step 3: if the integral value corresponding to the sliding window meets the preset requirement, determining that the area corresponding to the sliding window is a position to be filled.
Step 4: if the integral value corresponding to the sliding window does not meet the preset requirement, determining that the area corresponding to the sliding window is not a position to be filled.
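Steps 1 to 4 might be sketched as follows, reusing region_sum from above; the preset requirement here is an integral value of 0, and the stride choice (defaulting to the window size so accepted windows do not overlap) is an assumption rather than something the text prescribes:

    def find_positions(ii, win, stride=None):
        # Slide a win x win window over the integral image and keep every
        # window whose integral value is 0, i.e. every window lying
        # entirely inside the filling area.
        stride = stride or win
        h, w = ii.shape
        positions = []
        for y1 in range(0, h - win + 1, stride):
            for x1 in range(0, w - win + 1, stride):
                if region_sum(ii, x1, y1, x1 + win - 1, y1 + win - 1) == 0:
                    positions.append((x1, y1))
        return positions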
In one embodiment, the positions to be filled may include first positions to be filled that match the original size of the identification pattern, and second positions to be filled that are smaller than the original size. A first position matching the original size means that the first position may be larger than or equal to the original size of the identification pattern.
To obtain the first positions to be filled, the local integral image corresponding to the filling area is calculated based on the integral image within the filling area, and first positions matching the original size of the identification pattern are acquired. The procedure follows steps 1 to 4 above, with the sliding window set to the original size of the identification pattern; for example, if the original size of the identification pattern is 80 × 80, the sliding window may be set to 80 × 80.
To obtain the second positions to be filled, the local integral image corresponding to the filling area is calculated based on the integral image within the remaining area of the filling area, and second positions smaller than the original size of the identification pattern are acquired. The remaining area of the filling area is the part of the filling area other than the positions already determined to be first positions. The procedure again follows steps 1 to 4; in one example, the original size of the identification pattern is 80 × 80 and the sliding window is 50 × 50.
Therefore, by setting first and second positions to be filled, identification patterns of different sizes can be placed within the filling area, enabling the filling area to accommodate more identification patterns.
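The two-pass search for first and then second positions could be sketched like this, with the 80 × 80 and 50 × 50 sizes taken from the examples above; marking accepted windows as occupied stands in for "the remaining area":

    def two_pass_positions(matrix, full=80, small=50):
        # Pass 1: windows matching the pattern's original size.
        work = matrix.copy()
        firsts = find_positions(integral_image(work), full)
        for x1, y1 in firsts:
            work[y1:y1 + full, x1:x1 + full] = 1  # no longer available
        # Pass 2: smaller windows in the remaining area.
        seconds = find_positions(integral_image(work), small)
        return firsts, seconds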
Step S133: forming the target pattern by combining and arranging the plurality of identification patterns at the respective positions to be filled.
Through step S132, a number of positions to be filled are determined within the filling area, and the target pattern can then be formed by arranging identification patterns at those positions. For example, if there are 50 positions to be filled, 50 identification patterns may be arranged at the respective positions to form the target pattern.
In one embodiment, corresponding to the determination of first and second positions to be filled in the filling area, forming the target pattern by combining and arranging the plurality of identification patterns at the respective positions to be filled may specifically include steps 1331 to 1333.
Step 1331: combining and arranging identification patterns at the respective first positions to be filled.
Since the size of a first position to be filled matches the original size of the identification pattern, identification patterns can be arranged directly at the first positions to be filled.
Step 1332: adjusting the size of the identification pattern so that the adjusted size matches the size of the second position to be filled.
The adjusted size matching the second position means that the size of the second position to be filled may be greater than or equal to the adjusted size of the identification pattern.
Because the size of a second position to be filled is smaller than the original size of the identification pattern, the identification pattern is resized first so that its adjusted size fits the second position to be filled.
Step 1333: combining and arranging the adjusted identification patterns at the respective second positions to be filled to form the target pattern.
Since the adjusted identification pattern fits the second positions to be filled, the adjusted identification patterns can be arranged at the respective second positions to form the target pattern.
It will be appreciated that in some embodiments the second positions to be filled may come in several different sizes; for example, one size may be 50 × 50 and another 30 × 30. In that case, steps 1332 and 1333 may be performed repeatedly for the second positions of each size.
Therefore, by filling identification patterns at both the first and the second positions to be filled, the target pattern can contain more identification patterns, its image information becomes richer, and the user experience is further improved. A sketch of this compositing step follows.
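The following sketch of steps 1331 to 1333 assumes OpenCV for resizing and BGR images (neither is mandated by the text; the names are illustrative):

    import cv2

    def compose_target(canvas, icon, firsts, seconds, small=50):
        # Paste the identification pattern at its original size at each
        # first position, and a resized copy at each second position.
        h, w = icon.shape[:2]
        for x1, y1 in firsts:
            canvas[y1:y1 + h, x1:x1 + w] = icon
        small_icon = cv2.resize(icon, (small, small))
        for x1, y1 in seconds:
            canvas[y1:y1 + small, x1:x1 + small] = small_icon
        return canvas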
It should be understood that the execution order of steps 1331 and 1332 is not limited: step 1331 may be executed first, steps 1332 and 1333 may be executed first, or steps 1331 and 1332 may be executed simultaneously.
In one embodiment, when the identification patterns of the virtual gifts correspond to different kinds of virtual gifts, the positions to be filled may be obtained separately for each kind of virtual gift, or the identification patterns of all the virtual gifts may first be adjusted to the same size before the positions to be filled are obtained.
In one embodiment, the step of combining and arranging the identification patterns, according to the desired contour, into a target pattern matching the filling area specifically includes steps S134 and S135.
Step S134: acquiring a background image.
The background image may be an image uploaded by the user or an image determined by the service system; it can be understood that the manner of obtaining the background image is not limited.
Step S135: forming the target pattern by combining and arranging the plurality of identification patterns on the background image according to the desired contour.
After the background image is obtained, the target pattern may be formed by arranging the identification patterns on the background image according to the desired contour. Specifically, the desired contour may be superimposed on the background image so that it is determined on the background image, and the identification patterns may then be combined and arranged on the background image according to that contour to form the target pattern.
Forming the target pattern on the background image in this way allows the target pattern to be displayed on the background image, enriching the display modes of the target pattern and improving the user experience.
In one embodiment, after the step of forming the target pattern by combining and arranging the plurality of identification patterns on the background image according to the desired contour, the following step may further be performed: performing transparency processing on the area of the background image other than the target pattern to present the target pattern. In this way, the background image displays only the target pattern, making the target pattern more prominent and further improving the user experience.
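The transparency processing might be sketched as follows, assuming OpenCV and a BGRA output format (an assumption; the text fixes no image format):

    import cv2

    def make_transparent(background_bgr, target_mask):
        # Add an alpha channel and make every pixel outside the target
        # pattern fully transparent, so only the target pattern shows.
        bgra = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2BGRA)
        bgra[target_mask == 0, 3] = 0
        return bgra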
Referring to fig. 4, fig. 4 is a schematic diagram of filling identification patterns in an embodiment of the live broadcast room virtual gift presentation processing method of the present application. In fig. 4, the area 401 in diagram (a) is a first position to be filled and 402 is an identification pattern; the area 403 in diagram (b) is a second position to be filled and 404 is the resized identification pattern.
Referring to fig. 5, fig. 5 is a schematic diagram of a target pattern in an embodiment of the live broadcast room virtual gift presentation processing method of the present application. In fig. 5, the pattern 501 is the target pattern. The area of the background image other than the target pattern has been made transparent, so only the target pattern is visible in fig. 5.
Referring to fig. 6, fig. 6 is a block diagram of an embodiment of the virtual gift presentation processing apparatus of the present application. The virtual gift presentation processing apparatus 60 includes an acquisition module 61, a determination module 62, and a filling module 63. The acquisition module 61 is configured to acquire an original image in response to a gift-sending instruction; the determination module 62 is configured to determine a filling area having a desired contour in the original image; the filling module 63 is configured to acquire identification patterns of the virtual gifts to be presented and to combine and arrange a plurality of the identification patterns, according to the desired contour, into a target pattern matching the filling area.
Specifically, to combine and arrange the identification patterns into the target pattern, the filling module 63: generates a pixel matrix corresponding to the pixels of the original image and used for identifying the filling area; acquires positions to be filled in the filling area by using the pixel matrix; and forms the target pattern by combining and arranging the plurality of identification patterns at the respective positions to be filled.
To acquire the positions to be filled using the pixel matrix, the filling module 63 calculates the pixel matrix through an integral image algorithm.
To calculate the pixel matrix through the integral image algorithm, the filling module 63: calculates an integral image of the pixel matrix; and calculates a local integral image corresponding to the filling area based on the integral image to obtain the positions to be filled.
To obtain the positions to be filled from the integral image and the local integral image, the filling module 63: in the filling area corresponding to the local integral image, calculates the local integral image based on the integral image and acquires first positions to be filled that match the original size of the identification pattern; in the remaining area of the filling area, acquires second positions to be filled that are smaller than the original size of the identification pattern; combines and arranges identification patterns at the respective first positions to be filled; adjusts the size of the identification pattern so that the adjusted size matches the size of the second positions to be filled; and combines and arranges the adjusted identification patterns at the respective second positions to be filled to form the target pattern.
The determination module 62 determines the filling area by performing image processing on the original image to obtain a contour pattern of the target object, wherein the contour displayed by the contour pattern serves as the desired contour and the area where the contour pattern is located serves as the filling area.
To obtain the contour pattern, the determination module 62 processes the color of the target object into a first color and the color of the region outside the target object into a second color different from the first color.
To generate the pixel matrix, the filling module 63 identifies the first color as a first value and the second color as a second value, and generates the pixel matrix from the two values according to the arrangement order of the pixels of the processed original image.
To combine and arrange the identification patterns into the target pattern, the filling module 63 may also acquire a background image and form the target pattern by combining and arranging the plurality of identification patterns on the background image according to the desired contour.
The virtual gift presentation processing apparatus 60 further includes a background image processing module which, after the filling module 63 forms the target pattern on the background image, performs transparency processing on the area of the background image other than the target pattern to present the target pattern.
The filling module 63 may also acquire the number of gifts to be presented and their identification patterns, and combine and arrange the corresponding number of identification patterns, according to the desired contour, into a target pattern matching the filling area.
Referring to fig. 7, fig. 7 is a block diagram of an embodiment of the electronic device of the present application. The electronic device 70 includes a memory 701 and a processor 702 coupled to each other, and the processor 702 is configured to execute program instructions stored in the memory 701 to implement the steps of any of the above embodiments of the live broadcast room virtual gift presentation processing method. In one specific implementation scenario, the electronic device 70 may include, but is not limited to, a microcomputer or a server, and may also be a mobile device such as a notebook computer or a tablet computer, which is not limited here.
Specifically, the processor 702 is configured to control itself and the memory 701 to implement the steps of any of the above embodiments. The processor 702 may also be referred to as a CPU (Central Processing Unit) and may be an integrated circuit chip with signal processing capability. The processor 702 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or any conventional processor. In addition, the processor 702 may be implemented jointly by a plurality of integrated circuit chips.
Referring to fig. 8, fig. 8 is a block diagram of an embodiment of the computer-readable storage medium of the present application. The computer-readable storage medium 80 stores program instructions 81 executable by a processor, and the program instructions 81 are used to implement the steps of any of the above embodiments of the live broadcast room virtual gift presentation processing method.
According to the above scheme, a filling area having a desired contour is determined in the original image, and the identification patterns of the virtual gifts are combined and arranged, according to the desired contour, into a target pattern matching the filling area, so that multiple virtual gifts are displayed simultaneously within the desired contour, improving the user experience.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
The foregoing description of the various embodiments is intended to highlight various differences between the embodiments, and the same or similar parts may be referred to each other, and for brevity, will not be described again herein.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is merely one type of logical division, and an actual implementation may have another division, for example, a unit or a component may be combined or integrated with another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some interfaces, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on network elements. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.

Claims (13)

1. A live broadcast room virtual gift presentation processing method, characterized by comprising:
acquiring an original image in response to a gift-sending instruction;
determining, in the original image, a filling area having a desired contour;
and acquiring identification patterns of the virtual gifts to be presented, and combining and arranging a plurality of the identification patterns, according to the desired contour, into a target pattern matching the filling area.
2. The live broadcast room virtual gift presentation processing method of claim 1, wherein said combining and arranging the plurality of identification patterns, according to the desired contour, into a target pattern matching the filling area comprises:
generating a pixel matrix corresponding to the pixels of the original image and used for identifying the filling area;
acquiring positions to be filled in the filling area by using the pixel matrix;
and forming the target pattern by combining and arranging the plurality of identification patterns at the respective positions to be filled.
3. The live broadcast room virtual gift presentation processing method of claim 2, wherein said acquiring positions to be filled in the filling area by using the pixel matrix comprises:
calculating the pixel matrix through an integral image algorithm to obtain the positions to be filled.
4. The live broadcast room virtual gift presentation processing method of claim 3, wherein said calculating the pixel matrix through an integral image algorithm to obtain the positions to be filled comprises:
calculating an integral image of the pixel matrix;
and calculating a local integral image corresponding to the filling area based on the integral image to obtain the positions to be filled.
5. The live broadcast room virtual gift presentation processing method of claim 4, wherein said calculating a local integral image corresponding to the filling area based on the integral image to obtain the positions to be filled comprises:
in the filling area corresponding to the local integral image, calculating the local integral image corresponding to the filling area based on the integral image, and acquiring first positions to be filled that match the original size of the identification pattern;
in the remaining area of the filling area corresponding to the local integral image, calculating the local integral image corresponding to the filling area based on the integral image, and acquiring second positions to be filled that are smaller than the original size of the identification pattern;
and wherein said forming the target pattern by combining and arranging the plurality of identification patterns at the respective positions to be filled comprises: combining and arranging identification patterns at the respective first positions to be filled;
adjusting the size of the identification pattern so that the adjusted size matches the size of the second positions to be filled;
and combining and arranging the adjusted identification patterns at the respective second positions to be filled to form the target pattern.
6. The live broadcast room virtual gift presentation processing method of claim 2, wherein said determining, in the original image, a filling area having a desired contour comprises:
performing image processing on the original image to obtain a contour pattern of a target object in the original image, wherein the contour displayed by the contour pattern serves as the desired contour and the area where the contour pattern is located serves as the filling area.
7. The live broadcast room virtual gift presentation processing method of claim 6, wherein said performing image processing on the original image to obtain a contour pattern of a target object in the original image comprises:
processing the color of the target object into a first color, and processing the color of the region outside the target object into a second color different from the first color, to obtain the contour pattern of the target object.
8. The live broadcast room virtual gift presentation processing method of claim 7, wherein said generating a pixel matrix corresponding to the pixels of the original image and used for identifying the filling area comprises:
identifying the first color as a first value and the second color as a second value;
and generating the pixel matrix from the first value and the second value according to the arrangement order of the pixels of the processed original image.
9. The live broadcast room virtual gift presentation processing method of claim 1, wherein said combining and arranging the identification patterns, according to the desired contour, into a target pattern matching the filling area comprises:
acquiring a background image;
and forming the target pattern by combining and arranging the plurality of identification patterns on the background image according to the desired contour.
10. The live broadcast room virtual gift presentation processing method of claim 9, further comprising, after said forming the target pattern by combining and arranging the plurality of identification patterns on the background image according to the desired contour:
performing transparency processing on the area of the background image other than the target pattern to present the target pattern.
11. The live broadcast room virtual gift presentation processing method of claim 1, wherein said acquiring identification patterns of the virtual gifts to be presented and combining and arranging a plurality of the identification patterns, according to the desired contour, into a target pattern matching the filling area comprises: acquiring the number of gifts to be presented and their identification patterns, and combining and arranging the corresponding number of identification patterns, according to the desired contour, into a target pattern matching the filling area.
12. An electronic device, comprising: a processor, a memory, and a communication circuit, the communication circuit and the memory each coupled to the processor, the memory storing a computer program, and the processor being configured to execute the computer program to implement the live broadcast room virtual gift presentation processing method of any one of claims 1 to 11.
13. A computer-readable storage medium storing a computer program, the computer program being executable by a processor to implement the live broadcast room virtual gift presentation processing method of any one of claims 1 to 11.
CN202110852951.1A, filed 2021-07-27 (priority 2021-07-27): Live broadcast room virtual gift presentation processing method, equipment and storage medium. Granted as CN113766258B (Active).

Priority Applications (1)

Application number: CN202110852951.1A
Priority date / filing date: 2021-07-27
Title: Live broadcast room virtual gift presentation processing method, equipment and storage medium

Publications (2)

CN113766258A, published 2021-12-07
CN113766258B, granted, published 2023-04-11

Family

ID=78788014

Family Applications (1)

Application number: CN202110852951.1A (filed 2021-07-27, Active)
Title: Live broadcast room virtual gift presentation processing method, equipment and storage medium

Country Status (1)

CN: CN113766258B (granted)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114458979A (en) * 2022-02-10 2022-05-10 珠海读书郎软件科技有限公司 Intelligent table lamp for assisting paging identification, identification method and storage medium thereof

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7102649B2 (en) * 1999-05-25 2006-09-05 Nippon Telegraph And Telephone Corporation Image filling method, apparatus and computer readable medium for reducing filling process in processing animation
US20110124390A1 (en) * 2009-05-12 2011-05-26 Richard Wilen Commercial Game System and Method
CN105844533A (en) * 2016-03-31 2016-08-10 腾讯科技(深圳)有限公司 Information transmission method and device
CN108235102A (en) * 2017-12-29 2018-06-29 广州酷狗计算机科技有限公司 Method for processing business, device and storage medium
CN110493630A (en) * 2019-09-11 2019-11-22 广州华多网络科技有限公司 The treating method and apparatus of virtual present special efficacy, live broadcast system
CN111399729A (en) * 2020-03-10 2020-07-10 北京字节跳动网络技术有限公司 Image drawing method and device, readable medium and electronic equipment
CN111784418A (en) * 2020-07-27 2020-10-16 网易(杭州)网络有限公司 Display control method and device for live broadcast room, computer medium and electronic equipment
CN112449205A (en) * 2019-09-03 2021-03-05 腾讯科技(深圳)有限公司 Information interaction method and device, terminal equipment and storage medium
CN112492336A (en) * 2020-11-20 2021-03-12 完美世界(北京)软件科技发展有限公司 Gift sending method, device, electronic equipment and readable medium



Also Published As

CN113766258B, published 2023-04-11


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant