CN112235633A - Output effect adjusting method, device, equipment and computer readable storage medium - Google Patents


Info

Publication number
CN112235633A
Authority
CN
China
Prior art keywords
output
output effect
effect
label
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011109704.4A
Other languages
Chinese (zh)
Inventor
佟林府
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Skyworth RGB Electronics Co Ltd
Original Assignee
Shenzhen Skyworth RGB Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Skyworth RGB Electronics Co Ltd filed Critical Shenzhen Skyworth RGB Electronics Co Ltd
Priority to CN202011109704.4A priority Critical patent/CN112235633A/en
Publication of CN112235633A publication Critical patent/CN112235633A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/485 End-user interface for client configuration
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/835 Generation of protective data, e.g. certificates
    • H04N 21/8352 Generation of protective data, e.g. certificates involving content or source identification data, e.g. Unique Material Identifier [UMID]

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an output effect adjusting method, device, equipment and computer-readable storage medium. The method comprises: obtaining an output effect label corresponding to an output object; obtaining a target output parameter matched with the output effect label; and outputting the output object according to the target output parameter. The invention outputs output objects of different kinds and styles with different output parameters, so that each kind of output object can achieve its best output effect, improving the viewing and listening experience of users.

Description

Output effect adjusting method, device, equipment and computer readable storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a method, an apparatus, a device, and a computer-readable storage medium for adjusting an output effect.
Background
With the continuous development and popularization of electronic device functions, using electronic devices to listen to audio and to watch videos and images has become very common. As the various technologies of electronic devices mature, people's requirements for the picture output effect of videos and images and for the sound effect of audio keep rising. However, since output objects such as videos and audio come in many kinds and styles, it is difficult for current electronic devices to provide a satisfactory experience for different output objects while using the same output parameters.
Disclosure of Invention
The main purpose of the present invention is to provide an output effect adjusting method, device, equipment and computer-readable storage medium, aiming to solve the technical problem that it is difficult for current electronic devices to provide a satisfactory viewing experience for different output objects while using the same output parameters.
In order to achieve the above object, the present invention provides an output effect adjusting method, including the steps of:
acquiring an output effect label corresponding to an output object;
acquiring a target output parameter matched with the output effect label;
and outputting the output object according to the target output parameter.
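The three steps above can be sketched in a minimal illustration. All names here (the tag field, the mapping table, the helper functions) are hypothetical, not taken from the patent text:

```python
# Hypothetical mapping from output effect tags to output parameters.
TAG_TO_PARAMS = {
    "movie": {"contrast": 55, "color_temp": "warm"},
    "drama": {"contrast": 45, "color_temp": "standard"},
    "rock":  {"equalizer": "rock", "bass_boost": True},
}

def get_effect_tag(output_object):
    """Step 1: obtain the output effect tag of the output object."""
    return output_object.get("effect_tag")

def get_target_params(tag):
    """Step 2: obtain the target output parameters matching the tag."""
    return TAG_TO_PARAMS.get(tag, {})

def output_with_params(output_object, params):
    """Step 3: output the object according to the target parameters."""
    return {"object": output_object["name"], "params": params}

video = {"name": "some_movie.mp4", "effect_tag": "movie"}
result = output_with_params(video, get_target_params(get_effect_tag(video)))
```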
Optionally, the method is applied to a client, the client establishes a communication connection with a server, and the step of obtaining an output effect tag corresponding to an output object includes:
sending a data request aiming at the output object to the server, so that the server can obtain the description data of the output object from a database according to the data request and return the description data;
and analyzing the received description data to obtain an output effect label.
Optionally, the step of obtaining the target output parameter matched with the output effect tag includes:
and searching a target output parameter matched with the output effect label from a preset mapping relation table, wherein the preset mapping relation table comprises output parameters respectively corresponding to various output effect labels.
Optionally, before the step of obtaining the output effect tag corresponding to the output object, the method further includes:
receiving a new mapping entry sent by the server, wherein the new mapping entry comprises a new type of output effect label and an output parameter corresponding to the new type of output effect label;
and adding the new mapping entry to the preset mapping relation table.
Optionally, when the output object is a video, the step of obtaining an output effect tag corresponding to the output object includes:
extracting picture frames from the video;
and carrying out image analysis on the picture frame to obtain an analysis result, and generating an output effect label of the video according to the analysis result.
Optionally, the step of performing image analysis on the picture frame to obtain an analysis result, and generating an output effect tag of the video according to the analysis result includes:
analyzing the image brightness of the image frame to obtain the brightness value of the image frame;
and selecting the brightness label matched with the brightness value from the alternative brightness labels as an output effect label of the video.
Optionally, the step of outputting the output object according to the target output parameter includes:
and calling a parameter adjusting interface to adjust the output parameters of the output equipment to the target output parameters so as to output the output object in the output equipment by using the target output parameters.
To achieve the above object, the present invention also provides an output effect adjusting apparatus, comprising:
the first acquisition module is used for acquiring an output effect label corresponding to an output object;
the second acquisition module is used for acquiring target output parameters matched with the output effect labels;
an output module for outputting the output object according to the target output parameter.
To achieve the above object, the present invention also provides an output effect adjustment apparatus including: a memory, a processor and an output effect adjustment program stored on the memory and executable on the processor, the output effect adjustment program when executed by the processor implementing the steps of the output effect adjustment method as described above.
Further, to achieve the above object, the present invention also proposes a computer-readable storage medium having stored thereon an output effect adjustment program which, when executed by a processor, implements the steps of the output effect adjustment method as described above.
According to the method and device, the output effect tag corresponding to the output object is obtained, the target output parameters matched with the tag are obtained, and the output object is output according to those parameters. Output objects of different kinds and styles are thus output with different output parameters, so that each kind of output object can achieve its best output effect, improving the viewing experience of users. In addition, the current approach of adjusting the output effect by analyzing the output object in real time with an artificial intelligence algorithm places high computing-power requirements on hardware, which an ordinary smartphone client cannot meet, so that approach cannot be applied there. Compared with that scheme, the present invention labels the output object in advance, so that when the output object needs to be output, only the corresponding output effect tag needs to be obtained, the output parameters are determined according to the tag, and the object is output according to those parameters. A better output effect can thus be achieved without extra requirements on hardware computing power and without extra hardware cost, giving the scheme a wider application range.
Drawings
FIG. 1 is a schematic diagram of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a first embodiment of an output effect adjustment method according to the present invention;
FIG. 3 is a schematic view of a video display effect adjustment process according to various embodiments of the present invention;
FIG. 4 is a functional block diagram of an output effect adjustment apparatus according to a preferred embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, fig. 1 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present invention.
It should be noted that the output effect adjusting device in the embodiment of the present invention may be a smart phone, a personal computer, a server, and the like, and the device may be deployed in a robot, which is not limited herein.
As shown in fig. 1, the output effect adjusting apparatus may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, a communication bus 1002. Wherein a communication bus 1002 is used to enable connective communication between these components. The user interface 1003 may include a Display screen (Display), an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the configuration of the apparatus shown in fig. 1 does not constitute a limitation of the output effect adjustment apparatus and may include more or fewer components than those shown, or some components in combination, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and an output effect adjustment program. The operating system is a program that manages and controls the hardware and software resources of the device, supporting the execution of the output effect adjustment program as well as other software or programs. In the device shown in fig. 1, the user interface 1003 is mainly used for data communication with a client; the network interface 1004 is mainly used for establishing communication connection with a server; and the processor 1001 may be configured to call the output effect adjustment program stored in the memory 1005 and perform the following operations:
acquiring an output effect label corresponding to an output object;
acquiring a target output parameter matched with the output effect label;
and outputting the output object according to the target output parameter.
Further, the method is applied to a client, the client establishes communication connection with a server, and the step of obtaining an output effect tag corresponding to an output object includes:
sending a data request aiming at the output object to the server, so that the server can obtain the description data of the output object from a database according to the data request and return the description data;
and analyzing the received description data to obtain an output effect label.
Further, the step of obtaining the target output parameter matching with the output effect tag comprises:
and searching a target output parameter matched with the output effect label from a preset mapping relation table, wherein the preset mapping relation table comprises output parameters respectively corresponding to various output effect labels.
Further, before the step of obtaining the output effect tag corresponding to the output object, the processor 1001 may be further configured to call the output effect adjustment program stored in the memory 1005, and perform the following operations:
receiving a new mapping entry sent by the server, wherein the new mapping entry comprises a new type of output effect label and an output parameter corresponding to the new type of output effect label;
and adding the new mapping entry to the preset mapping relation table.
Further, when the output object is a video, the step of obtaining an output effect tag corresponding to the output object includes:
extracting picture frames from the video;
and carrying out image analysis on the picture frame to obtain an analysis result, and generating an output effect label of the video according to the analysis result.
Further, the step of analyzing the image of the picture frame to obtain an analysis result, and generating an output effect tag of the video according to the analysis result includes:
analyzing the image brightness of the image frame to obtain the brightness value of the image frame;
and selecting the brightness label matched with the brightness value from the alternative brightness labels as an output effect label of the video.
Further, the step of outputting the output object according to the target output parameter includes:
and calling a parameter adjusting interface to adjust the output parameters of the output equipment to the target output parameters so as to output the output object in the output equipment by using the target output parameters.
Based on the above structure, various embodiments of the output effect adjustment method are proposed.
Referring to fig. 2, fig. 2 is a flowchart illustrating an output effect adjusting method according to a first embodiment of the present invention.
While a logical order is shown in the flow chart, in some cases, the steps shown or described may be performed in an order different than that shown or described herein. The execution subject of each embodiment of the output effect adjustment method of the present invention may be a device such as a smart phone, a personal computer, and a server, and for convenience of description, the following embodiments use a client as the execution subject to be explained. In this embodiment, the output effect adjusting method includes:
step S10, acquiring an output effect label corresponding to the output object;
in the present embodiment, the output object may be video, audio, or image; outputting the video and the image means outputting the video and the image to a display device connected with the client, and displaying the content of the video and the image in the display device; outputting the audio refers to outputting the audio to a sound output device such as a sound box, a loudspeaker or an earphone connected to the client, and playing the audio content in the sound output device.
The output object may be stored locally by the client, or may be an output object such as a network video or an image acquired from the server.
When the client needs to output the output object, the client can first obtain the output effect tag corresponding to the output object. Various output effect tags are predefined, and the output effects that should be presented for output objects of different kinds of tags are different. It should be noted that, for video and image, the output effect label may be referred to as a display effect label, and for audio, may be referred to as a sound effect label.
For example, the display effects for a TV drama and a movie should differ, so for output objects such as videos, the two display effect labels "drama" and "movie" can be defined; some video pictures are brighter and some are darker, so their display effects should also differ, and a "bright" display effect label can likewise be defined for videos. Similarly, the sound effects for lyrical music and rock music may differ, so for output objects such as audio, sound effect labels related to the music type, such as "lyrical" and "rock", may be defined.
The output effect label can be attached to the output object in advance by manual labeling or by automatic labeling. Automatic labeling can be performed by analyzing the file name corresponding to the output object, or by analyzing the content data of the output object and labeling according to the analysis result. For example, when the output object is a video, the file name corresponding to the video generally identifies the type of the video, such as whether it is a TV drama or a movie; by text matching on the file name, it can be determined whether to attach the "drama" display effect label or the "movie" display effect label. For another example, when the output object is audio, the audio waveform of the audio data may be analyzed to determine the music type, and the sound effect label corresponding to that music type is then attached: the amplitude and frequency of the audio waveform may be analyzed, and if both reach certain thresholds, the audio may be judged to belong to the rock type and a "rock" sound effect label attached.
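The automatic labeling described above can be sketched as follows. The filename keywords and the amplitude/frequency thresholds are illustrative assumptions, not values from the patent:

```python
def label_video_by_filename(filename):
    """Match keywords in the file name to a display effect label."""
    name = filename.lower()
    if "episode" in name or "s01e" in name:
        return "drama"
    if "movie" in name or "film" in name:
        return "movie"
    return None  # no label could be determined from the name

def label_audio_by_waveform(mean_amplitude, dominant_freq_hz,
                            amp_threshold=0.6, freq_threshold=150.0):
    """Label audio 'rock' when both amplitude and frequency reach
    their (hypothetical) thresholds, otherwise 'lyrical'."""
    if mean_amplitude >= amp_threshold and dominant_freq_hz >= freq_threshold:
        return "rock"
    return "lyrical"
```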
It should be noted that multiple output effect labels can be attached to one output object; for example, the two output effect labels "movie" and "bright" can both be attached to one video.
Step S20, acquiring target output parameters matched with the output effect label;
Output parameters corresponding to the various output effect labels are preset in the client. The output parameters can be set according to experience, so that when an output object carrying a given output effect label is output with those parameters, the best output effect is obtained. Moreover, because the output devices corresponding to different clients may differ in software and hardware configuration, the same output object may require different display parameters on different output devices to present the best output effect; thus the output parameters set in different clients may differ. The output parameter corresponding to an output effect label can be a set of specific parameter values or a predetermined output mode; for example, for an output object such as a video, the output parameter may be specific values of the display's individual display parameters, or one of several display modes of the display.
It should be noted that, when one output object can correspond to multiple output effect tags, output parameters corresponding to different output effect tag combinations can be preset in the client.
The client can determine the output parameters matched with the output effect labels of the output objects according to the preset corresponding relation, and the output parameters are called target output parameters.
Step S30, outputting the output object according to the target output parameter.
And the client outputs the output object according to the target output parameter. Specifically, the client adjusts the output parameter of the output device to the target output parameter, and then outputs the output object to the output device, so that the output device outputs the output object. When the output object is a video or an image, the client adjusts the display parameters of the display equipment to the display parameter values or the display modes corresponding to the target output parameters, and outputs the video data or the image data to the display equipment so as to enable the display equipment to display the video or the image. When the output object is audio, the client adjusts the sound parameters of the sound output equipment to the sound parameter values or the sound modes corresponding to the target output parameters, and outputs the audio data to the sound output equipment so that the sound output equipment can output the audio.
Further, the step S30 includes:
step S301, calling a parameter adjusting interface to adjust the output parameter of the output device to the target output parameter, so that the output device outputs the output object according to the target output parameter.
In an embodiment, the client provides interfaces for adjusting the various output parameters, i.e. parameter adjustment interfaces. After the target output parameters are determined, the client may invoke the parameter adjustment interface to adjust the output parameters of the output device to the target output parameters, so that the output device outputs the output object with the target output parameters. It should be noted that if different output parameters have different interfaces, the client calls the interface corresponding to each target output parameter to implement the adjustment.
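Step S301 can be sketched as a dispatch from each target parameter to its adjustment interface. The interface registry and the device-state dictionary are hypothetical illustrations:

```python
PARAM_INTERFACES = {}  # parameter name -> setter function (hypothetical)

def register_interface(name, setter):
    """Register a parameter adjustment interface for one parameter."""
    PARAM_INTERFACES[name] = setter

def apply_target_params(target_params, device_state):
    """Call the interface matching each target parameter, as in S301."""
    for name, value in target_params.items():
        setter = PARAM_INTERFACES.get(name)
        if setter is None:
            continue  # no interface for this parameter; leave it unchanged
        setter(device_state, value)
    return device_state

# Usage: register a brightness interface, then push target parameters.
register_interface("brightness", lambda dev, v: dev.__setitem__("brightness", v))
state = apply_target_params({"brightness": 70, "unknown": 1}, {"brightness": 50})
```

Parameters without a registered interface are skipped rather than raising, mirroring the note that each parameter may have its own interface.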
In this embodiment, the output effect tag corresponding to the output object is obtained, the target output parameters matched with the tag are obtained, and the output object is output according to those parameters, so that output objects of different kinds and styles are output with different output parameters; each kind of output object can thus achieve its best output effect, improving the viewing experience of users. In addition, the current approach of adjusting the output effect by analyzing the output object in real time with an artificial intelligence algorithm places high computing-power requirements on hardware, which an ordinary smartphone client cannot meet, so that approach cannot be applied there. Compared with that scheme, this embodiment labels the output object in advance, so that when the output object needs to be output, only the corresponding output effect tag needs to be obtained, the output parameters are determined according to the tag, and the object is output according to those parameters. A better output effect can thus be achieved without extra requirements on hardware computing power, without extra hardware cost, and with a wider application range.
Further, a second embodiment of the output effect adjustment method of the present invention is proposed based on the above-described first embodiment, and in this embodiment, the step S10 includes:
step S101, sending a data request aiming at the output object to the server, so that the server can obtain the description data of the output object from a database according to the data request and return the description data;
in this embodiment, each resource (video, image, or audio) in the server may be marked with an output effect tag in advance, and the output effect tag is added to the description data of the resource. Specifically, each resource in the database of the current server has corresponding description data, such as description data describing the type of the resource, the data size of the resource, and the like; in this embodiment, an output effect tag may be added to the existing description data, for example, the tag may be added by using add function of the database. The method can be manually operated in the database to label each resource, and can also be automatically operated in the database to label the resources by analyzing the resources by the server.
When the client requests the resource from the server, the server returns the resource and the description data with the output effect label to the client; or after the client acquires the resource, when the resource is taken as an output object, the client sends a data request for requesting description data to the server independently aiming at the output object, and the server returns the description data to the client.
And step S102, analyzing the received description data to obtain an output effect label.
After receiving the description data corresponding to the output object, the client parses the description data to obtain the output effect label. The description data may use a common data format, for example JSON (JavaScript Object Notation, a lightweight data-interchange format), and it may be parsed in an existing manner, which is not described in detail here.
Fig. 3 is a schematic view of the video display effect adjustment process. The client can establish a connection with the server through the universal HTTP (Hypertext Transfer Protocol) network protocol; the client requests background data, the server delivers the video and its description data to the client, the client parses the description data to obtain the content of the display effect label field, determines the target display parameters according to that content, and calls the corresponding display parameter adjustment interface to adjust the display parameters so as to display the video on the display device with the target display parameters.
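The client-side parsing of the description data can be sketched as follows. The field name `effect_label` and the payload shape are assumptions; the patent does not specify the schema:

```python
import json

def parse_effect_label(description_json):
    """Extract the output effect label field from JSON description data."""
    data = json.loads(description_json)
    return data.get("effect_label")  # None if the field is absent

# Usage: a hypothetical description-data payload returned by the server.
payload = '{"title": "demo", "size_mb": 120, "effect_label": "movie"}'
label = parse_effect_label(payload)
```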
In this embodiment, the output effect tags are uniformly marked on the resources in the server and added to the existing resource description data, so the client only needs to perform conventional parsing of the description data of the output object to obtain the output effect tag, and can then match suitable output parameters according to the tag, thereby outputting the output object in the client with the best output effect. No extra requirement is placed on the client's hardware computing power, so the hardware cost is reduced and the application range is wider.
Further, the step S20 includes:
step S201, searching a preset mapping relation table for a target output parameter matched with the output effect tag, where the preset mapping relation table includes output parameters corresponding to various output effect tags.
In an embodiment, the mapping relationship table may be used to record output parameters corresponding to each type of output effect tag. After the client acquires the output effect label corresponding to the output object, the client searches a target output parameter matched with the output effect label in the mapping relation table.
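The lookup of step S201 can be sketched with a mapping table keyed by tag sets, which also covers the multi-tag combinations mentioned earlier. All entries and parameter names are hypothetical:

```python
# Preset mapping table: frozensets of tags -> output parameters.
MAPPING_TABLE = {
    frozenset({"movie"}): {"contrast": 55},
    frozenset({"movie", "bright"}): {"contrast": 55, "backlight": 30},
}

def lookup_target_params(tags):
    """Return parameters for the exact tag combination, falling back
    to the first single-tag match, then to an empty dict."""
    key = frozenset(tags)
    if key in MAPPING_TABLE:
        return MAPPING_TABLE[key]
    for tag in tags:
        single = frozenset({tag})
        if single in MAPPING_TABLE:
            return MAPPING_TABLE[single]
    return {}
```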
Further, the method further comprises:
step S40, receiving a new mapping entry sent by the server, where the new mapping entry includes a new type of output effect tag and an output parameter corresponding to the new type of output effect tag;
in an embodiment, if a new type of output effect tag is added to the server, for example, for an output object such as a video, a display effect tag of "soft" is newly added, an output parameter corresponding to the output effect tag may also be added to the server, and the server sends the output effect tag and the output parameter to the client as a new mapping entry. It should be noted that, since different clients may set different output parameters for different clients due to different hardware configurations of the output device, the new mapping entries sent by the server to different clients are different.
Further, the server may send a new mapping entry corresponding to the new output effect tag to the client when the new output effect tag is included in the output effect tags corresponding to the resources requested by the client.
And step S50, adding the new mapping entry to the preset mapping relation table.
The client adds the received new mapping entry to the mapping relation table, so that when a subsequent output object carries the new output effect tag, the corresponding output parameters can be found in the table, enabling adjustment of the output effect for the new output object.
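Steps S40 and S50 amount to merging a server-pushed entry into the local table. The `{tag: params}` entry format is an assumption for illustration:

```python
def add_mapping_entry(mapping_table, new_entry):
    """Merge a server-pushed {tag: params} entry into the preset table."""
    for tag, params in new_entry.items():
        mapping_table[tag] = params  # new tag, or overwrite an update
    return mapping_table

# Usage: the server pushes a new "soft" tag with its output parameters.
table = {"movie": {"contrast": 55}}
add_mapping_entry(table, {"soft": {"sharpness": 20}})
```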
In an embodiment, when an output effect tag is newly added on the server, the corresponding output parameters may also be added synchronously on the client. Compared with that approach, having the server send the new mapping entry to the client saves manual development work on the client and avoids a client version update.
Further, based on the first and/or second embodiments, a third embodiment of the output effect adjustment method of the present invention is provided, in this embodiment, the step S10 includes:
step S103, extracting picture frames from the video;
and step S104, carrying out image analysis on the picture frame to obtain an analysis result, and generating an output effect label of the video according to the analysis result.
In this embodiment, when the output object is a video, the client may extract picture frames from the video. The extraction may take one frame at random, take multiple frames at random, or preset the number of frames to extract and then determine the sampling interval from the total length of the video, that is, extract frames from the video at equal intervals.
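The equal-interval sampling variant can be sketched as follows; `frame_indices` is a hypothetical helper that turns a preset sample count and the video's total frame count into frame positions.

```python
def frame_indices(total_frames, num_samples):
    """Positions of num_samples frames taken at equal intervals over the video."""
    if total_frames <= 0 or num_samples <= 0:
        return []
    if num_samples >= total_frames:
        # Fewer frames than requested samples: take every frame.
        return list(range(total_frames))
    step = total_frames / num_samples  # interval derived from the total length
    return [int(i * step) for i in range(num_samples)]
```

For a 100-frame video with 4 preset samples this yields frames 0, 25, 50, and 75.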
The client performs image analysis on the extracted picture frames to obtain an analysis result and generates the output effect tag of the video from that result. Depending on which display effect tags are predefined, different kinds of image analysis may be performed on the picture frames to decide whether to attach the corresponding display effect tag to the video. For example, when a "soft" display effect tag is predefined, the client may perform sharpness analysis on the picture frames to determine their sharpness values; when there are multiple picture frames, the average of the per-frame sharpness values may be used as the sharpness value of the video. The client then compares the video's sharpness value with a preset value: if it is smaller than the preset value, the video picture is judged to be soft and a "soft" display effect tag is generated for the video; otherwise no tag is attached. Alternatively, several softness labels corresponding to different degrees of softness may be predefined, each associated with a matching sharpness value; after the client obtains the video's sharpness value, it compares that value with the sharpness values of the softness labels, determines the matching softness label, and uses it as the display effect tag of the video.
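The multi-label softness variant above can be sketched as follows. The label names, their reference sharpness values, and the nearest-value matching rule are assumptions for illustration; the patent only requires that each softness label has an associated sharpness value to compare against.

```python
# Hypothetical softness labels, each paired with a reference sharpness value.
SOFTNESS_LABELS = {"very soft": 0.2, "soft": 0.4, "normal": 0.7}

def video_sharpness(frame_sharpness_values):
    """Average the per-frame sharpness values into one value for the video."""
    return sum(frame_sharpness_values) / len(frame_sharpness_values)

def match_softness_label(sharpness):
    """Pick the label whose reference sharpness is closest to the video's."""
    return min(SOFTNESS_LABELS,
               key=lambda lbl: abs(SOFTNESS_LABELS[lbl] - sharpness))
```

For instance, frames with sharpness 0.25 and 0.75 average to 0.5, which sits closest to the "soft" reference value under this assumed matching rule.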
Further, the step S104 includes:
step S1041, analyzing the image brightness of the picture frame to obtain the brightness value of the picture frame;
In one embodiment, different brightness labels may be predefined, with a different brightness value set for each label. The client may perform image brightness analysis on the extracted picture frames to determine their brightness values; when there are multiple picture frames, the average of the per-frame brightness values may be used as the brightness value of the video.
Step S1042, selecting a brightness label matching the brightness value from the alternative brightness labels as an output effect label of the video.
Each brightness label serves as a candidate brightness label, and the brightness label matching the brightness value of the video is selected from the candidates as the output effect tag (display effect tag) of the video.
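Steps S1041 and S1042 can be sketched as follows. The BT.601 luma weights for computing per-frame brightness, the candidate label names and their set brightness values, and the closest-value selection rule are all assumptions, since the patent does not fix a particular brightness formula or matching rule.

```python
def frame_brightness(pixels):
    """Mean luma of a frame, given (R, G, B) tuples (BT.601 weights assumed)."""
    lumas = [0.299 * r + 0.587 * g + 0.114 * b for r, g, b in pixels]
    return sum(lumas) / len(lumas)

# Hypothetical candidate brightness labels with their set brightness values.
CANDIDATE_BRIGHTNESS_LABELS = {"dark": 50, "medium": 128, "bright": 210}

def brightness_label(brightness_value):
    """Select the candidate label whose set brightness value is closest."""
    return min(CANDIDATE_BRIGHTNESS_LABELS,
               key=lambda lbl: abs(CANDIDATE_BRIGHTNESS_LABELS[lbl]
                                   - brightness_value))
```

A mostly dark frame (mean luma around 40) would match the "dark" candidate and that label would become the video's output effect tag.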
In this embodiment, the client can automatically generate the output effect tag of a video by extracting picture frames from the video and performing image analysis on them, without manual tagging.
In another embodiment, the server may likewise generate the output effect tag of a video automatically by extracting picture frames and analyzing them.
In other embodiments, the client may first send a data request for a video to the server to obtain its output effect tag, and generate the tag in the automatic tagging manner only when no output effect tag for the video exists on the server, thereby adjusting the display effect of the video.
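The server-first fallback flow can be sketched as follows; the function and parameter names are hypothetical, and `server_lookup` is assumed to return `None` when the server stores no tag for the video.

```python
def get_output_effect_tag(video_id, server_lookup, local_analyzer):
    """Prefer the tag stored on the server; fall back to local image analysis."""
    tag = server_lookup(video_id)       # assumed: None when no tag exists
    if tag is None:
        tag = local_analyzer(video_id)  # the automatic tagging described above
    return tag
```

This keeps server-assigned tags authoritative while still producing a tag, and hence an adjusted display effect, for videos the server has not labeled.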
In addition, an embodiment of the present invention further provides an output effect adjusting apparatus, and referring to fig. 4, the apparatus includes:
the first obtaining module 10 is configured to obtain an output effect tag corresponding to an output object;
a second obtaining module 20, configured to obtain a target output parameter matched with the output effect tag;
an output module 30, configured to output the output object according to the target output parameter.
Further, the apparatus is deployed in a client, the client establishes a communication connection with a server, and the first obtaining module 10 includes:
the sending unit is used for sending a data request aiming at the output object to the server, so that the server can obtain the description data of the output object from a database according to the data request and return the description data;
and the analysis unit is used for analyzing the received description data to obtain an output effect label.
Further, the second obtaining module 20 includes:
and the searching unit is used for searching the target output parameters matched with the output effect labels from a preset mapping relation table, wherein the preset mapping relation table comprises output parameters respectively corresponding to various output effect labels.
Further, the apparatus further comprises:
a receiving module, configured to receive a new mapping entry sent by the server, where the new mapping entry includes a new type of output effect tag and an output parameter corresponding to the new type of output effect tag;
and the adding module is used for adding the new mapping item to the preset mapping relation table.
Further, when the output object is a video, the first obtaining module 10 includes:
an extraction unit for extracting a picture frame from the video;
and the analysis unit is used for carrying out image analysis on the picture frame to obtain an analysis result and generating an output effect label of the video according to the analysis result.
Further, the analysis unit includes:
the analysis subunit is used for carrying out image brightness analysis on the picture frame to obtain a brightness value of the picture frame;
and the determining subunit is used for selecting the brightness label matched with the brightness value from the alternative brightness labels as the output effect label of the video.
Further, the output module 30 includes:
and the calling unit is used for calling a parameter adjusting interface to adjust the output parameters of the output equipment to the target output parameters so as to output the output object in the output equipment according to the target output parameters.
The specific implementation of the output effect adjusting apparatus of the present invention is basically the same as the embodiments of the output effect adjusting method, and is not described herein again.
Furthermore, an embodiment of the present invention provides a computer-readable storage medium on which an output effect adjustment program is stored; when the output effect adjustment program is executed by a processor, the steps of the output effect adjustment method described above are implemented.
The embodiments of the output effect adjustment device and the computer-readable storage medium of the present invention can refer to the embodiments of the output effect adjustment method of the present invention, and are not described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. An output effect adjustment method, characterized in that the method comprises:
acquiring an output effect label corresponding to an output object;
acquiring a target output parameter matched with the output effect label;
and outputting the output object according to the target output parameter.
2. The output effect adjustment method according to claim 1, wherein the method is applied to a client, the client establishes a communication connection with a server, and the step of obtaining the output effect tag corresponding to the output object includes:
sending a data request aiming at the output object to the server, so that the server can obtain the description data of the output object from a database according to the data request and return the description data;
and analyzing the received description data to obtain an output effect label.
3. The output effect adjustment method of claim 2, wherein the step of obtaining the target output parameter matching the output effect tag comprises:
and searching a target output parameter matched with the output effect label from a preset mapping relation table, wherein the preset mapping relation table comprises output parameters respectively corresponding to various output effect labels.
4. The output effect adjustment method according to claim 3, wherein the step of obtaining the output effect label corresponding to the output object is preceded by:
receiving a new mapping entry sent by the server, wherein the new mapping entry comprises a new type of output effect label and an output parameter corresponding to the new type of output effect label;
and adding the new mapping entry to the preset mapping relation table.
5. The output effect adjustment method of claim 1, wherein when the output object is a video, the step of acquiring the output effect tag corresponding to the output object comprises:
extracting picture frames from the video;
and carrying out image analysis on the picture frame to obtain an analysis result, and generating an output effect label of the video according to the analysis result.
6. The output effect adjustment method of claim 5, wherein the step of performing image analysis on the picture frame to obtain an analysis result, and generating the output effect label of the video according to the analysis result comprises:
analyzing the image brightness of the image frame to obtain the brightness value of the image frame;
and selecting the brightness label matched with the brightness value from the alternative brightness labels as an output effect label of the video.
7. The output effect adjustment method of any one of claims 1 to 6, wherein the step of outputting the output object in accordance with the target output parameter includes:
and calling a parameter adjusting interface to adjust the output parameters of the output equipment to the target output parameters so as to output the output object in the output equipment by using the target output parameters.
8. An output effect adjustment apparatus, characterized in that the apparatus comprises:
the first acquisition module is used for acquiring an output effect label corresponding to an output object;
the second acquisition module is used for acquiring target output parameters matched with the output effect labels;
and the output module is used for outputting the output object according to the target output parameter.
9. An output effect adjustment device, characterized by comprising: a memory, a processor, and an output effect adjustment program stored on the memory and executable on the processor, wherein the output effect adjustment program, when executed by the processor, implements the steps of the output effect adjustment method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that an output effect adjustment program is stored thereon, which when executed by a processor implements the steps of the output effect adjustment method according to any one of claims 1 to 7.
CN202011109704.4A 2020-10-16 2020-10-16 Output effect adjusting method, device, equipment and computer readable storage medium Pending CN112235633A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011109704.4A CN112235633A (en) 2020-10-16 2020-10-16 Output effect adjusting method, device, equipment and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN112235633A true CN112235633A (en) 2021-01-15

Family

ID=74118396

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011109704.4A Pending CN112235633A (en) 2020-10-16 2020-10-16 Output effect adjusting method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112235633A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114339301A (en) * 2021-12-03 2022-04-12 深圳感臻智能股份有限公司 Dynamic sound effect switching method and system based on different scenes

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103731722A (en) * 2013-11-27 2014-04-16 乐视致新电子科技(天津)有限公司 Method and device for adjusting sound effect in self-adaption mode
CN106126168A (en) * 2016-06-16 2016-11-16 广东欧珀移动通信有限公司 A kind of sound effect treatment method and device
CN108462895A (en) * 2017-02-21 2018-08-28 阿里巴巴集团控股有限公司 Sound effect treatment method, device and machine readable media
CN111131889A (en) * 2019-12-31 2020-05-08 深圳创维-Rgb电子有限公司 Method and system for adaptively adjusting images and sounds in scene and readable storage medium



Similar Documents

Publication Publication Date Title
CN110933490B (en) Automatic adjustment method for picture quality and tone quality, smart television and storage medium
US20150039993A1 (en) Display device and display method
WO2007082442A1 (en) An electronic program guide interface customizing method, server, set top box and system
CN111459594A (en) Display interface processing method, device and system
US10477265B2 (en) Method and apparatus for requesting data, and method and apparatus for obtaining data
CN112911318B (en) Live broadcast room background replacement method and device, electronic equipment and storage medium
CN111381749A (en) Image display and processing method, device, equipment and storage medium
CN101115180B (en) Electronic program menu system and functional module dynamic load operating method
KR20020038525A (en) Method for delivering stored images, recording medium and apparatus for delivering stored images
CN113190152A (en) Method and device for switching application program theme
CN112235633A (en) Output effect adjusting method, device, equipment and computer readable storage medium
CN111240776A (en) Dynamic wallpaper setting method and device, storage medium and electronic equipment
CN107566860B (en) Video EPG acquisition and playing method, cloud platform server, television and system
US20180192121A1 (en) System and methods thereof for displaying video content
WO2023071749A1 (en) Parameter adjustment method and apparatus, and storage medium and electronic apparatus
CN111757187A (en) Multi-language subtitle display method, device, terminal equipment and storage medium
WO2016035061A1 (en) A system for preloading imagized video clips in a web-page
CN102625180A (en) Television apparatus and method for searching subtitles of television program
CN108024032A (en) Image enhancement processing method, TV, server and computer-readable recording medium
CN113779446A (en) Access request response method, device, system, equipment and storage medium
CN108235109B (en) Table information transmission method, smart television and computer readable storage medium
CN113014848A (en) Video call method, device and computer storage medium
KR101432309B1 (en) Web viewer server and control method thereof, system for providing markup page comprising the web viewer server and control method thereof
CN111008062A (en) Interface setting method, device, equipment and medium for application program APP
CN109005420B (en) Video frame playing and acquiring method, television, cloud platform server and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210115