CN112015276A - Man-machine interaction method comprising display effect adjustment and volume control - Google Patents

Man-machine interaction method comprising display effect adjustment and volume control

Info

Publication number
CN112015276A
Authority
CN
China
Prior art keywords
data
volume
information
display
adjustment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010938778.2A
Other languages
Chinese (zh)
Other versions
CN112015276B (en)
Inventor
梁小健
肖美翟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Nanfang Yixin Computer Information System Co ltd
Original Assignee
Shenzhen Nanfang Yixin Computer Information System Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Nanfang Yixin Computer Information System Co ltd
Priority to CN202010938778.2A
Publication of CN112015276A
Application granted
Publication of CN112015276B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223Execution procedure of a spoken command
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02BCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40Control techniques providing energy savings, e.g. smart controller or presence detection

Abstract

The invention discloses a man-machine interaction method comprising display effect adjustment and volume control, which specifically comprises the following steps. Step one: collecting, by a collection unit, interaction information related to the adjustment, the interaction information including display information, sound effect information and voice information; transmitting the sound effect information to a volume calculation unit, the display information to a display analysis unit, and the voice information to a recognition unit. Step two: performing, by the recognition unit, a recognition operation on character information acquired from a database together with the voice information to obtain a voice adjustment command and a display adjustment command. By providing the display analysis unit and the volume calculation unit, the invention analyzes and calculates the corresponding adjustment values, which increases the accuracy of data calculation, avoids the deviation of manual adjustment, saves the time consumed by data analysis, and improves working efficiency.

Description

Man-machine interaction method comprising display effect adjustment and volume control
Technical Field
The invention relates to the technical field of human-computer interaction, in particular to a human-computer interaction method comprising display effect adjustment and volume control.
Background
Human-computer interaction is the study of the interaction between a system and its users. The system may be any of various machines, or a computerized system and its software. The human-computer interface generally refers to the part of the system that is visible to the user; the user communicates with and operates the system through this interface, for example the play button of a radio, the instrument panel of an aircraft, or the control room of a power plant. The design of the human-computer interface must reflect the user's understanding of the system, that is, the usability or user-friendliness of the system.
At present, human-computer interaction products on the market are adjusted manually. Adjustment errors occur easily, repeated adjustment is required and time is wasted, so working efficiency is low.
Disclosure of Invention
The invention aims to provide a man-machine interaction method comprising display effect adjustment and volume control. Through the provision of a recognition unit, voice recognition is performed on the collected speech and the recognized data are labeled, which saves the time consumed by voice recognition and improves working efficiency. Through the provision of a display analysis unit and a volume calculation unit, data analysis is performed on the acquired display information and sound effect information respectively, so that the corresponding adjustment values are analyzed and calculated; this increases the accuracy of data calculation, avoids the deviation caused by manual adjustment, saves the time consumed by data analysis, and improves working efficiency. Through the provision of an execution unit, the display and the volume are adjusted automatically, which saves the time consumed by manual adjustment and further improves working efficiency.
The purpose of the invention can be realized by the following technical scheme: a man-machine interaction method comprising display effect adjustment and volume control, which specifically comprises the following steps:
Step one: collecting, by a collection unit, interaction information related to the adjustment, the interaction information including display information, sound effect information and voice information; transmitting the sound effect information to a volume calculation unit, the display information to a display analysis unit, and the voice information to a recognition unit;
Step two: acquiring, by the recognition unit, character information from a database, performing a recognition operation on the character information together with the voice information to obtain a voice adjustment command and a display adjustment command, and transmitting the voice adjustment command and the display adjustment command to the volume calculation unit and the display analysis unit, respectively;
Step three: monitoring, by a monitoring unit, the environment information around the device in real time and transmitting the environment information to the display analysis unit and the volume calculation unit respectively, the environment information around the device referring to the environment within a set distance of the device;
Step four: performing, by the display analysis unit, a display adjustment operation on the display adjustment command, the display information and the environment information to obtain a brightness adjustment difference and a frame-number adjustment difference, and transmitting the brightness adjustment difference and the frame-number adjustment difference to an execution unit;
Step five: receiving the environment information by the volume calculation unit, performing a calculation operation on the environment information together with the voice adjustment command and the voice information to obtain an adjusted volume and an adjusted decibel value, and transmitting the adjusted volume and the adjusted decibel value to the execution unit;
Step six: receiving, by the execution unit, the adjusted volume, the adjusted decibel value, the brightness adjustment difference and the frame-number adjustment difference, performing the corresponding adjustments, generating a completion signal once the adjustments are finished, and transmitting the completion signal to a display screen, the display screen being used to display the completion signal.
As a further improvement of the invention, the specific operation process of the recognition operation is as follows:
K1: acquiring the character information, designating each word formed from several characters in the character information as character-group data and labeling the character groups as ZZi, i = 1, 2, 3, ..., n1; designating the operation corresponding to each character group as an adjustment command and labeling the adjustment commands as TMi, i = 1, 2, 3, ..., n1; labeling the voice adjustment commands among the adjustment commands as YTi, i = 1, 2, 3, ..., n1; and labeling the display adjustment commands among the adjustment commands as XTi, i = 1, 2, 3, ..., n1;
K2: acquiring the voice information, converting the speech into text by voice recognition, designating each character as character data and labeling the character data as ZFl, l = 1, 2, 3, ..., n2;
K3: acquiring the character data from K2 and the character-group data and adjustment commands from K1, and matching the character data against the character groups; when the character data matches a character group, automatically extracting the corresponding adjustment command; when no character group is matched, extracting no command;
K4: acquiring and identifying the adjustment command from K3.
As a further improvement of the invention, the specific operation process of the display adjustment operation is as follows:
H1: acquiring the display adjustment command and extracting the corresponding display information and environment information according to the display adjustment command;
H2: acquiring the display information, designating the display brightness in the display information as brightness data and labeling the brightness data as LDi, i = 1, 2, 3, ..., n1; designating the displayed frame number in the display information as frame-number data and labeling the frame-number data as ZSi, i = 1, 2, 3, ..., n1; and designating the illumination difference in the display information as light-difference data and labeling the light-difference data as GCi, i = 1, 2, 3, ..., n1;
H3: acquiring the environment information, designating the ambient brightness in the environment as light-brightness data and labeling the light-brightness data as GLi, i = 1, 2, 3, ..., n1;
H4: substituting the light-brightness data and the light-difference data into the calculation formula: required display value = light-brightness data - light-difference data, to obtain the required display value; substituting the required display value into the calculation formula to obtain the actual brightness data and the actual frame-number data; and performing a difference calculation on the actual brightness data and the actual frame-number data, thereby calculating the brightness adjustment difference and the frame-number adjustment difference.
As a further improvement of the invention, the specific operation process of the calculation operation is as follows:
G1: acquiring the voice adjustment command and extracting the corresponding environment information and voice information according to the voice adjustment command;
G2: acquiring the voice information, designating the volume of the voice information as volume data and labeling the volume data as YLi, i = 1, 2, 3, ..., n1; designating the decibel level of the voice information as decibel data and labeling the decibel data as FBi, i = 1, 2, 3, ..., n1;
G3: acquiring the environment information, designating the type of sound in the environment as type data and labeling the type data as ZLi, i = 1, 2, 3, ..., n1; designating the volume of the environment as environment volume data and labeling the environment volume data as HYi, i = 1, 2, 3, ..., n1;
G4: acquiring the type data, extracting the environment volume data for each type, setting a safe-volume preset value, and comparing the safe-volume preset value with the environment volume data, specifically: when the safe-volume preset value is greater than the environment volume data, judging that the external volume is normal and generating a normal signal; when the safe-volume preset value is less than or equal to the environment volume data, judging that the volume is high and generating a high-volume signal;
G5: acquiring the high-volume signals, counting the number of times they occur and designating the count as frequency data, setting a safe-frequency preset value, and comparing the safe-frequency preset value with the frequency data, specifically: when the frequency data is greater than or equal to the safe-frequency preset value, judging that the environment is noisy and generating an abnormal signal; when the frequency data is less than the safe-frequency preset value, judging that the environment is quiet and generating a normal signal;
G6: acquiring the normal signals and abnormal signals and identifying them; when an abnormal signal is identified, automatically extracting the corresponding volume data and decibel data, setting a difference value for the predicted environment volume data, and substituting the difference value and the environment volume data into the calculation formula: predicted environment volume difference = predicted volume - environment volume data × environment volume conversion value, to obtain the predicted volume; then substituting the predicted volume, the volume data and the decibel data into the calculation formula: predicted volume = (volume data + adjusted volume) × volume adjustment deviation factor + (decibel data + adjusted decibel value) × decibel adjustment deviation factor / conversion correction factor, thereby calculating the adjusted volume and the adjusted decibel value; when a normal signal is identified, extracting no data.
The invention has the following beneficial effects:
(1) Interaction information related to the adjustment is collected by a collection unit, the interaction information including display information, sound effect information and voice information; the sound effect information is transmitted to a volume calculation unit, the display information to a display analysis unit, and the voice information to a recognition unit. The recognition unit acquires character information from a database and performs a recognition operation on it together with the voice information to obtain a voice adjustment command and a display adjustment command. By providing the recognition unit, voice recognition is performed on the collected speech and the recognized data are labeled, which saves the time consumed by voice recognition and improves working efficiency.
(2) A monitoring unit monitors the environment information around the device in real time and transmits the environment information to the display analysis unit and the volume calculation unit respectively, the environment information around the device referring to the environment within a set distance of the device. The display analysis unit performs a display adjustment operation on the display adjustment command, the display information and the environment information to obtain a brightness adjustment difference and a frame-number adjustment difference and transmits them to an execution unit. The volume calculation unit receives the environment information and performs a calculation operation on it together with the voice adjustment command and the voice information to obtain an adjusted volume and an adjusted decibel value. Because the display analysis unit and the volume calculation unit analyze the acquired display information and sound effect information separately, the corresponding adjustment values are analyzed and calculated, which increases the accuracy of data calculation, avoids the deviation caused by manual adjustment, saves the time consumed by data analysis, and improves working efficiency.
(3) The execution unit receives the adjusted volume, the adjusted decibel value, the brightness adjustment difference and the frame-number adjustment difference, performs the corresponding adjustments, generates a completion signal once the adjustments are finished, and transmits the completion signal to a display screen, which displays it. By providing the execution unit, the display and the volume are adjusted automatically, which saves the time consumed by manual adjustment and improves working efficiency.
Drawings
The invention will be further described with reference to the accompanying drawings.
FIG. 1 is a system block diagram of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to FIG. 1, the present invention provides a man-machine interaction method comprising display effect adjustment and volume control, which specifically comprises the following steps:
Step one: collecting, by a collection unit, interaction information related to the adjustment, the interaction information including display information, sound effect information and voice information; transmitting the sound effect information to a volume calculation unit, the display information to a display analysis unit, and the voice information to a recognition unit;
Step two: a database stores character information; the recognition unit acquires the character information from the database and performs a recognition operation on it together with the voice information, the specific operation process of the recognition operation being as follows:
K1: acquiring the character information, designating each word formed from several characters in the character information as character-group data and labeling the character groups as ZZi, i = 1, 2, 3, ..., n1; designating the operation corresponding to each character group as an adjustment command and labeling the adjustment commands as TMi, i = 1, 2, 3, ..., n1; labeling the voice adjustment commands among the adjustment commands as YTi, i = 1, 2, 3, ..., n1; and labeling the display adjustment commands among the adjustment commands as XTi, i = 1, 2, 3, ..., n1;
K2: acquiring the voice information, converting the speech into text by voice recognition, designating each character as character data and labeling the character data as ZFl, l = 1, 2, 3, ..., n2;
K3: acquiring the character data from K2 and the character-group data and adjustment commands from K1, and matching the character data against the character groups; when the character data matches a character group, automatically extracting the corresponding adjustment command; when no character group is matched, extracting no command;
K4: acquiring and identifying the adjustment command from K3; when a voice adjustment command is identified, transmitting it to the volume calculation unit, and when a display adjustment command is identified, transmitting it to the display analysis unit (an illustrative sketch of this matching logic is given below);
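The matching described in K1 to K4 can be pictured with the following minimal Python sketch. It is an illustration only, under the assumption that the stored character groups map to commands as a dictionary and that the speech has already been converted to text; the example phrases, the class name AdjustCommand and the function name recognize are not taken from the patent.

```python
# Illustrative sketch only: the patent does not specify data structures or APIs.
from dataclasses import dataclass

@dataclass
class AdjustCommand:          # TMi: an adjustment command tied to a character group (ZZi)
    kind: str                 # "voice" (YTi) or "display" (XTi)
    action: str               # the operation to execute

# K1: character groups (words) from the stored character information, each mapped to a command
COMMAND_TABLE = {
    "turn up volume":   AdjustCommand("voice",   "volume_up"),
    "turn down volume": AdjustCommand("voice",   "volume_down"),
    "brighter screen":  AdjustCommand("display", "brightness_up"),
    "dimmer screen":    AdjustCommand("display", "brightness_down"),
}

def recognize(speech_text: str):
    """K2-K4: match recognized character data (ZFl) against the character groups
    and route the extracted commands to the volume or display unit."""
    voice_cmds, display_cmds = [], []
    for phrase, cmd in COMMAND_TABLE.items():       # K3: compare character data with groups
        if phrase in speech_text:                   # group matched -> extract its command
            (voice_cmds if cmd.kind == "voice" else display_cmds).append(cmd)
    return voice_cmds, display_cmds                 # K4: forwarded to the respective units

# Example: recognize("please turn up volume") -> ([AdjustCommand('voice', 'volume_up')], [])
```

Here speech_text would come from an off-the-shelf speech-to-text step, which the patent leaves unspecified.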
Step three: monitoring, by a monitoring unit, the environment information around the device in real time and transmitting the environment information to the display analysis unit and the volume calculation unit respectively, the environment information around the device referring to the environment within a set distance of the device;
Step four: performing, by the display analysis unit, a display adjustment operation on the display adjustment command, the display information and the environment information, the specific operation process of the display adjustment operation being as follows:
H1: acquiring the display adjustment command and extracting the corresponding display information and environment information according to the display adjustment command;
H2: acquiring the display information, designating the display brightness in the display information as brightness data and labeling the brightness data as LDi, i = 1, 2, 3, ..., n1; designating the displayed frame number in the display information as frame-number data and labeling the frame-number data as ZSi, i = 1, 2, 3, ..., n1; and designating the illumination difference in the display information as light-difference data and labeling the light-difference data as GCi, i = 1, 2, 3, ..., n1;
H3: acquiring the environment information, designating the ambient brightness in the environment as light-brightness data and labeling the light-brightness data as GLi, i = 1, 2, 3, ..., n1;
H4: substituting the light-brightness data and the light-difference data into the calculation formula: required display value = light-brightness data - light-difference data, to obtain the required display value; substituting the required display value into the calculation formula to obtain the actual brightness data and the actual frame-number data; and performing a difference calculation on the actual brightness data and the actual frame-number data, thereby calculating the brightness adjustment difference and the frame-number adjustment difference;
H5: transmitting the brightness adjustment difference and the frame-number adjustment difference to the execution unit (an illustrative sketch of this calculation is given below);
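As a rough illustration of H1 to H5, the sketch below computes the two adjustment differences from the required-display-value formula. The patent does not say how the required display value converts into a target brightness and frame rate, so the scale factor, the threshold, the units and the function name display_adjustment are all assumptions.

```python
# Illustrative sketch only; the linear conversion and threshold below are assumed.
def display_adjustment(light_brightness: float,   # GLi: ambient brightness (e.g. lux)
                       light_difference: float,   # GCi: illumination difference
                       current_brightness: float, # LDi: current display brightness (%)
                       current_frames: float):    # ZSi: current frame rate
    # H4, first formula: required display value = light-brightness data - light-difference data
    required = light_brightness - light_difference

    # Assumed conversions from the required display value to target brightness / frame rate
    actual_brightness = min(100.0, max(0.0, required * 0.1))   # hypothetical scale factor
    actual_frames = 60.0 if required > 300 else 30.0           # hypothetical threshold

    # Difference calculation: how much the execution unit must change each setting
    brightness_delta = actual_brightness - current_brightness
    frames_delta = actual_frames - current_frames
    return brightness_delta, frames_delta

# Example: display_adjustment(500.0, 80.0, 35.0, 30.0) -> (7.0, 30.0)
```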
Step five: receiving the environment information by the volume calculation unit and performing a calculation operation on it together with the voice adjustment command and the voice information, the specific operation process of the calculation operation being as follows:
G1: acquiring the voice adjustment command and extracting the corresponding environment information and voice information according to the voice adjustment command;
G2: acquiring the voice information, designating the volume of the voice information as volume data and labeling the volume data as YLi, i = 1, 2, 3, ..., n1; designating the decibel level of the voice information as decibel data and labeling the decibel data as FBi, i = 1, 2, 3, ..., n1;
G3: acquiring the environment information, designating the type of sound in the environment as type data and labeling the type data as ZLi, i = 1, 2, 3, ..., n1; designating the volume of the environment as environment volume data and labeling the environment volume data as HYi, i = 1, 2, 3, ..., n1;
G4: acquiring the type data, extracting the environment volume data for each type, setting a safe-volume preset value, and comparing the safe-volume preset value with the environment volume data, specifically: when the safe-volume preset value is greater than the environment volume data, judging that the external volume is normal and generating a normal signal; when the safe-volume preset value is less than or equal to the environment volume data, judging that the volume is high and generating a high-volume signal;
G5: acquiring the high-volume signals, counting the number of times they occur and designating the count as frequency data, setting a safe-frequency preset value, and comparing the safe-frequency preset value with the frequency data, specifically: when the frequency data is greater than or equal to the safe-frequency preset value, judging that the environment is noisy and generating an abnormal signal; when the frequency data is less than the safe-frequency preset value, judging that the environment is quiet and generating a normal signal;
G6: acquiring the normal signals and abnormal signals and identifying them; when an abnormal signal is identified, automatically extracting the corresponding volume data and decibel data, setting a difference value for the predicted environment volume data, and substituting the difference value and the environment volume data into the calculation formula: predicted environment volume difference = predicted volume - environment volume data × environment volume conversion value, to obtain the predicted volume; then substituting the predicted volume, the volume data and the decibel data into the calculation formula: predicted volume = (volume data + adjusted volume) × volume adjustment deviation factor + (decibel data + adjusted decibel value) × decibel adjustment deviation factor / conversion correction factor, thereby calculating the adjusted volume and the adjusted decibel value; when a normal signal is identified, extracting no data;
G7: transmitting the adjusted volume and the adjusted decibel value to the execution unit (an illustrative sketch of this calculation is given below);
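The sketch below gives one possible reading of G4 to G6. The safe-volume and safe-frequency presets, the conversion value, the deviation factors and the correction factor have no numerical values in the patent, so the constants are assumptions; likewise, because the single G6 formula does not by itself determine two unknowns, the sketch simply splits the predicted volume evenly between the volume term and the decibel term.

```python
# Illustrative sketch only; every constant and the even split in adjust_volume are assumptions.

SAFE_VOLUME = 60.0     # safe-volume preset value (assumed, in dB)
SAFE_FREQUENCY = 5     # safe-frequency preset value (assumed count of loud events)

def classify_environment(env_volumes: list) -> str:
    """G4-G5: compare each environment volume sample with the safe-volume preset and
    count how often it is exceeded; many loud events means a noisy (abnormal) environment."""
    loud_events = sum(1 for v in env_volumes if v >= SAFE_VOLUME)
    return "abnormal" if loud_events >= SAFE_FREQUENCY else "normal"

def adjust_volume(volume_data, decibel_data, env_volume,
                  env_diff=10.0, conversion=1.0,
                  vol_factor=1.0, db_factor=1.0, correction=1.0):
    """G6: derive a predicted volume from the environment volume, then solve for the
    adjusted volume and adjusted decibel under the even-split assumption."""
    predicted = env_diff + env_volume * conversion          # from the first G6 formula
    adjusted_volume = predicted / (2 * vol_factor) - volume_data
    adjusted_decibel = predicted * correction / (2 * db_factor) - decibel_data
    return adjusted_volume, adjusted_decibel
```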
Step six: receiving, by the execution unit, the adjusted volume, the adjusted decibel value, the brightness adjustment difference and the frame-number adjustment difference, performing the corresponding adjustments, generating a completion signal once the adjustments are finished, and transmitting the completion signal to a display screen, the display screen being used to display the completion signal.
In operation, the collection unit collects interaction information related to the adjustment, the interaction information including display information, sound effect information and voice information; the sound effect information is transmitted to the volume calculation unit, the display information to the display analysis unit, and the voice information to the recognition unit. The recognition unit acquires the character information from the database, performs the recognition operation on it together with the voice information to obtain the voice adjustment command and the display adjustment command, and transmits them to the volume calculation unit and the display analysis unit respectively. The monitoring unit monitors the environment information around the device in real time and transmits it to the display analysis unit and the volume calculation unit, the environment information around the device referring to the environment within a set distance of the device. The display analysis unit performs the display adjustment operation on the display adjustment command, the display information and the environment information to obtain the brightness adjustment difference and the frame-number adjustment difference and transmits them to the execution unit. The volume calculation unit receives the environment information, performs the calculation operation on it together with the voice adjustment command and the voice information to obtain the adjusted volume and the adjusted decibel value, and transmits them to the execution unit. The execution unit receives the adjusted volume, the adjusted decibel value, the brightness adjustment difference and the frame-number adjustment difference, performs the corresponding adjustments, generates a completion signal once the adjustments are finished, and transmits the completion signal to the display screen, which displays it (an illustrative end-to-end sketch follows below).
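Tying the pieces together, the following sketch shows how one embodiment of the overall cycle might be wired up. It reuses the illustrative functions sketched above, and the dictionary keys and the final execution step are likewise assumptions rather than anything specified in the patent.

```python
# Illustrative end-to-end sketch; function names and dictionary keys are assumptions.
def interaction_cycle(speech_text, display_info, audio_info, env_info):
    # Step two: recognition unit extracts voice / display adjustment commands
    voice_cmds, display_cmds = recognize(speech_text)

    results = {}
    # Step four: display analysis unit -> brightness and frame-number adjustment differences
    if display_cmds:
        results["display"] = display_adjustment(env_info["light_brightness"],
                                                display_info["light_difference"],
                                                display_info["brightness"],
                                                display_info["frames"])
    # Step five: volume calculation unit -> adjusted volume and adjusted decibel
    if voice_cmds and classify_environment(env_info["volume_samples"]) == "abnormal":
        results["audio"] = adjust_volume(audio_info["volume"], audio_info["decibel"],
                                         env_info["volume_samples"][-1])
    # Step six: an execution unit would apply `results` and report a completion signal
    return results
```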
The foregoing is merely exemplary and illustrative of the present invention, and various modifications, additions and substitutions may be made by those skilled in the art to the specific embodiments described without departing from the scope of the invention as defined in the following claims.

Claims (4)

1. A man-machine interaction method comprising display effect adjustment and volume control, characterized in that it specifically comprises the following steps:
Step one: collecting, by a collection unit, interaction information related to the adjustment, the interaction information including display information, sound effect information and voice information; transmitting the sound effect information to a volume calculation unit, the display information to a display analysis unit, and the voice information to a recognition unit;
Step two: acquiring, by the recognition unit, character information from a database, performing a recognition operation on the character information together with the voice information to obtain a voice adjustment command and a display adjustment command, and transmitting the voice adjustment command and the display adjustment command to the volume calculation unit and the display analysis unit, respectively;
Step three: monitoring, by a monitoring unit, the environment information around the device in real time and transmitting the environment information to the display analysis unit and the volume calculation unit respectively, the environment information around the device referring to the environment within a set distance of the device;
Step four: performing, by the display analysis unit, a display adjustment operation on the display adjustment command, the display information and the environment information to obtain a brightness adjustment difference and a frame-number adjustment difference, and transmitting the brightness adjustment difference and the frame-number adjustment difference to an execution unit;
Step five: receiving the environment information by the volume calculation unit, performing a calculation operation on the environment information together with the voice adjustment command and the voice information to obtain an adjusted volume and an adjusted decibel value, and transmitting the adjusted volume and the adjusted decibel value to the execution unit;
Step six: receiving, by the execution unit, the adjusted volume, the adjusted decibel value, the brightness adjustment difference and the frame-number adjustment difference, performing the corresponding adjustments, generating a completion signal once the adjustments are finished, and transmitting the completion signal to a display screen, the display screen being used to display the completion signal.
2. The man-machine interaction method comprising display effect adjustment and volume control according to claim 1, characterized in that the specific operation process of the recognition operation is as follows:
K1: acquiring the character information, designating each word formed from several characters in the character information as character-group data and labeling the character groups as ZZi, i = 1, 2, 3, ..., n1; designating the operation corresponding to each character group as an adjustment command and labeling the adjustment commands as TMi, i = 1, 2, 3, ..., n1; labeling the voice adjustment commands among the adjustment commands as YTi, i = 1, 2, 3, ..., n1; and labeling the display adjustment commands among the adjustment commands as XTi, i = 1, 2, 3, ..., n1;
K2: acquiring the voice information, converting the speech into text by voice recognition, designating each character as character data and labeling the character data as ZFl, l = 1, 2, 3, ..., n2;
K3: acquiring the character data from K2 and the character-group data and adjustment commands from K1, and matching the character data against the character groups; when the character data matches a character group, automatically extracting the corresponding adjustment command; when no character group is matched, extracting no command;
K4: acquiring and identifying the adjustment command from K3.
3. The man-machine interaction method comprising display effect adjustment and volume control according to claim 1, characterized in that the specific operation process of the display adjustment operation is as follows:
H1: acquiring the display adjustment command and extracting the corresponding display information and environment information according to the display adjustment command;
H2: acquiring the display information, designating the display brightness in the display information as brightness data and labeling the brightness data as LDi, i = 1, 2, 3, ..., n1; designating the displayed frame number in the display information as frame-number data and labeling the frame-number data as ZSi, i = 1, 2, 3, ..., n1; and designating the illumination difference in the display information as light-difference data and labeling the light-difference data as GCi, i = 1, 2, 3, ..., n1;
H3: acquiring the environment information, designating the ambient brightness in the environment as light-brightness data and labeling the light-brightness data as GLi, i = 1, 2, 3, ..., n1;
H4: substituting the light-brightness data and the light-difference data into the calculation formula: required display value = light-brightness data - light-difference data, to obtain the required display value; substituting the required display value into the calculation formula to obtain the actual brightness data and the actual frame-number data; and performing a difference calculation on the actual brightness data and the actual frame-number data, thereby calculating the brightness adjustment difference and the frame-number adjustment difference.
4. The man-machine interaction method comprising display effect adjustment and volume control according to claim 1, characterized in that the specific operation process of the calculation operation is as follows:
G1: acquiring the voice adjustment command and extracting the corresponding environment information and voice information according to the voice adjustment command;
G2: acquiring the voice information, designating the volume of the voice information as volume data and labeling the volume data as YLi, i = 1, 2, 3, ..., n1; designating the decibel level of the voice information as decibel data and labeling the decibel data as FBi, i = 1, 2, 3, ..., n1;
G3: acquiring the environment information, designating the type of sound in the environment as type data and labeling the type data as ZLi, i = 1, 2, 3, ..., n1; designating the volume of the environment as environment volume data and labeling the environment volume data as HYi, i = 1, 2, 3, ..., n1;
G4: acquiring the type data, extracting the environment volume data for each type, setting a safe-volume preset value, and comparing the safe-volume preset value with the environment volume data, specifically: when the safe-volume preset value is greater than the environment volume data, judging that the external volume is normal and generating a normal signal; when the safe-volume preset value is less than or equal to the environment volume data, judging that the volume is high and generating a high-volume signal;
G5: acquiring the high-volume signals, counting the number of times they occur and designating the count as frequency data, setting a safe-frequency preset value, and comparing the safe-frequency preset value with the frequency data, specifically: when the frequency data is greater than or equal to the safe-frequency preset value, judging that the environment is noisy and generating an abnormal signal; when the frequency data is less than the safe-frequency preset value, judging that the environment is quiet and generating a normal signal;
G6: acquiring the normal signals and abnormal signals and identifying them; when an abnormal signal is identified, automatically extracting the corresponding volume data and decibel data, setting a difference value for the predicted environment volume data, and substituting the difference value and the environment volume data into the calculation formula: predicted environment volume difference = predicted volume - environment volume data × environment volume conversion value, to obtain the predicted volume; then substituting the predicted volume, the volume data and the decibel data into the calculation formula: predicted volume = (volume data + adjusted volume) × volume adjustment deviation factor + (decibel data + adjusted decibel value) × decibel adjustment deviation factor / conversion correction factor, thereby calculating the adjusted volume and the adjusted decibel value; when a normal signal is identified, extracting no data.
CN202010938778.2A 2020-09-09 2020-09-09 Man-machine interaction method comprising display effect adjustment and volume control Active CN112015276B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010938778.2A CN112015276B (en) 2020-09-09 2020-09-09 Man-machine interaction method comprising display effect adjustment and volume control

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010938778.2A CN112015276B (en) 2020-09-09 2020-09-09 Man-machine interaction method comprising display effect adjustment and volume control

Publications (2)

Publication Number Publication Date
CN112015276A (en) 2020-12-01
CN112015276B (en) 2023-06-02

Family

ID=73521353

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010938778.2A Active CN112015276B (en) 2020-09-09 2020-09-09 Man-machine interaction method comprising display effect adjustment and volume control

Country Status (1)

Country Link
CN (1) CN112015276B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112540742A (en) * 2020-12-02 2021-03-23 广州朗国电子科技有限公司 Method for customizing display effect of exclusive display screen of user through AI interaction
CN113640479A (en) * 2021-05-28 2021-11-12 张璐涛 Anaerobic water body monitoring system
CN113656258A (en) * 2021-10-20 2021-11-16 深圳市瑞荣达电子有限公司 Scene analysis management and control system for intelligent Bluetooth headset based on internet

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105119582A (en) * 2015-09-02 2015-12-02 广东小天才科技有限公司 Method and device for automatically adjusting terminal sound
WO2015196720A1 (en) * 2014-06-26 2015-12-30 广东美的制冷设备有限公司 Voice recognition method and system
CN107395899A (en) * 2017-08-25 2017-11-24 珠海市魅族科技有限公司 Terminal control method, device, computer installation and computer-readable recording medium
CN109040414A (en) * 2017-06-12 2018-12-18 上海耕岩智能科技有限公司 A kind of method that terminal and terminal display brightness are adjusted
US20190227767A1 (en) * 2016-09-27 2019-07-25 Huawei Technologies Co., Ltd. Volume Adjustment Method and Terminal


Also Published As

Publication number Publication date
CN112015276B (en) 2023-06-02

Similar Documents

Publication Publication Date Title
CN112015276B (en) Man-machine interaction method comprising display effect adjustment and volume control
CN105551490B (en) A kind of intelligent speech interactive system and method for electronic measuring instrument
CN107661092B (en) Vital sign state monitoring method and computer-readable storage medium
CN115454176A (en) Wisdom green house ventilation control system based on thing networking
CN107247997A (en) A kind of wind electric field blower coulometric analysis method
CN112162639B (en) Electronic warfare equipment simulation training man-machine interaction collaboration system
CN112163113A (en) Real-time monitoring system for high-voltage combined frequency converter
CN116107283A (en) AI production management system based on human-computer interaction
CN107498689B (en) A kind of pottery based on infrared scan technology draws embryo bearing calibration automatically
CN114626758A (en) Effect evaluation system for medical equipment maintenance
CN112953019A (en) Low-voltage distribution network state monitoring system with intelligent distribution transformer terminal
CN113560368A (en) Data acquisition method and system for automobile plate stamping process
CN111209888A (en) Human-computer interface visual recognition system and method
CN113409895B (en) Man-machine interaction method and device for boron meter chemical titration
CN117134508B (en) Multi-data fusion monitoring system of power distribution one-key centralized control device
CN115809830A (en) Green evaluation data analysis management system based on industrial park
CN211403219U (en) Digital management system of melt spinning production line
CN115115352B (en) Public equipment operation control system based on digital city operation management service
CN220979905U (en) Intelligent controller of fan
CN113093672B (en) Control method for DCS system to adjust opening of runner flashboard
CN110795070B (en) Virtual gateway table platform and construction method
CN111354141B (en) Machine tool assembling system and method based on Internet of things
CN101419281B (en) Radar data true north fan moveout monitoring method
CN210904693U (en) Laser therapeutic machine with hand-held key remote controller
CN106773894A (en) A kind of high-accuracy displacement sensing Port Multipliers of digitalized S PC and its on-line correction and data transmission method

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: 518101 No.319, 3rd floor, Zhengtai commercial building, intersection of Xixiang Avenue and Jinhua Road, 76 Xixiang street, Bao'an District, Shenzhen City, Guangdong Province
Applicant after: Shenzhen Huachuang Electric Technology Co.,Ltd.
Address before: 518101 No.319, 3rd floor, Zhengtai commercial building, intersection of Xixiang Avenue and Jinhua Road, 76 Xixiang street, Bao'an District, Shenzhen City, Guangdong Province
Applicant before: SHENZHEN NANFANG YIXIN COMPUTER INFORMATION SYSTEM Co.,Ltd.
GR01 Patent grant