CN111369997A - Method for recording conference - Google Patents

Method for recording conference

Info

Publication number
CN111369997A
CN111369997A (application CN202010115493.9A)
Authority
CN
China
Prior art keywords
obstacle
sound source
ues
coordinates
conference
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010115493.9A
Other languages
Chinese (zh)
Inventor
张磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Ziyu Jieen Technology Co ltd
Original Assignee
Shenzhen Ziyu Jieen Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Ziyu Jieen Technology Co ltd filed Critical Shenzhen Ziyu Jieen Technology Co ltd
Priority to CN202010115493.9A priority Critical patent/CN111369997A/en
Publication of CN111369997A publication Critical patent/CN111369997A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B20/00Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B20/10Digital recording or reproducing
    • G11B20/10527Audio or video recording; Data buffering arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/52Network services specially adapted for the location of the user terminal
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B20/00Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B20/10Digital recording or reproducing
    • G11B20/10527Audio or video recording; Data buffering arrangements
    • G11B2020/10537Audio or video recording
    • G11B2020/10546Audio or video recording specifically adapted for audio data

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The embodiment of the application provides a method for recording a conference, which comprises the following steps: the server acquires the sound source coordinates, the obstacle coordinates and the obstacle appearance characteristics of a fixed conference scene; the server acquires a plurality of UE coordinates of a plurality of UEs, calculates a plurality of distances between the sound source coordinates and the plurality of UE coordinates, and determines the x UEs whose distances are smaller than a first threshold as candidate UEs; the server determines an obstruction area of the obstacle according to the sound source coordinates, the obstacle coordinates and the obstacle appearance characteristics, and removes the y UEs located in the obstruction area from the x UEs to obtain x-y UEs; the server acquires x-y recorded texts of the conference scene sent by the x-y UEs and collates them to obtain the conference text of the conference scene. The technical scheme provided by the application has the advantage of improving the accuracy of the conference record.

Description

Method for recording conference
Technical Field
The application relates to the technical field of communication and artificial intelligence, in particular to a conference recording method.
Background
A conference record means that, during a conference, a recorder writes down the organization and the specific content of the conference to form a record. The term covers both minutes and verbatim records: minutes capture only the key points of the conference, the important or main statements made in it; a verbatim record requires that the recorded items be complete and the recorded statements be full and detailed. If a conference record including the content is to be kept, recording must be relied upon. Recording includes shorthand writing, audio recording and video recording; for conference records, audio and video recording are usually only means, and the recorded content is ultimately restored into text. Audio and video are often also kept as a guarantee that the recorded content can reproduce the conference situation as faithfully as possible.
With the development of artificial intelligence in speech recognition, existing conference recording has introduced AI recognition to record the conference. However, the existing AI recognition approach depends on a single specific device; if that single specific device fails, the AI conference recording is interrupted. The existing conference recording approach therefore has poor stability, which affects the conference recording result.
Disclosure of Invention
The embodiment of the application discloses a conference recording method, which can record a conference through intelligent terminal devices, removing the dependence on a single specific device, improving the conference recording effect and improving the accuracy of the conference text.
A first aspect of the embodiments of the present application discloses a conference recording method, including:
the server acquires the sound source coordinates, the obstacle coordinates and the obstacle appearance characteristics of a fixed conference scene;
the server acquires a plurality of UE coordinates of a plurality of UEs, calculates a plurality of distances between the sound source coordinates and the plurality of UE coordinates, and determines the x UEs whose distances are smaller than a first threshold as candidate UEs;
the server determines an obstruction area of the obstacle according to the sound source coordinate, the obstacle coordinate and the appearance characteristic of the obstacle, and removes y UEs located in the obstruction area from the x UEs to obtain x-y UEs;
the server obtains x-y recording texts of the conference scene sent by the x-y UEs, and the server sorts the x-y recording texts to obtain the conference text of the conference scene.
A second aspect of embodiments of the present application provides a terminal comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps of the method provided in the first aspect.
A third aspect of embodiments of the present application provides a computer-readable storage medium, which is characterized by storing a computer program for electronic data exchange, wherein the computer program causes a computer to execute the method provided in the first aspect.
A fourth aspect of the embodiments of the present application provides a server, including:
the acquisition unit is used for acquiring the sound source coordinates, the obstacle coordinates and the obstacle appearance characteristics of the fixed conference scene; acquiring a plurality of UE coordinates of a plurality of UEs;
the processing unit is used for calculating a plurality of distances between the sound source coordinates and the plurality of UE coordinates, and determining the x UEs whose distances are smaller than a first threshold as candidate UEs; determining an obstruction area of the obstacle according to the sound source coordinates, the obstacle coordinates and the obstacle appearance characteristics, and removing the y UEs located in the obstruction area from the x UEs to obtain x-y UEs; and acquiring x-y recorded texts of the conference scene sent by the x-y UEs, the server collating the x-y recorded texts to obtain the conference text of the conference scene.
Through implementing the embodiment of the application, the technical scheme provided by the application carries out the recorded text of the conference scene through the UE, and then adjusts the recorded text, thereby avoiding the problem that the conference text is inaccurate due to the fault of single equipment or inaccurate AI identification.
Drawings
The drawings used in the embodiments of the present application are described below.
Fig. 1 is a schematic structural diagram of a conference system provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a conference recording method according to an embodiment of the present application;
FIG. 2a is a schematic view of an obstruction area through which embodiments of the present application pass;
fig. 3 is a schematic structural diagram of an apparatus provided in an embodiment of the present application.
Detailed Description
The embodiments of the present application will be described below with reference to the drawings.
The term "and/or" in this application is only one kind of association relationship describing the associated object, and means that there may be three kinds of relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" in this document indicates that the former and latter related objects are in an "or" relationship.
The "plurality" appearing in the embodiments of the present application means two or more. The descriptions of the first, second, etc. appearing in the embodiments of the present application are only for illustrating and differentiating the objects, and do not represent the order or the particular limitation of the number of the devices in the embodiments of the present application, and do not constitute any limitation to the embodiments of the present application. The term "connect" in the embodiments of the present application refers to various connection manners, such as direct connection or indirect connection, to implement communication between devices, which is not limited in this embodiment of the present application.
A terminal in the embodiments of the present application may refer to various forms of UE, access terminal, subscriber unit, subscriber station, mobile station, MS (mobile station), remote station, remote terminal, mobile device, computer, server, cloud system user terminal, terminal device (terminal equipment), wireless communication device, user agent, or user equipment. The terminal device may also be a cellular phone, a cordless phone, a SIP (session initiation protocol) phone, a WLL (wireless local loop) station, a PDA (personal digital assistant) with a wireless communication function, a handheld device with a wireless communication function, a computing device or other processing device connected to a wireless modem, a vehicle-mounted device, a wearable device, a terminal device in a future 5G network or a terminal device in a future evolved PLMN (public land mobile network), and the like, which are not limited in this embodiment.
Referring to fig. 1, fig. 1 shows a conference system including: a server 10, which may be a terminal, a personal computer or the like, and a plurality of UEs 11, which may be smartphones, tablets or other devices with an AI speech recognition function; the server is connected to the plurality of UEs as shown in fig. 1.
Referring to fig. 2, fig. 2 provides a conference recording method, which is performed under the conference system as shown in fig. 1, and which includes the steps of:
s200, a server acquires the sound source coordinates, the obstacle coordinates and the obstacle appearance characteristics of a fixed conference scene;
the method of step S200 may specifically include:
In an optional scheme, the server acquires a first picture of the conference scene, identifies the sound source position and the obstacle position of the conference scene from the first picture, and determines the sound source coordinates and the obstacle coordinates according to the sound source position, the obstacle position and the conference scene identifier. The identification can be performed by artificial intelligence, and this scheme is suitable for scenes with a moving sound source and moving obstacles. For example, during a lecture, the teacher may move around in the conference scene while speaking; since the teacher is a moving sound source, the position of the speaker needs to be determined by picture recognition, and the position is acquired periodically. The obstacle appearance characteristics can be acquired in the same way as the obstacle coordinates.
In another scheme, the server acquires an identifier of the conference scene and determines the sound source coordinates and the obstacle coordinates of the conference scene according to the identifier. This scheme is suitable for scenes with a fixed sound source and fixed obstacles: as long as it is known which conference scene it is, the two relatively fixed parameter values can be obtained. For example, in a conference presentation, although the coordinates of the presenter may change, the position from which the sound is emitted (i.e. the loudspeaker) does not change, and the obstacles in the presentation scene are also fixed.
Step S201, a server acquires a plurality of UE coordinates of a plurality of UEs, the server calculates a plurality of distances between a sound source coordinate and the plurality of UE coordinates, and x UEs which are smaller than a first threshold value in the plurality of distances are determined as UEs to be selected;
the UE coordinates may be GPS coordinates, beidou coordinates, and the like, and the UE coordinates may be obtained by positioning the UE. The positioning includes, but is not limited to, GPS positioning, beidou positioning, indoor positioning, and the like, and combinations thereof.
Here x is an integer greater than or equal to 4, since enough UE-collected speech is needed for recognition in order to improve the accuracy of the conference record.
In this step, the UEs closer to the sound source position are selected as candidate UEs, because if a UE is too far away, the audio signal it collects is weak, which necessarily affects the accuracy of the recorded text recognized from that audio.
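The distance filtering of step S201 can be sketched as below; the function name, the coordinate values and the use of planar Euclidean distance are assumptions for illustration, not specified by the patent text.

```python
import math

def select_candidate_ues(sound_source, ue_coords, first_threshold):
    """Return the candidate UEs: those whose distance to the sound
    source is smaller than the first threshold (step S201).
    All names and values here are illustrative."""
    candidates = {}
    for ue_id, coord in ue_coords.items():
        d = math.dist(sound_source, coord)  # Euclidean distance
        if d < first_threshold:
            candidates[ue_id] = d
    return candidates

ue_coords = {"UE1": (1.0, 0.0), "UE2": (3.0, 4.0), "UE3": (10.0, 10.0)}
picked = select_candidate_ues((0.0, 0.0), ue_coords, 6.0)
# UE1 (distance 1.0) and UE2 (distance 5.0) pass; UE3 (about 14.14) is dropped
```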
Step S202, the server determines an obstruction area of the obstruction according to the sound source coordinate, the obstruction coordinate and the appearance feature of the obstruction, and removes y UEs located in the obstruction area from the x UEs to obtain x-y UEs;
step S203, the server obtains x-y recorded texts of the conference scene sent by x-y UEs, and the server sorts the x-y recorded texts to obtain the conference text of the conference scene.
According to the technical scheme, the recording text of the conference scene is carried out through the UE, then the recording text is adjusted, so that the problem that the conference text is inaccurate due to failure of single equipment or inaccurate AI identification is solved, the UE in the obstacle area affected by the obstacle is filtered, the influence of the obstacle on the audio signal is avoided, the quality of collected audio signals is improved, and the identification accuracy of the conference text is improved.
In an alternative, the determining, by the server, an obstruction area of the obstacle according to the sound source coordinates, the obstacle coordinates, and the obstacle appearance characteristics may specifically include:
the server emits a plurality of rays with the sound source coordinates as the endpoint, extracts the α rays that intersect the obstacle outline from the plurality of rays, calculates the α slopes of the α rays, extracts the maximum value kmax and the minimum value kmin of the α slopes, and determines the slope interval [kmin, kmax] as the obstruction area.
Referring to fig. 2a, the appearance characteristics of the obstacle 260 may be determined by recognition of the first picture, or may of course be determined according to the identifier of the conference scene. As shown in fig. 2a, a plurality of rays are emitted; with enough angles, the rays intersecting the obstacle outline with the maximum and minimum slopes bound the obstruction area, shown as the black area in fig. 2a. In the black area the sound waves are affected, so the UEs in this area need to be removed to avoid their recordings being affected. This ray-based calculation makes it convenient to determine the obstruction area for irregular obstacles such as people.
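The slope-interval computation above can be sketched as follows. As a simplifying assumption, the obstacle outline is approximated by sampled boundary points rather than true ray–outline intersections, and all names and coordinates are hypothetical.

```python
def occlusion_slope_interval(source, obstacle_outline):
    """Compute [kmin, kmax]: the minimum and maximum slopes of rays
    from the sound source to points on the obstacle outline.
    Illustrative sketch; vertical rays are skipped for simplicity."""
    sx, sy = source
    slopes = [(py - sy) / (px - sx)
              for px, py in obstacle_outline
              if px != sx]  # skip points directly above/below the source
    return min(slopes), max(slopes)

# Square obstacle centred at (5, 0), seen from the origin
outline = [(4.0, -1.0), (4.0, 1.0), (6.0, -1.0), (6.0, 1.0)]
kmin, kmax = occlusion_slope_interval((0.0, 0.0), outline)
# kmin = -0.25, kmax = 0.25: rays with slopes in this interval hit the obstacle
```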
In an alternative scheme, removing the y UEs located in the obstruction area from the x UEs to obtain x-y UEs may specifically include:
the method comprises the steps that a sound source coordinate is a first endpoint, x UEs are used as the other endpoint to obtain x slopes through calculation, z UEs corresponding to z slopes located in kmin and kmax in the x slopes are determined as UEs to be filtered, z distances between the z UEs and the sound source coordinate are calculated, and y UEs corresponding to y distances larger than an obstacle distance (the distance between the sound source coordinate and the obstacle coordinate) in the z distances are determined as y UEs of an obstacle area.
This scheme distinguishes directly by slope, so it is convenient to use and computationally simple; the obstruction area can be determined without very complicated operations.
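Under the same illustrative assumptions, the slope-and-distance filtering of step S202 might look like this. Note that a UE whose slope falls inside the interval but which sits closer to the source than the obstacle is kept, matching the obstacle-distance test above; all names and values are hypothetical.

```python
import math

def remove_shadowed_ues(source, ues, kmin, kmax, obstacle_distance):
    """Drop UEs lying in the obstruction area: slope from the sound
    source within [kmin, kmax] AND distance to the source greater
    than the obstacle distance. Illustrative sketch only."""
    kept = {}
    sx, sy = source
    for ue_id, (ux, uy) in ues.items():
        if ux != sx:
            k = (uy - sy) / (ux - sx)
            shadowed = (kmin <= k <= kmax
                        and math.dist(source, (ux, uy)) > obstacle_distance)
        else:
            shadowed = False  # vertical direction treated as outside the interval here
        if not shadowed:
            kept[ue_id] = (ux, uy)
    return kept

ues = {"UE1": (2.0, 1.0), "UE2": (8.0, 0.5), "UE3": (3.0, -0.5)}
kept = remove_shadowed_ues((0.0, 0.0), ues, -0.25, 0.25, 5.0)
# UE2 lies behind the obstacle (slope 0.0625, distance ~8.02 > 5) and is removed;
# UE3 has a slope in the interval but is in front of the obstacle, so it is kept
```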
In an optional scheme, the step of the server sorting the x-y recorded texts to obtain the conference text of the conference scene may specifically include:
the server determines the same text information in the x-y recorded texts as the conference text. For example, if there are 4 recorded texts in the x-y recorded texts, and there are 3 recorded texts that are the same in one text, and one text is different, then it is determined that the 3 identical recorded texts are the conference text.
Referring to fig. 3, fig. 3 is a device 30 provided in an embodiment of the present application, where the device 30 includes a processor 301, a memory 302, and a communication interface 303, and the processor 301, the memory 302, and the communication interface 303 are connected to each other through a bus 304.
The memory 302 includes, but is not limited to, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), or a portable read-only memory (CD-ROM), and the memory 302 is used for related computer programs and data. The communication interface 303 is used to receive and transmit data.
The processor 301 may be one or more Central Processing Units (CPUs), and in the case that the processor 301 is one CPU, the CPU may be a single-core CPU or a multi-core CPU.
The processor 301 in the device 30 is configured to read the computer program code stored in the memory 302, and perform the following operations:
it should be noted that the implementation of each unit may also correspond to the corresponding description of the method embodiment shown in fig. 1.
Obtaining the sound source coordinates, the obstacle coordinates and the appearance characteristics of the obstacles of a fixed conference scene;
the method comprises the steps that a server obtains a plurality of UE coordinates of a plurality of UE, the server calculates a plurality of distances between a sound source coordinate and the plurality of UE coordinates, and x UE which are smaller than a first threshold value in the plurality of distances are determined as UE to be selected;
the server determines an obstruction area of the obstacle according to the sound source coordinate, the obstacle coordinate and the appearance characteristic of the obstacle, and removes y UEs located in the obstruction area from the x UEs to obtain x-y UEs;
the server obtains x-y recording texts of the conference scene sent by the x-y UEs, and the server sorts the x-y recording texts to obtain the conference text of the conference scene.
In an optional scheme, the acquiring, by the server, the sound source coordinates and the obstacle coordinates of the conference scene specifically includes:
the method comprises the steps that a server obtains a first picture of a conference scene, identifies and determines a sound source position, an obstacle position and the appearance characteristics of an obstacle of the conference scene for the first picture, and determines a sound source coordinate and an obstacle coordinate according to the sound source position, the obstacle position and a conference scene identifier;
or the server acquires the identification of the conference scene, and determines the sound source coordinate, the obstacle coordinate and the obstacle appearance characteristic of the conference scene according to the identification.
In an optional scheme, the determining, by the server, an obstruction area of the obstacle according to the sound source coordinates, the obstacle coordinates, and the obstacle appearance characteristics specifically includes:
the server emits a plurality of rays by taking the sound source coordinate as an end point, extracts α rays which intersect with the outer shape of the obstacle from the plurality of rays, calculates α slopes of α rays, extracts a maximum value kmax and a minimum value kmin of α slopes, and determines a slope interval [ kmin, kmax ] as an obstruction area.
In an optional scheme, removing y UEs located in the blocking area from among the x UEs to obtain x-y UEs specifically includes:
with the sound source coordinates as one endpoint and each of the x UEs as the other endpoint, x slopes are calculated; the z UEs whose slopes fall within [kmin, kmax] are determined as UEs to be filtered; the z distances between the z UEs and the sound source coordinates are calculated; and the y UEs whose distances are greater than the obstacle distance are determined as the y UEs in the obstruction area.
The embodiment of the present application further provides a chip system, where the chip system includes at least one processor, a memory and an interface circuit, where the memory, the interface circuit and the at least one processor are interconnected by lines, and the at least one memory stores a computer program; when the computer program is executed by the processor, the method flow shown in fig. 2 is implemented.
The present application further provides a server, comprising:
the acquisition unit is used for acquiring the sound source coordinates, the obstacle coordinates and the obstacle appearance characteristics of the fixed conference scene; acquiring a plurality of UE coordinates of a plurality of UEs;
the processing unit is used for calculating a plurality of distances between the sound source coordinates and the plurality of UE coordinates, and determining the x UEs whose distances are smaller than a first threshold as candidate UEs; determining an obstruction area of the obstacle according to the sound source coordinates, the obstacle coordinates and the obstacle appearance characteristics, and removing the y UEs located in the obstruction area from the x UEs to obtain x-y UEs; and acquiring x-y recorded texts of the conference scene sent by the x-y UEs, the server collating the x-y recorded texts to obtain the conference text of the conference scene.
For specific implementation steps of the processing unit in the server embodiment of the present application, reference may be made to the description of the embodiment shown in fig. 2, which is not described herein again.
An embodiment of the present application further provides a computer-readable storage medium, in which a computer program is stored, and when the computer program runs on a network device, the method flow shown in fig. 2 is implemented.
An embodiment of the present application further provides a computer program product, and when the computer program product runs on a terminal, the method flow shown in fig. 2 is implemented.
Embodiments of the present application also provide a terminal including a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs including instructions for performing the steps in the method of the embodiment shown in fig. 2.
The above description has introduced the solution of the embodiment of the present application mainly from the perspective of the method-side implementation process. It is understood that the electronic device comprises corresponding hardware structures and/or software modules for performing the respective functions in order to realize the above-mentioned functions. Those of skill in the art will readily appreciate that the present application is capable of hardware or a combination of hardware and computer software implementing the various illustrative elements and algorithm steps described in connection with the embodiments provided herein. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the electronic device may be divided into the functional units according to the method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, the computer program enabling a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, and the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising an electronic device.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer readable memory if it is implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application may be substantially implemented or a part of or all or part of the technical solution contributing to the prior art may be embodied in the form of a software product stored in a memory, and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the above-mentioned method of the embodiments of the present application. And the aforementioned memory comprises: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (7)

1. A method of conference recording, the method comprising:
a server acquires the sound source coordinates, the obstacle coordinates and the obstacle appearance characteristics of a fixed conference scene;
the server acquires a plurality of UE coordinates of a plurality of UEs, calculates a plurality of distances between the sound source coordinates and the plurality of UE coordinates, and determines the x UEs whose distances are smaller than a first threshold as candidate UEs;
the server determines an obstruction area of the obstacle according to the sound source coordinates, the obstacle coordinates and the obstacle appearance characteristics, and removes y UEs located in the obstruction area from the x UEs to obtain x-y UEs;
the server acquires x-y recorded texts of the conference scene sent by the x-y UEs, and collates the x-y recorded texts to obtain a conference text of the conference scene.
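The candidate-selection step of claim 1 can be sketched as follows. This is a minimal illustration, not the patented implementation; the function and parameter names (`select_candidate_ues`, `source`, `ue_coords`, `threshold`) are hypothetical, and 2-D Euclidean distance is assumed.

```python
import math

def select_candidate_ues(source, ue_coords, threshold):
    """Return the IDs of UEs whose Euclidean distance to the sound
    source is smaller than the first threshold (the x candidate UEs)."""
    sx, sy = source
    candidates = []
    for ue_id, (ux, uy) in ue_coords.items():
        dist = math.hypot(ux - sx, uy - sy)  # distance from sound source to UE
        if dist < threshold:
            candidates.append(ue_id)
    return candidates
```

For example, with the sound source at the origin and a threshold of 3, a UE at (1, 0) is selected while a UE at (5, 0) is not.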
2. The method according to claim 1, wherein the acquiring, by the server, of the sound source coordinates and the obstacle coordinates of the conference scene specifically comprises:
the server acquires a first picture of the conference scene, identifies the sound source position, the obstacle position and the obstacle appearance characteristics in the first picture, and determines the sound source coordinates and the obstacle coordinates according to the sound source position, the obstacle position and a conference scene identifier;
or the server acquires an identifier of the conference scene, and determines the sound source coordinates, the obstacle coordinates and the obstacle appearance characteristics of the conference scene according to the identifier.
3. The method according to claim 2, wherein the determining, by the server, of the obstruction area of the obstacle according to the sound source coordinates, the obstacle coordinates and the obstacle appearance characteristics specifically comprises:
the server emits a plurality of rays with the sound source coordinates as the endpoint, extracts, from the plurality of rays, the α rays that intersect the outline of the obstacle, calculates the α slopes of the α rays, extracts the maximum value kmax and the minimum value kmin of the α slopes, and determines the slope interval [kmin, kmax] as the obstruction area.
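The slope-interval construction of claim 3 can be sketched as follows, under the simplifying assumption that the obstacle outline is given as a finite set of vertex coordinates (standing in for the intersection points of the α rays); the names `occlusion_interval` and `obstacle_vertices` are hypothetical.

```python
def occlusion_interval(source, obstacle_vertices):
    """Compute [kmin, kmax]: the minimum and maximum slopes of the
    rays from the sound source to points on the obstacle outline.
    Rays whose slope lies in this interval pass through the obstruction area."""
    sx, sy = source
    # Slope of the ray from the source to each outline vertex;
    # vertical rays (vx == sx) are skipped in this simplified sketch.
    slopes = [(vy - sy) / (vx - sx) for vx, vy in obstacle_vertices if vx != sx]
    return min(slopes), max(slopes)
```

For a source at the origin and an obstacle edge spanning (2, 1) to (2, -1), the obstruction area is the slope interval [-0.5, 0.5].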
4. The method of claim 3, wherein removing the y UEs located in the obstruction area from the x UEs to obtain the x-y UEs specifically comprises:
the server calculates x slopes with the sound source coordinates as one endpoint and each of the x UEs as the other endpoint, determines the z UEs corresponding to the z slopes lying in [kmin, kmax] as UEs to be filtered, calculates the z distances between the z UEs and the sound source coordinates, and determines the y UEs corresponding to the y distances greater than the obstacle distance as the y UEs located in the obstruction area.
5. A terminal comprising a processor, memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs including instructions for performing the steps of the method of any of claims 1-4.
6. A computer-readable storage medium, wherein the storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to perform the method according to any one of claims 1-4.
7. A server, comprising:
an acquisition unit, configured to acquire the sound source coordinates, the obstacle coordinates and the obstacle appearance characteristics of a fixed conference scene, and to acquire a plurality of UE coordinates of a plurality of UEs;
a processing unit, configured to calculate a plurality of distances between the sound source coordinates and the plurality of UE coordinates and determine the x UEs whose distances are smaller than a first threshold as candidate UEs; determine an obstruction area of the obstacle according to the sound source coordinates, the obstacle coordinates and the obstacle appearance characteristics, and remove y UEs located in the obstruction area from the x UEs to obtain x-y UEs; and acquire x-y recorded texts of the conference scene sent by the x-y UEs and collate the x-y recorded texts to obtain a conference text of the conference scene.
CN202010115493.9A 2020-02-25 2020-02-25 Method for recording conference Pending CN111369997A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010115493.9A CN111369997A (en) 2020-02-25 2020-02-25 Method for recording conference


Publications (1)

Publication Number Publication Date
CN111369997A (en) 2020-07-03

Family

ID=71211577

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010115493.9A Pending CN111369997A (en) 2020-02-25 2020-02-25 Method for recording conference

Country Status (1)

Country Link
CN (1) CN111369997A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106303804A (en) * 2016-07-28 2017-01-04 维沃移动通信有限公司 The control method of a kind of mike and mobile terminal
CN107451110A (en) * 2017-07-10 2017-12-08 珠海格力电器股份有限公司 Method, device and server for generating conference summary


Similar Documents

Publication Publication Date Title
US11153430B2 (en) Information presentation method and device
WO2019091367A1 (en) App pushing method, device, electronic device and computer-readable storage medium
JP2006190296A (en) Method and apparatus for providing information by using context extracted from multimedia communication system
CN103635954A (en) A system to augment a visual data stream based on geographical and visual information
CN103621131A (en) A method for spatially-accurate location of a device using audio-visual information
CN107705251A (en) Picture joining method, mobile terminal and computer-readable recording medium
US20200118569A1 (en) Conference sound box and conference recording method, apparatus, system and computer storage medium
CN105245355A (en) Intelligent voice shorthand conference system
US20200364204A1 (en) Method for generating terminal log and terminal
CN106203235A (en) Live body discrimination method and device
CN111131616A (en) Audio sharing method based on intelligent terminal and related device
CN104010060A (en) Method and electronic device for recognizing identity of incoming caller
CN111369997A (en) Method for recording conference
CN106713728A (en) Method and system for enhancing scenic spot photographing information
WO2020063168A1 (en) Data processing method, terminal, server and computer storage medium
CN116055762A (en) Video synthesis method and device, electronic equipment and storage medium
EP4210049A1 (en) Audio watermark adding method and device, audio watermark analyzing method and device, and medium
CN104780265B (en) A kind of talking management method and device
CN112788174A (en) Intelligent retrieving method of wireless earphone and related device
CN109873893B (en) Information pushing method and related device
CN111275501A (en) Intelligent valuation method based on building scheme
CN112307075A (en) User relationship identification method and device
CN111356004B (en) Storage method and system of universal video file
CN109413385A (en) A kind of video location monitoring method, system and Cloud Server
CN109040407A (en) Voice acquisition method and device based on mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination