US20190260701A1 - Messaging system - Google Patents
- Publication number: US20190260701A1 (application US 15/901,346)
- Authority: United States (US)
- Prior art keywords: message, event, user, receiving, computer device
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H04L51/18: Commands or executable codes (user-to-user messaging in packet-switching networks, e.g. e-mail)
- H04L51/10: Multimedia information (user-to-user messaging in packet-switching networks)
- H04L51/42: Mailbox-related aspects, e.g. synchronisation of mailboxes
- H04W4/12: Messaging; Mailboxes; Announcements (services for wireless communication networks)
- G06Q10/107: Computer-aided management of electronic mailing [e-mailing]
- H04W4/21: Services signalling for social networking applications
Definitions
- The present disclosure relates to rendering and generating message data to be exchanged between transmitting and receiving devices.
- Messaging systems are widely available, enabling computer devices such as smartphones, tablets and other forms of computer device to exchange messages via a communication network.
- One or more message servers are provided which perform the functions of receiving, storing and transmitting messages between computer devices.
- Communication networks can be implemented in a large number of different ways, both wired and wireless. Wireless networks operate using telecommunications protocols, or short-range protocols such as Wi-Fi or Bluetooth. To use a messaging app, a sending user composes a message at his device and identifies one or more other users to receive the message.
- When he invokes a send function (for example by pressing an icon marked ‘send’ on his device), the device generates message data, which may involve adding a header and trailer to the message that has been composed by the user, the header including an address or addresses of the receiving users.
- The message is transmitted (it may be broken up into packets for transmission) over the communication network, via the one or more servers, to the receiving user.
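The header-and-trailer framing step described above can be sketched as follows. This is a minimal illustration under assumed names; the field layout (`header`, `to`, `body`, `trailer`) is not taken from the patent.

```python
# Sketch of generating message data: wrap the text composed by the
# user in a header carrying the receiving users' addresses, plus a
# trailer. Field names are illustrative assumptions.

def generate_message_data(body: str, recipients: list[str]) -> dict:
    return {
        "header": {"to": list(recipients)},  # address(es) of the receiving users
        "body": body,
        "trailer": {"length": len(body)},    # e.g. simple integrity information
    }

msg = generate_message_data("Hi there", ["alice@example.com", "bob@example.com"])
```

The resulting structure could then be broken up into packets for transmission over the network.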
- A person may not wish to send a video message, but may nevertheless wish to make his message entertaining.
- The present inventors have developed a messaging system and method which addresses these issues.
- The system may comprise a message generating device and a message receiving device in communication via a communication network, the devices being as defined in the following.
- One aspect provides a method of rendering message data of a message received at a receiving computer device, comprising displaying a first viewable element of a first message part and, after a first time period, triggering a user perceptible activation associated with a second message part.
- The message data may comprise at least one triggering component, the method comprising detecting the triggering component and triggering an event associated with the triggering component.
- The user perceptible activation may be a visualisation, a haptic event or an audio event.
- The visualisation associated with the second part of the message may be a second viewable element in the second part of the message, or it may be a visual event which is triggered by a viewable or a non-viewable element in the second part of the message.
- First and second viewable elements of the message may be displayed in a message display area, whereas the events which are triggered by the triggering components may be displayed in a separate area, which could typically be larger than the message display area.
- The first part of the message may comprise an additional non-viewable element which can cause the first time period to be extended.
- The first viewable element may be associated with a time period sufficient for most people to view it, but there may be situations where the sender realises that the person who will be viewing the message will need extra time; an additional time control element may therefore be inserted.
- Timing control may be determined at the receiving device or at a generating device where the message is generated by a transmitting user.
- The first time period may be determined by timing control data forming part of the message data in the message received at the receiving computer device.
- This timing control data may be inserted by the user who has generated the message at the generating device.
- Alternatively, the timing control data may be inserted by the generating device itself, responsive to the element selected by the user to compose the message.
- Alternatively, the first time period may be determined by parsing the message at the receiving computer device, wherein the first time period is defined by the parsed message data.
- In this case the message parts themselves control the timing, using timing control information which is stored locally at the receive side and which is associated with the delimiter or other elements in the message data.
- The first time period may be defined by the parsed message data by deriving it from elements of the parsed message data, for example by looking up associated delays in a timing control library.
- The delimiter may form part of the first message part, or may follow the first message part.
- The delimiter may be associated with a delimiter time period which forms all or part of the first time period.
- The triggering component may be provided in the first message part, by the delimiter, or in the second message part. There may be multiple delimiters, one or more of which could constitute a triggering component. Only one of the elements in the message may provide a triggering component, or some or all of the elements in the message may comprise one.
- A visual event which may be triggered could be an animation, an image or any other kind of visualisation.
- One type of event which can be triggered is a modification to a three-dimensional avatar which is displayed at the receiving computer device.
- The avatar could be a generic avatar, or it could be an avatar which represents the sender of the message. Alternatively, it could be an avatar representing the receiving user.
- Event data for generating an event may be sent as part of the message data in the message received at the receiving computer device.
- Alternatively, events can be triggered at the receiving device without event data being sent as part of the message data.
- In that case, the event is stored at the receiving computer device in association with a triggering component identifier which identifies the triggering component to trigger the event.
- When a message is received, the triggering component identifier associated with that message is determined and is used to access a local store which holds events in association with triggering component identifiers. In this way, the event to be triggered may be determined.
- Events triggered by triggering components may be generic, or they may be made specific to particular sessions.
- The term ‘session’ is used herein to denote an open communication pathway between sender and receiver, as is known in the art.
- A session exists between two endpoints.
- In a group chat, multiple sessions may exist from a sending user to multiple receiving users. Each session would have a session identifier associated with it.
- Events may be stored in association with session identifiers and triggering component identifiers, whereby the event which is triggered can be uniquely determined for that session.
- This feature also allows events to be uniquely associated with triggering component identifiers for particular receivers. That is, a sender could send a message to multiple receivers.
- Those receivers could see the message differently, because different events would be triggered at their own devices based on the events stored at each local device in association with the triggering component identifier. That is, multiple events may be associated with respective multiple session identifiers for the same triggering component identifier.
- Receiving computer devices may receive update messages which comprise event data for generating an event with an associated triggering component identifier, such that the local store of the receiving device can be updated with the triggering component identifier and its associated event. This allows sending users to control what the receiving user will see when they open a message from the device which has sent the message.
- Update messages could also be sent from a central server to change events associated with particular triggering components in a more generic fashion, perhaps amongst a social network of users.
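Applying such an update message to the receiving device's local store can be sketched as follows; the update's field names are illustrative assumptions.

```python
# Apply an update message carrying event data together with its
# triggering component identifier, so the receiving device's local
# store learns what to present for that trigger. Field names are
# illustrative assumptions, not from the patent.

def apply_update(store: dict, update: dict) -> None:
    store[update["trigger_id"]] = update["event"]

store = {}
apply_update(store, {"trigger_id": "fireworks",
                     "event": {"type": "animation", "asset": "fireworks.anim"}})
```

The same function would serve whether the update comes from a sending user or from a central server.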
- The viewable element in the message data could be human-readable text, icons or any other selectable characters.
- The triggering component can take the form of an icon having a visual appearance associated with the visual appearance of the triggered event. This is a particularly intuitive and interesting way for a user to receive a message: they see an icon in a message display part on the screen of their device, and an associated animation or image is shown to them in the larger area of the screen, separate from the message display area. This makes for an engaging message.
- A haptic event might be vibration of the receiving device while a beating heart is shown.
- An audio event might be birdsong while an image of a bird is shown.
- Another aspect of the disclosure provides a method of generating message data at a generating computer device for transmission to a receiving computer device, the method comprising composing message data comprising the first and second message parts and timing control data which causes the message data received at the receiving computer device to be rendered as a time sequence in which a user perceptible activation associated with the second message part is delayed for a first time period, controlled by the timing control data, after displaying the first viewable element of the first message part.
- The step of composing the message data may comprise including an identification of an event to be triggered by a triggering component of the message.
- The user perceptible event may be a second viewable element selected by the user in the second message part.
- Alternatively, it may be an event (visual, haptic or audio) which is triggered by a non-viewable triggering component in the second message part.
- The method of generating message data can comprise receiving a delimiter selected by the user, the delimiter defining a separation between the first and second message parts.
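Composition on the generating side can be sketched as assembling the two parts, the user-selected delimiter and the timing control data into one message structure; the dictionary layout is an illustrative assumption.

```python
# Sketch of composing message data at the generating device: a
# user-selected delimiter separates the first and second message
# parts, and timing control data sets the first time period. The
# structure is an illustrative assumption, not from the patent.

def compose_message(first_part: str, delimiter: str, second_part: str,
                    first_period_s: float) -> dict:
    return {
        "parts": [first_part, second_part],
        "delimiter": delimiter,
        "timing_control": {"first_period_s": first_period_s},
    }

msg = compose_message("Not feeling lucky?", "🍀", "awesome", first_period_s=3.0)
```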
- A modified version of a 3D avatar may be displayed, wherein the modification is in accordance with an event to be triggered.
- This avatar may be displayed while composing the message for transmission, to make message composition more interesting.
- Another aspect provides a computer device configured to receive a message comprising message data, the computer device comprising processing circuitry configured to execute a computer program which, when executed, causes the processing circuitry to carry out the steps of the rendering method defined above.
- Another aspect provides a computer device comprising processing circuitry which is configured to execute a computer program which, when executed by the processing circuitry, causes the computer device to carry out the steps of composing message data comprising the first and second message parts and timing control data which causes the message data received at the receiving computer device to be rendered as a time sequence in which a user perceptible activation associated with the second message part is delayed for a first time period, controlled by the timing control data, after displaying the first viewable element of the first message part.
- A further aspect provides a messaging system comprising a generating computer device configured to generate message data defining a message, and a receiving computer device configured to receive the message data, the receiving computer device comprising processing circuitry configured to execute a computer program which, when executed, causes the receiving computer device to carry out the steps of the rendering method defined above.
- Another aspect provides a computer program product comprising computer executable instructions which, when executed by a processor, cause it to carry out the steps of any of the methods defined above.
- The computer executable instructions could be stored in transitory or non-transitory media.
- FIG. 1 is a schematic block diagram of two devices in communication over a network.
- FIG. 2 shows a first screen for displaying a first message part.
- FIGS. 3 and 4 show subsequent message parts being received.
- FIG. 5 is a timeline of sending and receiving a message in time separated parts.
- FIG. 6 illustrates a composition screen.
- FIGS. 7 and 8 illustrate further steps in composing message data.
- FIGS. 9, 10 and 11 show a sequence of message exchanges wherein background scenes may be iterated.
- FIG. 12 is a schematic block diagram showing the architecture of a computing device.
- FIGS. 13 and 14 are flow charts of exemplary embodiments.
- The present description describes a messaging system in which devices communicating over a network are provided with an “app” which breaks up messages so that a timeline of message parts and events can be perceived at a receiving device.
- The timeline allows animations or images such as moving 3D avatars, emojis, speech bubbles and pictures to be incorporated and presented as part of a sequence of time separated displayed message parts at the receiving device, rather than in a single presentation as is currently the case.
- Events may be other user perceptible activations, such as physical events (e.g. vibrations of a device) or audio events (e.g. sounds or songs).
- The new messaging app described herein therefore allows a more creative and engaging experience for the recipient of a message.
- The term ‘app’ is used herein in its conventional sense, as an abbreviation for an application which is installed on a computer device.
- Such an application may be provided in the form of a downloadable piece of software or code sequence installable on a processor of a device. It is also possible to implement the functionality of the messaging app by installing suitable software on the device other than via a downloadable application, for example as part of the operating system.
- The functionality could also be implemented in other ways, for example in firmware or dedicated hardware.
- FIG. 1 is a schematic block diagram of a messaging system showing a first user device 10 which in the following description acts as a generating device for generating a message.
- The message 12 is transmitted from the generating device 10 to a receiving device 14, the message comprising message data which causes a sequence of message parts to be provided to a user at the receiving device in accordance with a timeline.
- The message can be sent from the first device 10 to the second device 14 via any suitable communication means.
- A communication network 16 is shown connected to a server 3 and to the first and second devices. Any form of wired or wireless network can be utilised for transmission of the messages.
- Messages 12 from the generating device 10 may be sent to the server 3, where they are stored until the receiving device 14, to which the messages are addressed, is ready to receive them.
- The receiving device may poll the server for its messages periodically, or when it comes online after a period offline.
- Alternatively, the server may ‘push’ messages to the receiving device(s).
- Each device may be capable of operating both as a generating device and as a receiving device; however, in the following description it will be assumed that the first device 10 is operating as a generating device and is associated with a user who wishes to generate a message.
- The second device 14 is associated with a user who will be viewing a received message.
- The devices have displays 99 and 101 respectively, which enable the generating user to create or generate a message and the receiving user to view a received message.
- Note that a user ‘viewing’ a message implies that all or part of the message may be displayed. In some embodiments, part of a message may be perceived physically (haptically) or as audio. Operation of the messaging app from the perspective of the receiving user will be described first.
- FIG. 2 shows the display 101 of the receiving device which has received a complete message comprising a first part 102, a delimiter 103 and a second part.
- A ‘complete’ message is a message which is intended to be perceived by a user as a self-contained set of message parts which are perceived in a timed sequence and require no input from the receiving user, being self-timed and triggered based on message data within the message.
- A message may contain further parts and further delimiters between the further parts. Initially, only the first part 102 of the full sent message 12 is displayed in a message display area 109. It remains on the display by itself for a time period. If the delimiter 103 is a viewable element, as is the case in FIG. 2, it is displayed in the message display area.
- The total time period determined from the moment at which the viewable element is displayed to a next activation associated with the second message part can be considered a first time period.
- After the first time period, a next user perceptible activation is triggered, associated with a next part of the message. This could be a visualisation or another kind of effect.
- The delimiter determines a separation between the first part and the next part of the message. Delimiters are explained later, but their core function is to determine a break in the sent message so as to separate the first part from the next part.
- A message might comprise more than one delimiter, and have multiple message parts. One or more of the message parts might cause an event to be triggered on the receiving side. In some embodiments, a delimiter might itself trigger an event. An event is something which is perceptible to the user, and which is triggered by a triggering element.
- An event may be a visualisation such as an animation, image or change in expression of an avatar; a haptic event such as physical vibration or change in state of a device; or an audio event such as a song or voice recording.
- An event may have a time period associated with it and be displayed only for that time period or may be displayed continuously such that it is shown simultaneously with the next part of the message. For example, it could be an animated sequence shown repeatedly in a loop.
- Delimiters may be displayable elements which can be viewed at the receive side, or may be hidden from view on the receive side. FIG. 3 shows a particular example in which the delimiter 103 is visible and causes an event to be triggered.
- FIG. 3 shows the delimiter 103, which has the visual appearance of a four-leaf clover, in the message display area, and an event 104 triggered by the delimiter 103 in a different display area 111.
- The event is an image which corresponds to the visual appearance of the delimiter, i.e. a four-leaf clover.
- The event is displayed in its own area of the display, separately from the viewable element of the message.
- The position of an event may be preconfigured, or its placement may be defined by event data.
- The sending user, referred to as the first user, is represented on the display 101 as a 3D avatar 105, and the receiving, or second, user as a 3D avatar 106.
- The avatar heads may be moving or stationary.
- The sender's head may be sent in the message as part of the background, as discussed later.
- FIG. 4 shows the display 101 after the further time period, displaying the viewable element ‘awesome’ of the second part 201 of the message 12, so that the full sent message is now displayed in the message display area 109.
- FIG. 5 shows a timeline of events for displaying the message at the receiving device 14.
- The receiving device 14 has a complete message 12 ready for presentation to a user.
- The complete message is not all rendered immediately.
- The full message 12 comprises a first part 102, a delimiter 103 and a second part 201.
- The receiving device 14 processes the message, as described more fully later, to determine the timeline for presenting the message.
- The viewable component (Part 1) of the first part 102 of the message is displayed by itself on the user display 101 from time t1 to time t2, in this case for 3 s, and the event 104 associated with the delimiter 103 is displayed at time t2 until time t3, that is, for 5 s.
- The event is referred to as ‘Animation 1’, although in the case of FIG. 2 it is a static image.
- The second part 201 of the message is activated at time t3, 10 s after the user received the full message 12, while the viewable component remains on the screen.
- A background or any stickers may be displayed prior to the viewable component of the first part 102 of the message, so before time t1, or at the same time as the viewable component at time t1. The background remains on the screen for the entire duration of the complete message 12.
- The timeline is controlled by the message, as described later.
- The second user may reply with a message of his own. For example, the second user creates a full message 13 (FIG. 1) which is received at the generating device 10 of the first user at time t4.
- The message contains first and second parts and two delimiters 103.
- It is broken into a first part 311 with a viewable component (Part 1′), which appears on the first user's display 99 at time t5 for 5 s, followed by an associated event 312 (Animation 2) at time t6, and a viewable component (Part 2′) of the second part 313, which is displayed 3 s later at time t7 for 5 s before being followed by its associated event 314 (Animation 3) at time t8.
- In this example the delimiters themselves are not shown, nor do they trigger events. Events are instead triggered by the first and/or second parts of the message respectively. An event may be shown simultaneously with the part of the message that triggers it, or a short time after it, as controlled by the message. The conversation may continue in this manner.
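The self-timed presentation of FIG. 5 can be sketched as a cumulative schedule built from per-step durations. Times here are relative to when the first element is displayed; the schedule builder itself is an illustrative assumption, not code from the patent.

```python
# Build a presentation schedule from (label, duration) steps: each
# step activates when the previous one ends, as in the FIG. 5
# timeline where Part 1 is shown for 3 s and Animation 1 then runs
# for 5 s before Part 2 activates. Illustrative sketch only.

def build_schedule(steps: list[tuple[str, float]],
                   start: float = 0.0) -> list[tuple[str, float]]:
    schedule, t = [], start
    for label, duration in steps:
        schedule.append((label, t))  # activation time of this step
        t += duration
    return schedule

timeline = build_schedule([("Part 1", 3.0), ("Animation 1", 5.0), ("Part 2", 0.0)])
```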
- FIG. 6 shows a composition screen 203 on the first user's device 10 .
- When a user seeks to compose a message, he goes into the message composition screen 203.
- In this example, user 106 is sending the message.
- On this screen he can see a 3D avatar image 106.
- Here it is an avatar of himself, but it may be a ‘generic’ avatar.
- This image may have been created by using an image of his head and “decorating” it with add-on parts.
- The 3D avatar can change its orientation or facial expression, and these changes can constitute part of an event or background. While avatars are illustrated and described, it will be appreciated that they are not an essential feature. It is not necessary for any visual indication of the sender or receiver of a message to be displayed.
- A touchscreen keyboard 250, which constitutes an input component, can be used by the user to enter text to form a message.
- Alternatively, any suitable input component could be utilised, such as a separate keyboard, mouse or voice activation.
- FIG. 6 shows a first part of the message as entered in text by the user:
- The delimiter 103a is in the form of an emoji with an unhappy expression.
- This emoji will trigger an event at the receiving device.
- The event which will be triggered is associated with the emoji 103a, and could be recalled from a library or created by the user, as described in more detail later.
- In this example, the event triggered by the emoji 103a is a change in the facial expression of the avatar to unhappy.
- FIG. 7 shows the next stage of the message composition.
- The user creates a background 401, which can show certain scenery, images etc., and a moving image, in this case cartoon hands stickers 402.
- The hands can be animated to move as part of the background 401.
- The background data may be included in the first message part data sent to the receiving device.
- Here, the smiling emoji has been associated with a different event to be triggered, in this case a change in the facial expression of the avatar to smiling.
- The first and second parts of the message 12, and the two delimiters 103a and 103b, are displayed. Note that the background will be displayed at the receiving device when this message is displayed.
- The composed message is sent to the receiving device, and the user 105 receives the message (in time separated parts) and replies with a message of his own:
- The length of a delay may be derived from the length of the presented word.
- This time period may be extended by causing the space between words in the message part to act as a delimiter associated with a predetermined time delay period.
- The next word is then presented after the total delay, which is a combination of the delay associated with the length of the preceding word and that of the space between the two words.
- A ‘space’ delimiter could be instigated by activation of a space bar key on an input device.
- FIG. 8 shows the user display 99 at time t 8 after receipt of the message from the second user 105 .
- The 3D avatars 105 and 106 can be seen, along with the background 401 chosen by the user and the hand stickers 402.
- This message 13 contains two delimiters: a first one 103c (unhappy emoji) after the first part 311 of the message, and a second one 103d (smiling emoji) at the end of the message.
- The delimiter 103c is associated with an event which is not shown in FIG. 8.
- The event would be the avatar head of user 105 exhibiting an unhappy expression.
- The delimiter 103d is associated with the illustrated event 104a, the avatar 105 smiling.
- The second user may choose to iterate a scene sent by the first user to create the next scene in the timeline. He may choose to keep the background selected by the first user, or choose another background from the available options.
- The stickers sent by the first user may be removed, moved, rotated, resized, or moved in front of or behind other objects. Additional stickers may also be added.
- The avatars of both users may also be moved, rotated or resized as desired.
- Alternatively, the second user may choose to start with a blank scene, in which case the background and all the stickers sent by the first user are automatically removed, with only the two avatars shown.
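Scene iteration as described above can be sketched as a pure function over a scene structure: keep or replace the background, and remove or add stickers. The dictionary layout and parameter names are illustrative assumptions.

```python
# Iterate a received scene: keep the sender's background or replace
# it, and remove/add stickers, producing the next scene in the
# timeline. Scene layout is an illustrative assumption.

def iterate_scene(received: dict, *, background=None,
                  remove: set = frozenset(), add: tuple = ()) -> dict:
    stickers = [s for s in received["stickers"] if s not in remove]
    stickers.extend(add)
    return {"background": background or received["background"],
            "stickers": stickers}

scene1 = {"background": "beach", "stickers": ["hands", "star"]}
scene2 = iterate_scene(scene1, background="mountains",
                       remove={"star"}, add=("sun", "bird"))
```

Starting from a blank scene would simply be `{"background": None, "stickers": []}` with only the avatars shown.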
- FIG. 9 shows user display 101 with a first scene 501 received from the first user 107 .
- The sending user has selected the background 401a and stickers 402a, 402b.
- The second user 108 may then choose to generate a responding scene which is sent to the first user 107.
- FIG. 10 shows the responding scene 502 on user device 99, sent by the second user 108.
- Here, the second user has changed the background 401b, kept one sticker 402a, deleted one sticker 402b, and added two stickers 402c, 402d.
- A series of iterated scenes can be created in this way.
- One user can send multiple consecutive scenes if desired.
- The users can select and replay any scenes which have been sent. These scenes can be viewed either in full-screen mode (FIG. 9, FIG. 10) or on a timeline view screen (FIG. 11).
- FIG. 11 shows the timeline view screen 601 on user display 99 .
- A fourth scene 503 is being replayed on the screen, having been chosen by the user from a timeline 602 using an arrow 603.
- The user may scroll through the timeline 602 to choose a scene to replay, with the arrow 603 indicating the chosen scene.
- The 3D avatars 107 and 108 have both been moved from their locations in the previous scenes, and avatar 108 has been rotated.
- Thumbnails of scenes 501, 502 and 503, along with the other scenes sent and received by the user, are shown in the timeline 602.
- A green arrow 606 next to a thumbnail indicates a received scene, and a yellow arrow 607 a sent scene.
- An iterate scene button 604 is shown, which the user presses in order to have the last sent scene displayed for him to iterate.
- A blank scene button 605 is also shown, which takes the user to a blank scene showing only the 3D avatars 107 and 108, with all previous backgrounds 401 and stickers 402 removed.
- Delimiters can be any entry available to a user at the input component, such as the touchscreen keypad 205 , including single letters, words, emojis, paragraph keys, punctuation marks etc.
- Some delimiters act to separate the parts of message from each other so that a message can be displayed as a sequence of time separated parts. Some delimiters additionally have a specific event 104 associated with them in the message 12 . Multiple delimiters can be place in each message, and can be any combination of either viewable and/or non-viewable elements.
- a message part may include a viewable element in the form of an emoji, and an additional non-viewable delimiter which triggers an event.
- Message parts may include time control elements which are not visible. For example a paragraph key would act to extend the time period for which a message part is displayed beyond that associated with the visualisation of the message part.
- Message parts may themselves comprise viewable elements or non-viewable elements which also can trigger events.
- certain words or characters in a message part may trigger certain events. That is, a triggering component for an event may be any part of the message, including the delimiter.
- Triggering components may have identifiers associated with the same event in all devices, by installation of a common library.
- a delimiter from the common library may be a smiling emoji, which causes the 3D avatar 105 of the sender to smile.
- a phrase ‘happy new year’ in a message part may cause fireworks to appear as an animation event on the display.
- triggering components may be personalised to individuals by associating them with an event specific to a user identifier, or to sessions by associating them with an event specific to a session, or by a combination of these two methods, for example in a group chat.
- a triggering component specific to an individual chat could be the word ‘dog’ which then displays a picture of the receiving user's dog on the display 101 .
- Triggering components may also trigger an uninterrupted sequence of animations or other activations (e.g. haptic or audio).
- An events library 87 , 89 which associates triggering component identifiers with events may be provided on the generate and/or receive side.
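A minimal sketch of how an events library such as 87, 89 might associate triggering component identifiers with events, with hypothetical common, personalised and session-specific tables. All names, keys and event identifiers here are illustrative assumptions, not taken from the disclosure:

```python
# Hypothetical events library: a common table plus per-user and per-session
# overrides, queried in order of specificity.
COMMON_EVENTS = {"😊": "avatar_smile", "happy new year": "fireworks_animation"}
USER_EVENTS = {("user_42", "dog"): "show_dog_picture"}       # personalised
SESSION_EVENTS = {("session_7", "dog"): "show_dog_picture"}  # session-specific

def lookup_event(trigger_id, user_id=None, session_id=None):
    """Resolve a triggering component to an event: session-specific first,
    then personalised, then the common library; None if no event is found."""
    if session_id and (session_id, trigger_id) in SESSION_EVENTS:
        return SESSION_EVENTS[(session_id, trigger_id)]
    if user_id and (user_id, trigger_id) in USER_EVENTS:
        return USER_EVENTS[(user_id, trigger_id)]
    return COMMON_EVENTS.get(trigger_id)

print(lookup_event("dog", user_id="user_42"))  # 'show_dog_picture'
print(lookup_event("😊"))                      # 'avatar_smile'
```

The same lookup can serve both sides: on the generate side when a triggering component is inserted, and on the receive side when the event is to be presented.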
- Events to be triggered can also be defined by the user when he composes a message. In that case, when he enters the triggering component into the message, he creates an event (e.g. an animation) which forms part of the message data to be transmitted with the message.
- When the triggering component is detected on the receive side (that is, when it is the appropriate time to act on the triggering component), the event (e.g. an animation) which was composed on the generate side is presented on the receive side. This is an alternative to recalling an event from a library on the receive side.
- a sender may send a triggering component in a message for accessing an event in the library, or send an update to the event library along with a triggering component.
- the timed sequence of presentation of the message (displayed parts and/or events) is determined when the message is generated.
- the timeline itself can be determined when the message is generated at the generating device, or at the receiving device 14. That is, the receiving device 14 may determine the timeline for presenting the message as a sequence of parts, either based on its own parsing of the message or based on timing control data inserted in the message on the generating side.
- the first viewable element in the first part of the message is displayed for a time period which can be governed by the length of text in the first viewable element, or by some default setting on the receiving device.
- the time period for display could be based on the number of characters in the first part of the message. For instance, a shorter word like “hi” may have a short time period, for example 2 seconds, whereas a longer phrase “I like you” may be associated with a longer time period, perhaps 4 seconds.
- the time period could be directly related to the number of characters in the part of the message to be displayed, or could be pre-configured and associated with particular words or phrases to be displayed.
- Such timing data could be held at a local library 92 in the device.
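One possible implementation of such length-based timing, sketched under the assumption that the local library 92 holds pre-configured per-phrase durations with a character-count fallback. The constants are illustrative only; the disclosure does not fix them:

```python
# Sketch: derive a display period for a message part. Pre-configured values
# (as in a local timing control library) take precedence; otherwise the
# period is proportional to the number of characters. All constants assumed.
PHRASE_OVERRIDES = {"hi": 2.0, "I like you": 4.0}  # seconds, pre-configured

def display_period(part, base=1.0, per_char=0.2, minimum=2.0):
    """Seconds for which a part is displayed: a pre-configured per-phrase
    value if one exists, otherwise a function of the part's length."""
    if part in PHRASE_OVERRIDES:
        return PHRASE_OVERRIDES[part]
    return max(minimum, base + per_char * len(part))

print(display_period("hi"))          # 2.0 (pre-configured short word)
print(display_period("I like you"))  # 4.0 (pre-configured longer phrase)
```

An update message could simply replace entries in `PHRASE_OVERRIDES` to personalise timing per user or per session.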
- the delimiter itself may also be associated with a particular time period whether or not the delimiter is displayed. After expiry of the time period, any event associated with the delimiter may be displayed for a duration of a further time period which could be curtailed or continuous. After the whole first time period has lapsed, an activation associated with the second message part may be presented with the first viewable element (and the event in some cases) so that the entire message is now presented.
- the activation could be a second viewable element in the received message part with or without a triggered event, or an event triggered by a non-viewable element in the second message part.
- the first viewable element, the delimiter and the second message part define the timeline at the receiving device, because the receiving device has some embedded information in a parsing component for the message which controls how it is displayed. Note that the event could be shown simultaneously with a message part, or triggered by the delimiter.
- the delimiter itself could be viewable and displayed with the first viewable element, in between the first and second message parts or with the activation of the second message part. It could be displayed at the time at which the event is triggered. Alternatively, the delimiter may itself not be visible (for example it could be a paragraph key which was inserted into the message), but nevertheless it triggers an event.
- the event which is triggered by the triggering component can be accessed from a local events library 87 , 89 or form part of the message itself.
- it may be a common event which is always associated with that triggering component, or a personalised or session specific event associated with that triggering component, as described above.
- Timing control data can define the amount of time for which the first part of the message is displayed, how long any triggered event is displayed for and how long the second activation (the whole message) lasts for.
- This timing control data can be inserted as specific time periods entered by a user when he creates a message.
- a time management component 90 of the message app on the receive side can read the timing control data associated with each part of the message or with the delimiter and control the display period accordingly.
- Time management information may be pre-configured on the receiving device so that the receiving device can manage the timeline as described in accordance with the first way.
- the generating device could generate a timing update for a particular user or a particular session which would override the pre-configured settings on the receiving device for controlling the display of that particular message.
- FIG. 12 shows a schematic block diagram of the architecture of a computer device suitable to act as a generating device and a receiving device.
- the device 10 comprises the display 99 , an input component 97 and a network interface 95 .
- Where the input component is a touchscreen keypad, it enables a user to generate a message by interacting with the touchscreen keypad 205 to select characters, emojis, stickers, et cetera.
- Other input components may be utilised as described earlier. Such input components are known and therefore will not be described further.
- the network interface 95 enables the device to communicate and transmit and receive messages. Once again, such interfaces are known in the art and will not be described further.
- the device also comprises a processor 93 on which is installed a messaging app as has been described above in the form of executable computer code.
- the messaging app 91 can communicate with the input component for the purpose of formulating a message, and with the display 99 for displaying a received message.
- the app 91 has access to an events library 89 for the purpose of accessing events to be associated with trigger components which are inserted into the message. These events could be accessed automatically when a delimiter is inserted into a message from the events library.
- the events library could incorporate a common library 87 as described earlier and/or a personalised library. Note that the events library can operate when a triggering component is included in a message on the generating side, or when an event is accessed to be displayed on the display when a message is received.
- the word “dog” introduced into a message on the generating side could cause a picture of the receiving user's dog to be animated or displayed at the receiving user's side.
- the word “dog” could be associated with the sending user's dog by virtue of an associated session ID.
- the app includes a timing control component 90 which operates as described above.
- the timing control component 90 can cooperate with the timing control library 92 which can hold pre-configured settings indicating a time period for which certain characters or character combinations or words should be displayed.
- Such preconfigured timing control data can be used at the generating side to formulate the message, or on the receiving side to display the message as already described.
- the timing control data 92 may be updated by an update message which could be session or person-specific.
- Avatars may be generated in any known way, and their facial expressions may be modified as known in the art. According to embodiments of the present invention, the modification is triggered in a different way, that is by a triggering component in the message data of a message. Nevertheless, once triggered, modifications to the expressions of the avatars may be handled in a manner that is known in the art and will therefore not be described further herein.
- the flow chart of FIG. 13 illustrates steps taken at a receiving device which has received a message ready for display.
- the message may have been received from the server 3 by a pull or push mechanism and be prepared for presentation to the user immediately after receipt. Alternatively, the message may be received and buffered at the receiving device until such time as the user wishes to view it.
- Step S 1 denotes the start of a process to present a message to a user, either automatically or through user selection.
- At step S 2 the message data is parsed until a delimiter is detected. Once the delimiter is detected, the process proceeds to step S 3 where a delay time period associated with the parsed message data is determined for controlling activation of the message.
- At step S 4 any viewable component in the message data parsed thus far is displayed. If the parsed message data is the first message part, it will contain a viewable component for display. If it is a subsequent message part it may or may not contain any viewable component.
- At step S 5 the process determines whether the parsed message data contains a trigger. If it does, the triggered event is generated in step S 6. Then the process returns to S 2 to parse the next section of the message data. If the message data does not contain a trigger, no further action is taken with this message data, and the process proceeds directly from S 5 to S 2 to parse the next section of the message data. Note that the delay time period determined at S 3 is used to control the activation of the next message part. The next message part is the next section of parsed message data up to the next delimiter that is detected.
- Step S 7 determines whether an end condition to end the process has been met. For example, an end condition might be that there is no more data to parse at step S 2 , or that an ‘end of message’ indicator has been detected. If so, the process ends at S 8 .
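The S 1 to S 8 loop of FIG. 13 might be sketched as follows, assuming the message data has already been split into sections at its delimiters and that each section carries optional `viewable`, `delay` and `trigger` fields. This representation, and the default delay, are assumptions made for the sketch:

```python
# Sketch of the FIG. 13 receive-side loop. The display, event and wait
# behaviours are injected so the loop itself stays testable.
def present_message(sections, display, fire_event, wait):
    for section in sections:                 # S 2: parse up to next delimiter
        delay = section.get("delay", 2.0)    # S 3: delay for next activation
        if section.get("viewable"):          # S 4: show any viewable component
            display(section["viewable"])
        if section.get("trigger"):           # S 5/S 6: generate triggered event
            fire_event(section["trigger"])
        wait(delay)                          # delay controls the next message part
    # S 7/S 8: the process ends when there is no more message data to parse

shown, events, delays = [], [], []
present_message(
    [{"viewable": "hi", "delay": 2.0},
     {"viewable": "happy new year", "trigger": "fireworks_animation"}],
    shown.append, events.append, delays.append)
print(shown, events)  # ['hi', 'happy new year'] ['fireworks_animation']
```

In a real app, `display` would write to the message display area 109, `fire_event` would consult the events library, and `wait` would schedule the next activation.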
- FIG. 14 shows a flow chart of another example embodiment. It will be appreciated that the order of steps may be altered. For example, steps S 4 and S 6 could happen together, or S 5 and S 6 could occur before S 4. In this case, a triggered event may be detected before the delay time period is determined. Any delay associated with the triggered event forms part of the delay time period. Step S 10 denotes the start of a process to present a message to the user.
- At step S 11 the message data is parsed until a delimiter is detected.
- Step S 12 determines if the parsed message data contains a trigger. If the parsed message data does not contain a trigger, the process proceeds to step S 13, where the time delay period associated with the parsed message data is determined. Any viewable component of the parsed message data is displayed in step S 14. If the parsed message data is the first message part, it will contain a viewable component for display. If it is a subsequent message part it may or may not contain any viewable component. Then the process returns to step S 11 and continues to parse the next section of the message data until the next delimiter is detected.
- If at step S 12 the system determines that the parsed message data does contain a trigger, the process proceeds to step S 15, where the event associated with the trigger is determined.
- The time delay period associated with the parsed message data, including any delay associated with the triggered event, is determined at step S 16.
- At step S 17 any viewable component in the message data parsed thus far is displayed. If the parsed message data is the first message part, it will contain a viewable component for display. If it is a subsequent message part it may or may not contain any viewable component.
- The triggered event is generated in step S 18.
- The process then returns to step S 11 to parse the next section of the message data. Note that the delay time periods determined at S 13 and S 16 are used to control the activation of the next message part.
- The next message part is the next section of parsed message data up to the next delimiter that is detected.
- Step S 19 determines whether an end condition is met and, if so, the process ends at S 20. It will be appreciated that the order of the steps may be altered. For example, S 18 could occur before or simultaneously with S 17.
- the first message part may include the detected delimiter, such that the delimiter, if it is a viewable element, is displayed at the same time as the viewable elements in the preceding message data.
- the message parts may exclude the detected delimiter, such that the delimiter, if a viewable element, is displayed at a time after the time delay period of the parsed message data.
- all of the message data may be parsed on receipt of the message at the receiving device before any components are displayed.
- the viewable elements of the message data are determined, along with the time delays associated with each of them. Any events which may be triggered by components of the message data and their associated time delays are also determined.
- the system creates a timeline of viewable elements and events to be displayed or activated on the user device, with the times at which each component occurs being the sum of the time delays associated with the preceding components of the message data. Once the timeline has been devised, it is played out on the user device.
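The parse-ahead variant described above, in which each component's start time is the sum of the delays of all preceding components, might be sketched as follows. The section and field names are illustrative assumptions:

```python
# Sketch: pre-parse a whole message into a timeline of (time, kind, payload)
# entries before anything is displayed, then play the timeline out.
def build_timeline(sections):
    timeline, t = [], 0.0
    for section in sections:
        if section.get("viewable"):
            timeline.append((t, "display", section["viewable"]))
        if section.get("trigger"):
            timeline.append((t, "event", section["trigger"]))
        t += section.get("delay", 2.0)  # next component starts after this delay
    return timeline

tl = build_timeline([{"viewable": "hi", "delay": 2.0},
                     {"viewable": "I like you", "delay": 4.0,
                      "trigger": "heart_animation"}])
print(tl)
# [(0.0, 'display', 'hi'), (2.0, 'display', 'I like you'),
#  (2.0, 'event', 'heart_animation')]
```

Playing the timeline out is then a matter of scheduling each entry at its computed time on the user device.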
- Steps S 1 and S 2, and S 10 and S 11, must begin their respective processes, but the order of subsequent steps may be altered.
- For example, steps S 4 and S 6 may be performed in parallel, or step S 18 may be performed before S 17.
- The exception to this is that, where a step exists to determine whether a trigger has been parsed (S 5 and S 12), any steps which require a trigger to be present in the parsed message data (S 6, S 15 and S 18) must follow it.
Description
- The present disclosure relates to rendering and generating message data to be exchanged between transmitting and receiving devices.
- Messaging systems are widely available, enabling computer devices such as smartphones, tablets and other forms of computer devices to exchange messages via a communication network. Generally, one or more message servers are provided which perform the function of receiving, storing and transmitting messages between computer devices. There are now available many different ‘apps’ for enabling communication between users of computer devices in such messaging systems. Communication networks can be enabled in a large number of different ways, including wired and wireless. Wireless networks operate using telecommunications protocols, or short-range protocols such as Wi-Fi or Bluetooth. To use these apps, a sending user composes a message at his device, and identifies one or more other users to receive the message. When he implements a send function (for example by pushing an icon marked send on his device), the device generates message data, which may involve adding headers and trailers to the message that has been composed by the user, the header including an address or addresses of the receiving users. The message is transmitted (it may be broken up into packets for transmission) over the communications network, via the one or more servers, to the receiving user.
- It is commonplace to communicate using such messages. They have evolved from so-called SMS messages to more sophisticated messages which may be sent with animations and images. For example, video messaging is becoming increasingly common.
- While being more engaging, video messaging is expensive in terms of the bandwidth that it uses up, and there may be situations where bandwidth is constrained and the exchange of video messages becomes frustrating.
- In other scenarios, a person may not wish to send a video message, but nevertheless may wish to make his message entertaining.
- The present inventors have developed a messaging system and method which addresses these issues. The system may comprise a message generating device and a message receiving device in communication via a communication network, the devices as defined in the following. According to one aspect there is provided a method of rendering message data of a message received at a receiving computer device, the method comprising:
- detecting a first viewable element in a first part of the message data;
- detecting a delimiter in the message data;
- determining a first time period for delaying activation of a second part of the message;
- displaying a first part of the message defined by the delimiter in the message data, the first part comprising the first viewable element; and
- after expiry of the first time period generating a user perceptible activation associated with the second part of the message, to be perceived by a user at the receiving computer device while the first part of the message is displayed.
- The message data may comprise at least one triggering component, the method comprising detecting the triggering component and triggering an event associated with the triggering component.
- The user perceptible activation may be a visualisation, a haptic event or an audio event.
- The visualisation associated with the second part of the message may be a second viewable element in the second part of the message, or it may be a visual event which is triggered by a viewable or a non-viewable element in the second part of the message.
- First and second viewable elements of the message may be displayed in a message display area, whereas the events which are triggered by the triggering components may be displayed in a separate area, which could typically be larger than the message display area.
- The first part of the message may comprise an additional non-viewable element which can cause the first time period to be extended. For example, the first viewable element may be associated with a time period sufficient for certain people to view it, but there may be situations where the sender realises that the person who will be viewing the message will need extra time, and therefore an additional time control element may be inserted.
- Timing control may be determined at the receiving device or at a generating device where the message is generated by a transmitting user. In one embodiment the first time period is determined by timing control data forming part of the message data in the message received at the receiving computer device. This timing control data may be inserted by the user who has generated the message at the generating device. Alternatively, the timing control data may be inserted by the generating device itself responsive to the element selected by the user to compose the message.
- In an alternative embodiment, the first time period may be determined by parsing the message at the receiving computing device, wherein the first time period is defined by the parsed message data. In this embodiment, there is no explicit timing control data in the received message. The message parts control the timing, using timing control information which is stored locally at the receive side and which is associated with the delimiter or other elements in the message data. In one embodiment, the first time period is defined by the parsed message data by deriving it from elements of the parsed message data, for example by looking up associated delays in a timing control library.
- The delimiter may form part of the first message part, or may follow the first message part. The delimiter may be associated with a delimiter time period which forms all or part of the first time period.
- The triggering component may be provided in the first message part, by the delimiter, or in the second message part. There may be multiple delimiters, one or more of which could constitute a triggering component. Only one of the elements in the message may provide the triggering component, or some or all of the elements in the message may comprise a triggering component.
- A visual event which may be triggered could be an animation or an image or any other kind of visualisation. One type of event which can be triggered is a modification to a three-dimensional avatar which is displayed at the receiving computer device. The avatar could be a generic avatar, or it could be an avatar which represents the sender of the message. Alternatively, it could be an avatar representing the receiving user.
- Event data for generating an event may be sent as part of the message data in the message received at the receiving computing device. Alternatively, events can be triggered at the receiving device without being sent as part of the message data. To achieve this, the event is stored at the receiving computer device in association with a triggering component identifier which identifies the triggering component to trigger the event. When a triggering component is detected in a message, the triggering component identifier associated with that message is determined and is used to access a local store which holds events in association with triggering component identifiers. In this way, the event to be triggered may be determined.
- Events triggered by triggering components may be generic, or they may be made specific to particular sessions. The term session is used herein to denote an open communication pathway between sender and receiver as is known in the art. A session exists between two endpoints. In group chat, multiple sessions may exist from a sending user to multiple receiving users. Each session would have a session identifier associated with it. Events may be stored in association with session identifiers and triggering component identifiers, whereby an event which is triggered can be uniquely determined for that session. This feature also allows events to be uniquely associated with triggering component identifiers for particular receivers. That is, a sender could send a message to multiple receivers. These receivers could see the message differently because different events would be triggered at their own devices based on the events stored at their local device associated with the triggering component identifier. That is, multiple events may be associated with respective multiple session identifiers for the same triggering component identifier.
- Receiving computer devices may receive update messages which comprise event data for generating an event with an associated triggering component identifier, such that a local store of the receiving device can be updated, with the triggering component identifying its associated event. This allows sending users to control what the receiving user will see when they open a message from that device which has sent the message. Alternatively, update messages could be sent from some central server to change events associated with particular triggering components in a more generic fashion, perhaps amongst a social network of users.
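A sketch of how such an update message might be applied to a receiving device's local store, assuming a store keyed by session identifier and triggering component identifier. The store layout and message fields are assumptions made for illustration:

```python
# Sketch: apply an update message (event data plus triggering component
# identifier, optionally scoped to a session) to the local event store.
local_store = {}  # (session_id or None, trigger_id) -> event data

def apply_update(update):
    """Store the event under its trigger identifier; a missing session_id
    makes the entry generic rather than session-specific."""
    key = (update.get("session_id"), update["trigger_id"])
    local_store[key] = update["event"]

apply_update({"session_id": "s1", "trigger_id": "dog",
              "event": "animate_senders_dog"})
apply_update({"trigger_id": "😊", "event": "avatar_smile"})  # generic update
print(local_store[("s1", "dog")])  # 'animate_senders_dog'
print(local_store[(None, "😊")])   # 'avatar_smile'
```

Because the key includes the session identifier, different receivers (or different sessions with the same receiver) can hold different events for the same triggering component identifier.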
- The viewable element in the message data could be humanly readable text or icons or any other selectable characters. The triggering component can take the form of an icon having a visual appearance associated with the visual appearance of the triggered event. This is a particularly intuitive and interesting way for a user to receive a message. That is, they see an icon in a message display part on the screen of their device, and an associated animation or image is shown to them in the larger area of their screen separate from the message display area. This provides for a particularly engaging message. One example of a haptic event might be vibration of the receiving device while a beating heart is shown. One example of an audio event might be bird song while an image of a bird is shown.
- Another aspect of the disclosure provides a method of generating message data at a generating computer device for transmission to a receiving computer device, the method comprising:
- receiving at least one first viewable element selected by a user at an input component of the generating computer device, the first element to be viewed in a first part of the message;
- receiving a second part of the message input by the user; and
- composing message data comprising the first and second message parts and timing control data which causes the message data received at the receiving computer device to be rendered as a time sequence in which a user perceptible activation associated with the second message part is delayed for a first time period controlled by the timing control data after displaying the first viewable element of the first message part.
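The composing step above can be sketched under the assumption of a simple dictionary wire format, which this disclosure does not prescribe; the field names and time units are illustrative:

```python
# Sketch of the generate-side composition step: assemble message data from
# the user's parts and insert timing control data so the receiver renders
# the parts as a time sequence. Wire format is an assumption.
def compose_message(first_viewable, second_part, first_time_period):
    return {
        "parts": [
            {"viewable": first_viewable},  # first message part
            second_part,                   # viewable element and/or trigger
        ],
        "timing": {"first_time_period": first_time_period},  # seconds
    }

msg = compose_message("hi", {"trigger": "fireworks_animation"}, 2.0)
print(msg["timing"]["first_time_period"])  # 2.0
```

On the receive side, the timing control data would override any locally derived delay for the first part, as described for the timing control component 90.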
- The step of composing the message data may comprise including identification of an event to be triggered by a triggering component of the message.
- The user perceptible event may be a second viewable element selected by the user in the second message part. Alternatively, it may be an event (visual, haptic or audio) which is triggered by a non-viewable triggering component element in the second message part.
- The method of generating message data can comprise receiving a delimiter selected by the user, the delimiter defining a separation between the first and second message parts.
- While the message data is being generated, a modified version of a 3D avatar may be displayed, wherein the modification is in accordance with an event to be triggered. This avatar may be displayed while composing the message for transmission to make message composition more interesting.
- Another aspect provides a computer device configured to receive a message comprising message data, the computer device comprising:
- processing circuitry configured to execute a computer program which, when executed, causes the processing circuitry to carry out the steps of:
- detecting a first viewable element in a first part of the message data;
- detecting a delimiter in the message data;
- determining a first time period for delaying activation of a second part of the message;
- displaying a first part of the message on a display of the computer device, the first part of the message defined by the delimiter in the message data, the first part comprising the first viewable element;
- after expiry of the first time period generating a user perceptible activation associated with the second part of the message, to be perceived by a user at the receiving computing device while the first part of the message is displayed.
- Another aspect provides a computer device comprising processing circuitry which is configured to execute a computer program which, when executed by the processing circuitry, causes the computer device to carry out the steps of:
- receiving at least one first viewable element selected by a user at an input component of the computer device, the first viewable element to be viewed in a first part of the message;
- receiving a second part of the message input by the user;
- composing message data comprising the first and second message parts and timing control data which causes the message data received at the receiving computing device to be rendered as a time sequence in which a user perceptible activation associated with the second message part is delayed for a first time period controlled by the timing control data after displaying the first viewable element of the first message part.
- Another aspect provides a messaging system comprising a generating computer device configured to generate message data defining a message, and a receiving computer device configured to receive the message data, the receiving computer device comprising processing circuitry configured to execute a computer program which, when executed, causes the receiving computing device to carry out the steps of:
- detecting a first viewable element in a first part of the message data;
- detecting a delimiter in the message data;
- determining a first time period for delaying activation of a second part of the message;
- displaying a first part of the message defined by the delimiter in the message data, the first part comprising the first viewable element; and
- after expiry of the first time period generating a user perceptible activation associated with the second part of the message, to be perceived by a user at the receiving computer device while the first part of the message is displayed.
- Another aspect provides a computer program product comprising computer executable instructions which, when executed by a processor, cause it to carry out the steps of any of the methods defined above. The computer executable instructions could be stored in transitory or non-transitory media.
- For a better understanding of the present invention and to show how the same may be carried into effect, reference will now be made to the following drawings wherein:
- FIG. 1 is a schematic block diagram of two devices in communication over a network;
- FIG. 2 shows a first screen for displaying a first message part;
- FIGS. 3 and 4 show subsequent message parts being received;
- FIG. 5 is a timeline of sending and receiving a message in time separated parts;
- FIG. 6 illustrates a composition screen;
- FIGS. 7 and 8 illustrate further steps in composing message data;
- FIGS. 9, 10 and 11 show a sequence of message exchanges wherein background scenes may be iterated;
- FIG. 12 is a schematic block diagram showing the architecture of a computing device; and
- FIGS. 13 and 14 are flow charts of exemplary embodiments.
- The present description describes a messaging system in which devices which communicate over a network are provided with an “app” which breaks up messages to enable a timeline of message parts and events to be perceived on a receiving device. The timeline allows animations or images such as moving 3D avatars, emojis, speech bubbles, and pictures to be incorporated and to be presented as part of a sequence of time separated displayed message parts at the receiving device rather than in a single presentation as is currently the case. Events may be other user perceptible activations, such as physical (e.g. vibrations of a device) or audio (e.g. sounds or songs).
- The new messaging app described herein therefore allows for a more creative and engaging experience for the recipient of a message.
- The term “app” used herein is used in its conventional form as an abbreviation for an application which is installed on a computer device. Such an application may be provided in the form of a downloadable piece of software or code sequence installable on a processor of a device. It is possible to implement the functionality of the messaging app by installing suitable software on the device other than by a downloadable application, for example as part of the operating system. The functionality could be implemented in other ways, for example in firmware or dedicated hardware.
- FIG. 1 is a schematic block diagram of a messaging system showing a first user device 10 which in the following description acts as a generating device for generating a message. The message 12 is transmitted from the generating device 10 to a receiving device 14, the message comprising message data which causes a sequence of message parts to be provided to a user at the receiving device in accordance with a timeline. The message can be sent from the first device 10 to the second device 14 via any suitable communication means. A communication network 16 is shown connected to a server 3 and the first and second devices. Any form of wired or wireless network can be utilised for transmission of the messages. In one exemplary messaging system, messages 12 from the generating device 10 may be sent to the server 3, where they are stored until the receiving device 14, to which the messages are addressed, is ready to receive them. For example, the receiving device may poll the server for its messages periodically, or when it comes online after a period offline. Alternatively, the server may ‘push’ messages to the receiving device(s). There may be multiple devices involved, with the single generating device 10 sending a message to multiple receiving devices. Messages which are being sent from one device to another device are addressed in accordance with methods known in the art and which will not be discussed further herein. Each device is associated with a user. It will be appreciated that each device may be capable of operating as a generating device and as a receiving device; however, in the following description it will be assumed that the first device 10 is operating as a generating device and is associated with a user who wishes to generate a message. The second device 14 is associated with a user who will be viewing a received message. Each device has a display.
FIG. 2 shows the display 101 of the receiving device which has received a complete message comprising a first part 102, a delimiter 103 and a second part. A ‘complete’ message is a message which is intended to be perceived by a user as a self-contained set of message parts which are perceived in a timed sequence and require no input from the receiving user, but which are self-timed and triggered based on message data within the message. A message may contain further parts and further delimiters between the further parts. Initially, only the first part 102 of the full sent message 12 is displayed in a message display area 109. It remains on the display by itself for a time period. If the delimiter 103 is a viewable element, as is the case in FIG. 3, it is shown after the time period of the first part 102 has lapsed, and may have its own time period associated with it. The total time period determined from the moment at which the viewable element is displayed to a next activation associated with the second message part can be considered a first time period. - At the end of the first time period, a next user perceptible activation is triggered, associated with a next part of the message. This could be a visualisation or other kind of effect. The delimiter determines a separation between the first part and the next part of the message. Delimiters are explained later, but their core function is to determine a break in the sent message so as to separate the first part from the next part. A message might comprise more than one delimiter, and have multiple message parts. One or more of the message parts might cause an event to be triggered on the receiving side. In some embodiments, a delimiter might itself trigger an event. An event is something which is perceptible to the user, and which is triggered by a triggering element.
An event may be a visualisation such as an animation, image or change in expression of an avatar; a haptic event such as physical vibration or change in state of a device; or an audio event such as a song or voice recording. An event may have a time period associated with it and be displayed only for that time period or may be displayed continuously such that it is shown simultaneously with the next part of the message. For example, it could be an animated sequence shown repeatedly in a loop. Delimiters may be displayable elements which can be viewed at the receive side, or may be hidden from view on the receive side.
FIG. 3 shows a particular example in which the delimiter 103 is visible and causes an event to be triggered. FIG. 3 shows the delimiter 103, which has the visual appearance of a four leaf clover, in the message display area, and an event 104 triggered by the delimiter 103 in a different display area 111. In this case, the event is an image which corresponds to the visual appearance of the delimiter, i.e. a four leaf clover. The event is displayed in its own area of the display, separately from the viewable element of the message. The position of an event may be preconfigured or its placement may be defined by event data. - The sending user, referred to as the first user, is represented on the display 101 as a 3D avatar 105, and the receiving, or second, user as a 3D avatar 106. The avatar heads may be moving or stationary. The sender's head may be shown against a background sent in the message, discussed later. - The delimiter 103 is displayed for a further time period. FIG. 4 shows the display 101 after the further time period, displaying a viewable element, ‘awesome’, of the second part 201 of the message 12, so that the full sent message is now displayed in the message display area 109. -
FIG. 5 shows a timeline of events for displaying the message at the receiving device 14. At time t0 the receiving device 14 has a complete message 12 ready for presentation to a user. In accordance with embodiments of the invention, the complete message is not all rendered immediately. The full message 12 comprises a first part 102, a delimiter 103 and a second part 201. The receiving device 14 processes the message as described more fully later to determine the timeline for presenting the message. The viewable component (Part 1) of the first part 102 of the message is displayed by itself on the user display 101 from time t1 to time t2, so in this case for 3 s, and the event 104 associated with delimiter 103 is displayed at time t2, until time t3, that is for 5 s. The event is referred to as ‘Animation 1’ although in the case of FIG. 3 it is a static image. As discussed, there are many different types of event. The second part 201 of the message is activated at time t3, 10 s after the user received the full message 12, while the viewable component remains on the screen. If a background or any stickers have been sent as part of the message by user 105, these may be displayed prior to the viewable component of the first part 102 of the message, so before time t1, or at the same time as the viewable component at time t1. The background remains on the screen for the entire duration of the complete first message 12. - This is one example of a timeline; in reality there is a vast number of possible configurations. The timeline is controlled by the message as described later. In a conversation, the second user may reply with a message of his own. For example, the second user creates a full message 13 (FIG. 1) which is received at the generating device 10 of the first user at time t4. The message contains first and second parts and two delimiters 103. This is broken into a first part 311 with a viewable component (Part 1′), which appears on the first user's display 99 at time t5 for 5 s, followed by an associated event 312 (Animation 2) at time t6, and a viewable component (Part 2′) of the second part 313, which is displayed 3 s later at time t7 for 5 s before being followed by its associated event 314 (Animation 3) at time t8. Note that in some cases the delimiters themselves are not shown, nor do they trigger events. Events are instead triggered by the first and/or second parts of the message respectively. An event may be shown simultaneously with the part of the message that triggers it, or a short time after it, as controlled by the message. The conversation may continue in this manner. - Composition will now be described. The first user composes a message 12, which contains first and second parts and one or more delimiters 103, to be sent to the second user. FIG. 6 shows a composition screen 203 on the first user's device 10. When a user seeks to compose a message, he goes into the message composition screen 203. In this case user 106 is sending the message. In this screen, he can see a 3D avatar image 106. In this case, it is an avatar of himself, but it may be a ‘generic’ avatar. Note that this image may have been created by using an image of his head and “decorating” it with add-on parts. The 3D avatar can change its orientation or facial expression and these can constitute part of an event or background. While avatars are illustrated and described, it will be appreciated that they are not an essential feature. It is not necessary for any visual indication of the sender or receiver of a message to be displayed. - As shown in
FIG. 6, a touch screen keyboard 250, which constitutes an input component, can be used by the user to enter text to form a message. Note that any suitable input component could be utilised, such as a separate keyboard, mouse or voice activation. - The message is formed in parts.
FIG. 6 shows a first part of the message as entered in text by the user: -
- “Hi, I'm locked out ”
- In this case, the delimiter 103a is in the form of an emoji with an unhappy expression. This emoji will trigger an event at the receiving device. The event which will be triggered is associated with the emoji 103a, and could be recalled from a library or created by the user as described in more detail later. In this case, the event triggered by the emoji 103a is a change in facial expression of the avatar to unhappy.
FIG. 7 shows the next stage of the message composition. In FIG. 7, cartoon hands 402 (stickers) have been added to the image. The user creates a background 401 which can show certain scenery, images, etc., and a moving image. In this case the hands can be animated to move as part of the background 401. The background data may be included in the first message part data sent to the receiving device. The user enters a further part of the message: -
- “But I'm ”,
- including another delimiter 103b, this time in the form of a smiling emoji. The smiling emoji has been associated with a different event to be triggered, in this case a change in facial expression of the avatar to smiling. The first and second parts of the message 12, and the two delimiters 103a and 103b, are displayed. Note that the background will be displayed at the receive device when this message is displayed. - The composed message is sent to the receiving device, and the user 105 receives the message (in time separated parts) and replies with a message of his own:
-
- “OK too bad ”
- “I'll be right there ”
- In the above embodiment, there may be a small delay between the presentation of each word in each message part. Timing control is described in more detail later, but the length of delay may be due to the length of the presented word. This time period may be extended by causing the space between words in the message part to act as a delimiter associated with a predetermined time delay period. The next word is then presented after the total delay, which is a combination of that associated with length of the preceding word and the space between the two words. A ‘space’ delimiter could be instigated by activation of a space bar key on an input device.
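The word-by-word delay rule described above can be sketched in code. This is a minimal illustration only; the constants and function names are assumptions, not values taken from this description.

```python
# Sketch of the word-by-word delay rule: the delay before the next word
# combines a length-based delay for the preceding word with the
# predetermined delay of the 'space' delimiter between the two words.
# PER_CHAR_DELAY_MS and SPACE_DELIMITER_MS are illustrative assumptions.

PER_CHAR_DELAY_MS = 100     # assumed delay contribution per character of a word
SPACE_DELIMITER_MS = 500    # assumed predetermined period for a 'space' delimiter

def word_delay_ms(word: str) -> int:
    """Delay attributable to the length of the presented word."""
    return len(word) * PER_CHAR_DELAY_MS

def next_word_delay_ms(preceding_word: str) -> int:
    """Total delay before the next word is presented: the delay for the
    length of the preceding word plus the space delimiter's period."""
    return word_delay_ms(preceding_word) + SPACE_DELIMITER_MS
```

On this sketch, a short preceding word such as ‘hi’ yields a shorter total delay than a longer word, with the space delimiter extending both by the same fixed amount.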
-
FIG. 8 shows the user display 99 at time t8 after receipt of the message from the second user 105. The 3D avatars are shown together with the background 401 chosen by the user and the hand stickers 402. This message 13 contains two delimiters; a first one, 103c (unhappy emoji), after the first part 311 of the message, and a second one, 103d (smiling emoji), at the end of the message. The delimiter 103c is associated with an event which is not shown in FIG. 8. The event would be the avatar head of user 105 exhibiting an unhappy expression. The delimiter 103d is associated with the illustrated event 104a, the avatar 105 smiling. - The second user may choose to iterate a scene sent by the first user to create the next scene in the timeline. He may choose to keep the background selected by the first user, or choose another background from the available options. The stickers sent by the first user may be removed, moved, rotated, resized, or moved in front of or behind other objects. Additional stickers may also be added. The avatars of both users may also be moved, rotated, or resized as desired. Alternatively, the second user may choose to start with a blank scene, so the background and all the stickers sent by the first user are automatically removed, with only the two avatars shown.
-
FIG. 9 shows user display 101 with a first scene 501 received from the first user 107. The sending user has selected the background 401a and stickers. The second user 108 may then choose to generate a responding scene which is sent to the first user 107. FIG. 10 shows the responding scene 502 on user device 99 sent by the second user 108. The second user has changed the background 401b, kept one sticker 402a, deleted one sticker 402b, and added two stickers. - A series of iterated scenes can be created in this way. One user can send multiple consecutive scenes if desired. The users can select and replay any scenes which have been sent. These scenes can be viewed either in full-screen mode (FIG. 9, FIG. 10) or on a timeline view screen (FIG. 11). -
FIG. 11 shows the timeline view screen 601 on user display 99. A fourth scene 503 is being replayed on the screen, having been chosen by the user from a timeline 602 using an arrow 603. Alternatively, the user may scroll through the timeline 602 to choose a scene to replay, with the arrow 603 indicating the chosen scene. It can be seen that in scene 503 the 3D avatars have been adjusted; avatar 108 has been rotated. Thumbnails of the scenes are shown in the timeline 602. A green arrow 606 next to the thumbnail indicates a received scene, and a yellow arrow 607 indicates a sent scene. Reply option buttons are also visible in this view. An iterate scene button 604 is shown, which the user presses in order to have the last sent scene displayed for him to iterate. A blank scene button 605 is also shown, which takes the user to a blank scene showing only the 3D avatars, with previous backgrounds 401 and stickers 402 removed. - Delimiters can be any entry available to a user at the input component, such as the touchscreen keypad 205, including single letters, words, emojis, paragraph keys, punctuation marks etc.
- Some delimiters act to separate the parts of a message from each other so that a message can be displayed as a sequence of time separated parts. Some delimiters additionally have a specific event 104 associated with them in the message 12. Multiple delimiters can be placed in each message, and can be any combination of viewable and/or non-viewable elements. For example, a message part may include a viewable element in the form of an emoji, and an additional non-viewable delimiter which triggers an event. Message parts may include time control elements which are not visible. For example, a paragraph key would act to extend the time period for which a message part is displayed beyond that associated with the visualisation of the message part. - Message parts may themselves comprise viewable elements or non-viewable elements which also can trigger events. For example, certain words or characters in a message part may trigger certain events. That is, a triggering component for an event may be any part of the message, including the delimiter. Triggering components may have identifiers associated with the same event in all devices, by installation of a common library. For example, a delimiter from the common library may be a smiling emoji, which causes the 3D avatar 105 of the sender to smile. As another example, a phrase ‘happy new year’ in a message part may cause fireworks to appear as an animation event on the display. Alternatively, triggering components may be personalised to individuals by associating them with an event specific to a user identifier, or to sessions by associating them with an event specific to a session, or by a combination of these two methods, for example in a group chat. A triggering component specific to an individual chat could be the word ‘dog’ which then displays a picture of the receiving user's dog on the display 101. Triggering components may also trigger an uninterrupted sequence of animations or other activations (e.g. haptic or audio). An events library holding such associations is described later with reference to FIG. 12. - Events to be triggered can also be defined by the user when he composes a message. In that case, when he enters the triggering component into the message, he creates an event (e.g. an animation) which forms part of the message data to be transmitted with the message. When the triggering component is detected on the receive side (that is, when it is the appropriate time to act on the triggering component), the event which was composed on the generate side is presented on the receive side. This is an alternative to recalling an event from a library on the receive side. Note that a sender may send a triggering component in a message for accessing an event in the library, or send an update to the event library along with a triggering component.
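The resolution of triggering components to events described above can be sketched as a lookup with layered precedence: per-session and per-user associations override a common library installed on all devices. All class, method and table names here are illustrative assumptions, not part of the described system.

```python
# Sketch of an events library: a common table shared by all devices, with
# per-user and per-session personalised overrides taking precedence.
# All names and table entries are illustrative assumptions.

COMMON_LIBRARY = {
    "smiling_emoji": "avatar_smile",
    "happy new year": "fireworks_animation",
}

class EventsLibrary:
    def __init__(self, common=None):
        self.common = dict(common or COMMON_LIBRARY)
        self.per_user = {}      # (user_id, trigger) -> event
        self.per_session = {}   # (session_id, trigger) -> event

    def personalise_user(self, user_id, trigger, event):
        self.per_user[(user_id, trigger)] = event

    def personalise_session(self, session_id, trigger, event):
        self.per_session[(session_id, trigger)] = event

    def lookup(self, trigger, user_id=None, session_id=None):
        # Personalised associations take precedence over the common library.
        if (session_id, trigger) in self.per_session:
            return self.per_session[(session_id, trigger)]
        if (user_id, trigger) in self.per_user:
            return self.per_user[(user_id, trigger)]
        return self.common.get(trigger)
```

For example, the word ‘dog’ could be personalised to one chat session so that it resolves to a picture of the receiving user's dog, while in other sessions it resolves to nothing and only common-library triggers fire.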
- The timed sequence of presentation of the message (displayed parts and/or events) is determined when the message is generated.
- The timeline itself can be determined when the message is generated at the generating device, or the receiving
device 14. That is, the receiving device 14 may determine the timeline for presenting the message as a sequence of parts, either based on its own parsing of the message or based on timing control data inserted in the message on the generating side. - In the first case, when a message is received at the receiving device, it is parsed until the delimiter is detected. The first viewable element in the first part of the message is displayed for a time period which can be governed by the length of text in the first viewable element, or by some default setting on the receiving device. For example, the time period for display could be based on the number of characters in the first part of the message. For instance, a shorter word like “hi” may have a short time period, for example 2 seconds, whereas a longer phrase “I like you” may be associated with a longer time period, perhaps 4 seconds. The time period could be directly related to the number of characters in the part of the message to be displayed, or could be pre-configured and associated with particular words or phrases to be displayed. Such timing data could be held at a
local library 92 in the device. The delimiter itself may also be associated with a particular time period whether or not the delimiter is displayed. After expiry of the time period, any event associated with the delimiter may be displayed for a duration of a further time period which could be curtailed or continuous. After the whole first time period has lapsed, an activation associated with the second message part may be presented with the first viewable element (and the event in some cases) so that the entire message is now presented. The activation could be a second viewable element in the received message part with or without a triggered event, or an event triggered by a non-viewable element in the second message part. In one embodiment, the first viewable element, the delimiter and the second message part define the timeline at the receiving device, because the receiving device has some embedded information in a parsing component for the message which controls how it is displayed. Note that the event could be shown simultaneously with a message part, or triggered by the delimiter. - It will be appreciated that the delimiter itself could be viewable and displayed with the first viewable element, in between the first and second message parts or with the activation of the second message part. It could be displayed at the time at which the event is triggered. Alternatively, the delimiter may itself not be visible (for example it could be a paragraph key which was inserted into the message), but nevertheless it triggers an event.
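The first timing scheme above, in which the receiving device derives each display period from the length of the text with pre-configured overrides, can be sketched as follows. The override table stands in for the local library 92; the per-character rate is an illustrative assumption.

```python
# Sketch of receive-side timing: the display period for a viewable
# element is either taken from a pre-configured table (standing in for
# the local library 92) or derived from its character count.
# SECONDS_PER_CHARACTER and the table entries are illustrative assumptions.

PRECONFIGURED_PERIODS = {"hi": 2.0, "I like you": 4.0}   # seconds, as in the text
SECONDS_PER_CHARACTER = 0.25                             # assumed default rate

def display_period(viewable_text: str) -> float:
    """Time period for which a message part's viewable element is shown."""
    if viewable_text in PRECONFIGURED_PERIODS:
        return PRECONFIGURED_PERIODS[viewable_text]
    return len(viewable_text) * SECONDS_PER_CHARACTER
```

Under this sketch, parts without a pre-configured entry scale directly with their number of characters, matching the ‘shorter word, shorter period’ behaviour described above.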
- Note that the event which is triggered by the triggering component can be accessed from a
local events library. - Another way that the timeline can be managed at the receiving device is based on timing control data inserted into the message by the generating device. This timing control data can define the amount of time for which the first part of the message is displayed, how long any triggered event is displayed for, and how long the second activation (the whole message) lasts. This timing control data can be inserted as specific time periods entered by a user when he creates a message. A
time management component 90 of the message app on the receive side can read the timing control data associated with each part of the message or with the delimiter and control the display period accordingly. - Another way that the timeline can be managed at the receiving device is something of a hybrid between the first and second ways described above. Time management information may be pre-configured on the receiving device so that the receiving device can manage the timeline as described in accordance with the first way. However, the generating device could generate a timing update for a particular user or a particular session which would override the pre-configured settings on the receiving device for controlling the display of that particular message.
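The second and hybrid schemes above can be sketched together: the time management component prefers timing control data carried in the message, and otherwise falls back to the receiving device's pre-configured rule. The field name and constant are illustrative assumptions about the message encoding, not a defined format.

```python
# Sketch of the sender-controlled / hybrid timing schemes: sender-supplied
# timing control data ('display_for', an assumed field name) overrides the
# receiving device's pre-configured per-character rule for that part.

DEFAULT_SECONDS_PER_CHARACTER = 0.25   # assumed pre-configured setting

def part_display_period(part: dict) -> float:
    """Display period for one message part, preferring embedded timing data."""
    if "display_for" in part:
        # Timing control data inserted on the generating side wins.
        return part["display_for"]
    return len(part.get("text", "")) * DEFAULT_SECONDS_PER_CHARACTER
```

This mirrors the hybrid behaviour: a timing update or embedded period overrides the pre-configured settings only for the message (or session) that carries it.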
- Reference will now be made to
FIG. 12, which shows a schematic block diagram of the architecture of a computer device suitable to act as a generating device and a receiving device. The device 10 comprises the display 99, an input component 97 and a network interface 95. Where the input component is a touchscreen keypad, it enables a user to generate a message by interacting with the touchscreen keypad 205 to select characters, emojis, stickers, et cetera. Other input components may be utilised as described earlier. Such input components are known and therefore will not be described further. The network interface 95 enables the device to communicate and transmit and receive messages. Once again, such interfaces are known in the art and will not be described further. - The device also comprises a
processor 93 on which is installed a messaging app as has been described above, in the form of executable computer code. The messaging app 91 can communicate with the input component for the purpose of formulating a message, and with the display 99 for displaying a received message. The app 91 has access to an events library 89 for the purpose of accessing events to be associated with trigger components which are inserted into the message. These events could be accessed automatically when a delimiter is inserted into a message from the events library. The events library could incorporate a common library 87 as described earlier and/or a personalised library. Note that the events library can operate when a triggering component is included in a message on the generating side, or when an event is accessed to be displayed on the display when a message is received. Note in this case there is not necessarily a requirement to ‘personalise’ events on both the generate and receive side. For example, the word “dog” introduced into a message on the generating side could cause a picture of the receiving user's dog to be animated or displayed at the receiving user's side. Alternatively, the word “dog” could be associated with the sending user's dog by virtue of an associated session ID. - The app includes a
timing control component 90 which operates as described above. The timing control component 90 can cooperate with the timing control library 92, which can hold pre-configured settings indicating a time period for which certain characters, character combinations or words should be displayed. Such preconfigured timing control data can be used at the generating side to formulate the message, or on the receiving side to display the message, as already described. The timing control data 92 may be updated by an update message which could be session- or person-specific. - One of the events which has been described herein is a modification to the facial expression of the avatar representing the users. Avatars may be generated in any known way, and their facial expressions may be modified as known in the art. According to embodiments of the present invention, the modification is triggered in a different way, that is, by a triggering component in the message data of a message. Nevertheless, once triggered, modifications to the expressions of the avatars may be handled in a manner that is known in the art and will therefore not be described further herein.
- One example embodiment is shown in the flow chart of
FIG. 13. The flow chart of FIG. 13 illustrates steps taken at a receiving device which has received a message ready for display. The message may have been received from the server 3 by a pull or push mechanism and be prepared for presentation to the user immediately after receipt. Alternatively, the message may be received and buffered at the receiving device until such time as the user wishes to view it. Step S1 denotes the start of a process to present a message to a user, either automatically or through user selection. - At step S2, the message data is parsed until a delimiter is detected. Once the delimiter is detected, the process proceeds to step S3 where a delay time period associated with the parsed message data is determined for controlling activation of the message. At step S4, any viewable component in the message data parsed thus far is displayed. If the parsed message data is the first message part, it will contain a viewable component for display. If it is a subsequent message part it may or may not contain any viewable component.
- At step S5, the process determines whether the parsed message data contains a trigger. If it does, the triggered event is generated in step S6. The process then returns to S2 to parse the next section of the message data. If the message data does not contain a trigger, no further action is taken with this message data, and the process proceeds directly from S5 to S2 to parse the next section of the message data. Note that the delay time period determined at S3 is used to control the activation of the next message part. The next message part is the next section of parsed message data up to the next delimiter that is detected.
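The loop of steps S2 to S6 can be sketched as follows. The message encoding (a list of sections already split at their delimiters) and the callback signatures are assumptions for illustration; actual display, event and timing behaviour would be supplied by the components described above.

```python
# Sketch of the FIG. 13 loop: for each section of message data up to the
# next delimiter (S2), determine its delay period (S3), display any
# viewable component (S4), and generate any triggered event (S5/S6).
# The loop ends when no message data remains (S7/S8).

def present_message(sections, display, trigger_event, delay_for):
    delays = []
    for section in sections:               # S2: parse to the next delimiter
        delay = delay_for(section)         # S3: delay controlling the next activation
        delays.append(delay)
        if section.get("viewable"):        # S4: show any viewable component
            display(section["viewable"])
        if section.get("trigger"):         # S5/S6: generate the triggered event
            trigger_event(section["trigger"])
    return delays                          # S7/S8: end condition reached
```

A scheduler on the receiving device would then use the returned delay periods to time the activation of each subsequent message part.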
- Step S7 determines whether an end condition to end the process has been met. For example, an end condition might be that there is no more data to parse at step S2, or that an ‘end of message’ indicator has been detected. If so, the process ends at S8.
-
FIG. 14 shows another example embodiment flow chart. It will be appreciated that the order of steps may be altered. For example, steps S4 and S6 could happen together, or S5 and S6 could occur before S4. In this case, a triggered event may be detected before the delay time period is determined. Any delay associated with the triggered event forms part of the delay time period. Step S10 denotes the start of a process to present a message to the user. - At step S11, the message data is parsed until a delimiter is detected. Once the delimiter is detected, step S12 determines if the parsed message data contains a trigger. If the parsed message data does not contain a trigger, the process proceeds to step S13 where the time delay period associated with the parsed message data is determined. Any viewable component of the parsed message data is displayed in step S14. If the parsed message data is the first message part, it will contain a viewable component for display. If it is a subsequent message part it may or may not contain any viewable component. Then the process returns to step S11 and continues to parse the next section of the message data until the next delimiter is detected.
- If, at step S12, the system determines that the parsed message data does contain a trigger, the process proceeds to step S15, where the event associated with the trigger is determined. The time delay period associated with the parsed message data including any delay associated with the triggered event is determined at step S16. At step S17, any viewable component in the message data parsed thus far is displayed. If the parsed message data is the first message part, it will contain a viewable component for display. If it is a subsequent message part it may or may not contain any viewable component. The triggered event is generated in step S18. The process then returns to step S11 to parse the next section of the message data. Note that the delay time periods determined at S13 and S16 are used to control the activation of the next message part. The next message part is the next section of parsed message data up to the next delimiter that is detected. S19 determines if an end condition is met and if so the process ends at S20. It will be appreciated that the order of the steps may be altered. For example, S18 could occur before or simultaneously with S17.
- The first message part (or subsequent message parts) may include the detected delimiter, such that the delimiter, if it is a viewable element, is displayed at the same time as the viewable elements in the preceding message data. Alternatively, the message parts may exclude the detected delimiter, such that the delimiter, if a viewable element, is displayed at a time after the time delay period of the parsed message data.
- In another embodiment, all of the message data may be parsed on receipt of the message at the receiving device before any components are displayed. The viewable elements of the message data are determined, along with the time delays associated with each of them. Any events which may be triggered by components of the message data and their associated time delays are also determined. The system creates a timeline of viewable elements and events to be displayed or activated on the user device, with the times at which each component occurs being the sum of the time delays associated with the preceding components of the message data. Once the timeline has been devised, it is played out on the user device.
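The parse-ahead embodiment above amounts to a running sum: each component's start time is the sum of the time delays of the preceding components. The `(name, delay)` encoding below is an illustrative assumption about how the parsed components might be represented.

```python
# Sketch of the parse-ahead embodiment: all viewable elements and events
# are determined first, then each component's start time is the running
# sum of the delays of the preceding components, giving a timeline that
# can simply be played out.

def build_timeline(components):
    """components: [(name, delay_after_seconds), ...] in message order.
    Returns [(start_time_seconds, name), ...]."""
    timeline, t = [], 0.0
    for name, delay_after in components:
        timeline.append((t, name))
        t += delay_after
    return timeline
```

For instance, a first part shown for 3 s followed by an animation shown for 5 s places the next activation 8 s into the playout.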
- It should be noted that the order of the steps in the processes shown in
FIGS. 13 and 14 is only an example, and these may be performed in an alternative order or in parallel to one another. Steps S1 and S2, and S10 and S11, must begin the process, but the order of subsequent steps may be altered. For example, steps S4 and S6 may be performed in parallel, or step S18 may be performed before S17. The exception to this is that the steps which determine whether a trigger has been parsed, S5 and S12, must precede any steps which require a trigger to be present in the parsed message data, S6, S15 and S18.
Claims (31)
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/901,346 US10374994B1 (en) | 2018-02-21 | 2018-02-21 | Messaging system |
US16/511,361 US10834039B2 (en) | 2018-02-21 | 2019-07-15 | Messaging system |
US17/092,856 US11575630B2 (en) | 2018-02-21 | 2020-11-09 | Messaging system |
US18/105,296 US20230179553A1 (en) | 2018-02-21 | 2023-02-03 | Messaging system |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/511,361 Continuation US10834039B2 (en) | 2018-02-21 | 2019-07-15 | Messaging system |
Publications (2)
Publication Number | Publication Date |
---|---|
US10374994B1 US10374994B1 (en) | 2019-08-06 |
US20190260701A1 true US20190260701A1 (en) | 2019-08-22 |
Family
ID=67477601
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/901,346 Active US10374994B1 (en) | 2018-02-21 | 2018-02-21 | Messaging system |
US16/511,361 Active US10834039B2 (en) | 2018-02-21 | 2019-07-15 | Messaging system |
US17/092,856 Active US11575630B2 (en) | 2018-02-21 | 2020-11-09 | Messaging system |
US18/105,296 Pending US20230179553A1 (en) | 2018-02-21 | 2023-02-03 | Messaging system |
Family Applications After (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/511,361 Active US10834039B2 (en) | 2018-02-21 | 2019-07-15 | Messaging system |
US17/092,856 Active US11575630B2 (en) | 2018-02-21 | 2020-11-09 | Messaging system |
US18/105,296 Pending US20230179553A1 (en) | 2018-02-21 | 2023-02-03 | Messaging system |
Country Status (1)
Country | Link |
---|---|
US (4) | US10374994B1 (en) |
CN112423143B (en) * | 2020-09-30 | 2024-02-20 | Tencent Technology (Shenzhen) Company Limited | Live broadcast message interaction method, device and storage medium |
US11593548B2 (en) | 2021-04-20 | 2023-02-28 | Snap Inc. | Client device processing received emoji-first messages |
US11888797B2 (en) * | 2021-04-20 | 2024-01-30 | Snap Inc. | Emoji-first messaging |
US11531406B2 (en) | 2021-04-20 | 2022-12-20 | Snap Inc. | Personalized emoji dictionary |
KR102567051B1 (en) * | 2021-04-21 | 2023-08-14 | Kakao Corp. | Operating method of terminal and terminal |
US11539876B2 (en) | 2021-04-30 | 2022-12-27 | Apple Inc. | User interfaces for altering visual media |
US11778339B2 (en) | 2021-04-30 | 2023-10-03 | Apple Inc. | User interfaces for altering visual media |
US11776190B2 (en) | 2021-06-04 | 2023-10-03 | Apple Inc. | Techniques for managing an avatar on a lock screen |
Family Cites Families (43)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5966652A (en) * | 1996-08-29 | 1999-10-12 | Qualcomm Incorporated | System and method for the insertion and extraction of telephone numbers from a wireless text message |
US6353923B1 (en) * | 1997-03-12 | 2002-03-05 | Microsoft Corporation | Active debugging environment for debugging mixed-language scripting code |
US6035206A (en) * | 1997-10-31 | 2000-03-07 | Motorola, Inc. | Method and apparatus for transmitting multiple communication messages on a communication resource |
JP2003022066A (en) * | 2001-07-10 | 2003-01-24 | Sanyo Electric Co Ltd | Communication terminal device and method for displaying space character |
US7428695B2 (en) * | 2001-10-22 | 2008-09-23 | Hewlett-Packard Development Company, L.P. | System for automatic generation of arbitrarily indexed hyperlinked text |
US20030093806A1 (en) * | 2001-11-14 | 2003-05-15 | Vincent Dureau | Remote re-creation of data in a television system |
US7738637B2 (en) * | 2004-07-24 | 2010-06-15 | Massachusetts Institute Of Technology | Interactive voice message retrieval |
US20060056350A1 (en) * | 2004-09-16 | 2006-03-16 | Love Robert T | Method and apparatus for uplink communication in a cellular communication system |
US20060200753A1 (en) * | 2005-03-07 | 2006-09-07 | Rishi Bhatia | System and method for providing data manipulation as a web service |
US8355701B2 (en) * | 2005-11-30 | 2013-01-15 | Research In Motion Limited | Display of secure messages on a mobile communication device |
US7756536B2 (en) * | 2007-01-31 | 2010-07-13 | Sony Ericsson Mobile Communications Ab | Device and method for providing and displaying animated SMS messages |
US7990292B2 (en) * | 2008-03-11 | 2011-08-02 | Vasco Data Security, Inc. | Method for transmission of a digital message from a display to a handheld receiver |
US8532637B2 (en) * | 2008-07-02 | 2013-09-10 | T-Mobile Usa, Inc. | System and method for interactive messaging |
US8385333B2 (en) * | 2009-06-30 | 2013-02-26 | Intel Corporation | Mechanism for clock synchronization |
US20110214088A1 (en) * | 2010-02-26 | 2011-09-01 | Research In Motion Limited | Automatic scrolling of electronic messages |
US8554851B2 (en) * | 2010-09-24 | 2013-10-08 | Intel Corporation | Apparatus, system, and methods for facilitating one-way ordering of messages |
US20120105455A1 (en) * | 2010-10-27 | 2012-05-03 | Google Inc. | Utilizing document structure for animated pagination |
US20120162350A1 (en) * | 2010-12-17 | 2012-06-28 | Voxer Ip Llc | Audiocons |
US8787567B2 (en) * | 2011-02-22 | 2014-07-22 | Raytheon Company | System and method for decrypting files |
KR102020335B1 (en) * | 2012-08-27 | 2019-09-10 | 삼성전자 주식회사 | Operation Method For Message Function And Device supporting the same |
US10079786B2 (en) * | 2012-09-03 | 2018-09-18 | Qualcomm Incorporated | Methods and apparatus for enhancing device messaging |
US20140254466A1 (en) * | 2013-02-21 | 2014-09-11 | Qualcomm Incorporated | Interleaving Advertising Packets For Improved Detectability And Security |
EP2974226B1 (en) * | 2013-03-13 | 2017-10-04 | Unify GmbH & Co. KG | Method, device, and system for communicating a changeability attribute |
US9509763B2 (en) * | 2013-05-24 | 2016-11-29 | Qualcomm Incorporated | Delayed actions for a decentralized system of learning devices |
CN104619036B (en) * | 2013-11-01 | 2018-08-14 | 阿尔卡特朗讯 | Method and apparatus for improving random access procedure in wireless network |
PL3123790T3 (en) * | 2014-03-24 | 2018-10-31 | Ericsson Telefon Ab L M | System and method for activating and deactivating multiple secondary cells |
EP2945107A1 (en) * | 2014-05-15 | 2015-11-18 | Nokia Technologies OY | Display of a notification that identifies a keyword |
US9853926B2 (en) * | 2014-06-19 | 2017-12-26 | Kevin Alan Tussy | Methods and systems for exchanging private messages |
CN107209749A (en) * | 2014-11-25 | 2017-09-26 | Loud-Hailer Inc. | Local and time-limited method for broadcasting over a peer-to-peer network and system therefor |
US10187855B2 (en) * | 2014-11-28 | 2019-01-22 | Huawei Technologies Co., Ltd. | Message processing method and apparatus |
US9971666B2 (en) * | 2015-03-06 | 2018-05-15 | Qualcomm Incorporated | Technique of link state detection and wakeup in power state oblivious interface |
US10270903B2 (en) * | 2015-08-21 | 2019-04-23 | Avaya Inc. | Failover announcements |
AU2016316125A1 (en) * | 2015-09-03 | 2018-03-15 | Synthro Inc. | Systems and techniques for aggregation, display, and sharing of data |
EP3371956B1 (en) * | 2015-11-06 | 2024-01-10 | Telefonaktiebolaget LM Ericsson (PUBL) | Geomessaging server, geoinformation server and corresponding methods |
CN105656639B (en) * | 2016-01-08 | 2021-05-14 | 北京小米移动软件有限公司 | Group message display method and device |
CN109314846B (en) * | 2016-05-04 | 2021-08-31 | 捷德移动安全有限责任公司 | Subscriber self-activation device, program, and method |
US11112963B2 (en) * | 2016-05-18 | 2021-09-07 | Apple Inc. | Devices, methods, and graphical user interfaces for messaging |
GB2547290B (en) * | 2016-07-07 | 2019-10-30 | Drayson Tech Europe Ltd | Communications accessory for an electronic device and system comprising an accessory |
US10819663B2 (en) * | 2016-08-18 | 2020-10-27 | Board Of Regents, The University Of Texas System | Interactive mobile service for deploying automated protocols |
GB2554638B (en) * | 2016-09-28 | 2019-12-04 | Advanced Risc Mach Ltd | Error detection in communication networks |
US11113317B2 (en) * | 2016-09-29 | 2021-09-07 | Micro Focus Llc | Generating parsing rules for log messages |
EP3406052B1 (en) * | 2016-12-27 | 2020-02-12 | Chicago Mercantile Exchange, Inc. | Message processing protocol which mitigates manipulative messaging behavior |
US11228549B2 (en) * | 2017-04-14 | 2022-01-18 | International Business Machines Corporation | Mobile device sending format translation based on message receiver's environment |
- 2018-02-21 US US15/901,346 patent/US10374994B1/en active Active
- 2019-07-15 US US16/511,361 patent/US10834039B2/en active Active
- 2020-11-09 US US17/092,856 patent/US11575630B2/en active Active
- 2023-02-03 US US18/105,296 patent/US20230179553A1/en active Pending
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20230262014A1 (en) * | 2022-02-14 | 2023-08-17 | International Business Machines Corporation | Dynamic display of images based on textual content |
US11902231B2 (en) * | 2022-02-14 | 2024-02-13 | International Business Machines Corporation | Dynamic display of images based on textual content |
Also Published As
Publication number | Publication date |
---|---|
US10834039B2 (en) | 2020-11-10 |
US20190342244A1 (en) | 2019-11-07 |
US11575630B2 (en) | 2023-02-07 |
US10374994B1 (en) | 2019-08-06 |
US20230179553A1 (en) | 2023-06-08 |
US20210058351A1 (en) | 2021-02-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11575630B2 (en) | Messaging system | |
US9094571B2 (en) | Video chatting method and system | |
EP1689155B1 (en) | Method and system to process video effects | |
US20100017483A1 (en) | Multi-topic instant messaging chat session | |
CN111050222B (en) | Virtual article issuing method, device and storage medium | |
CN108055593A | Interactive message processing method, apparatus, storage medium and electronic device | |
CA2385619A1 (en) | Messaging application user interface | |
CN111385632B (en) | Multimedia interaction method, device, equipment and medium | |
US20090157223A1 (en) | Robot chatting system and method | |
KR20130049416A (en) | Method for providing instant messaging service using dynamic emoticon and mobile phone therefor | |
US10685642B2 (en) | Information processing method | |
CN106105172A | Highlighting unviewed video messages | |
US9705842B2 (en) | Integrating communication modes in persistent conversations | |
CN113157366A (en) | Animation playing method and device, electronic equipment and storage medium | |
EP3172713A1 (en) | A chat system | |
CN110704647A (en) | Content processing method and device | |
CN114025180A (en) | Game operation synchronization system, method, device, equipment and storage medium | |
US20150281157A1 (en) | Delivering an Action | |
CN109947506B (en) | Interface switching method and device and electronic equipment | |
CN114025181A (en) | Information display method and device, electronic equipment and storage medium | |
CN113536147B (en) | Group interaction method, device, equipment and storage medium | |
CN108989191B (en) | Method for recalling a picture file, control method and device thereof, and mobile terminal | |
CN110224924B (en) | State updating method and device, storage medium and electronic device | |
KR100481588B1 (en) | A method for manufacturing and displaying a real-type 2D video information program including video, audio, caption and message information | |
CN114040213A (en) | Task processing method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
AS | Assignment |
Owner name: KING.COM LTD., MALTA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:VIKLUND, DAVID;LOURIAGLI, DRISS;LUNDWALL, PONTUS;REEL/FRAME:045950/0369 Effective date: 20180222 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |