US20150143242A1 - Mobile communication terminal and method thereof - Google Patents
- Publication number
- US20150143242A1 (U.S. application Ser. No. 14/562,450)
- Authority
- US
- United States
- Prior art keywords
- audio
- user interface
- feature
- audio data
- data
- Prior art date
- Legal status
- Abandoned
Classifications
- H04M1/72544—
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04817—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/72427—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for supporting games or graphical animations
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72469—User interfaces specially adapted for cordless or mobile telephones for operating the device by selecting functions from two or more displayed items, e.g. menus or icons
- H04M1/72583—
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/72442—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for playing music files
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Definitions
- FIG. 1 is a flow chart of an embodiment of a method for modifying a user interface component in accordance with audio data.
- FIG. 2 schematically illustrates a module according to the disclosed embodiments.
- FIG. 3 schematically illustrates an apparatus according to the disclosed embodiments.
- FIG. 4 illustrates an example of a user interface with user interface components being modified in accordance with audio data.
- FIG. 1 is a flow chart illustrating a method according to the disclosed embodiments, describing the general steps of modifying a user interface component in accordance with audio data.
- in a first step, 100, the audio data is received.
- the audio data may be a current part of a stored audio file being played by a music player or, alternatively, a current part of an audio stream received by an audio data receiver.
- in a second step, 102, an audio feature is extracted from the received audio data.
- such an audio feature may be a frequency spectrum of the audio data.
- in a third step, 104, one or several user interface components are modified in accordance with the extracted audio feature.
- the third step, 104, may be subdivided into a first substep, 106, in which the extracted audio feature is classified into a predetermined feature representation. Thereafter, in a second substep, 108, the user interface component is modified in accordance with the predetermined feature representation.
- in this way, a number of user interface component appearance state images may be used, which implies that less computational power is needed in order to modify the user interface components in accordance with the audio data.
- the user interface components can be 3-D rendered objects. Additionally, audio visualization effects can be superposed upon the 3-D rendered objects. Then, when receiving audio data and extracting an audio feature, the audio visualization effects are changed, which means that the appearance of the user interface components varies in accordance with the audio data.
- 2-D objects may also be used as user interface components.
- audio visualization effects, which vary in accordance with the audio data, may then be superposed upon the 2-D objects.
- the size of one or several user interface components may be modified in accordance with the extracted audio features.
- the user interface components may, for instance, be configured to change size in accordance with the amount of bass frequencies in the audio data. In this way, during a drum solo the size of the user interface component will be large, and during a guitar solo the size will be small.
- the colour, the orientation, the shape, the animation speed or other animation-specific attributes, such as the zooming level in a fractal animation, of the user interface components may change in accordance with the audio data.
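As a minimal Python sketch of how such attribute modification might look (the function name, value ranges, and the assumption that the bass-energy feature is normalized to 0.0–1.0 are illustrative, not taken from the patent):

```python
def modify_attributes(bass_energy: float) -> dict:
    """Map a bass-energy feature, assumed normalized to 0.0-1.0, onto
    several appearance attributes of a user interface component."""
    bass_energy = max(0.0, min(1.0, bass_energy))  # clamp out-of-range input
    return {
        "size": 0.5 + bass_energy,                   # large during drum solos
        "color_intensity": int(255 * bass_energy),   # brighter with more bass
        "animation_speed": 1.0 + 3.0 * bass_energy,  # e.g. fractal zoom rate
    }

attrs = modify_attributes(0.5)
```

A renderer would then apply these attribute values when drawing the component on the next display update.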
- when environment mapping is utilised, existing solutions for music visualization may be reused. This is an advantage, since no new algorithms must be developed.
- Another advantage of using so-called environment mapping is that a dynamically changing environment map emphasizes the shape of a 3-D object, making UI components easier to recognize.
- different user interface components may be associated with different frequencies. For instance, when playing a rock song comprising several different frequencies, a first user interface component, such as a “messages” icon, may change in accordance with high frequencies, i.e. treble frequencies, and a second user interface component, such as a “contacts” icon, may change in accordance with low frequencies, i.e. bass frequencies.
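A sketch of such a per-component frequency assignment, assuming each component has been given one band and each band's energy has already been extracted (the dictionary and function names are hypothetical):

```python
# Each UI component reacts only to its assigned frequency band.
BAND_ASSIGNMENT = {
    "messages": "treble",  # changes with high frequencies
    "contacts": "bass",    # changes with low frequencies
}

def update_components(band_energies: dict) -> dict:
    """Compute a scale factor per component from its assigned band's energy;
    a missing band leaves the component at its default scale of 1.0."""
    return {name: 1.0 + band_energies.get(band, 0.0)
            for name, band in BAND_ASSIGNMENT.items()}
```

During a bass-heavy passage the "contacts" icon would grow while the "messages" icon stays near its default size, and vice versa during a treble-heavy passage.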
- the procedure of receiving audio data, 100, extracting an audio feature, 102, and modifying a UI component in accordance with the extracted audio feature, 104, may be repeated continuously as long as audio data is received.
- the procedure may, for instance, be repeated once every time the display is updated.
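The repeated receive/extract/modify procedure, synchronized with the display refresh, could be sketched as follows (the callback structure, refresh rate, and frame bound are assumptions for illustration):

```python
import time

def run_visualization_loop(get_audio_frame, extract, modify,
                           refresh_hz=60, frames=3):
    """Repeat: receive audio data (100), extract an audio feature (102),
    modify a UI component (104) -- once per display update."""
    period = 1.0 / refresh_hz
    results = []
    for _ in range(frames):          # bounded here; would run while audio plays
        samples = get_audio_frame()     # step 100: receive audio data
        feature = extract(samples)      # step 102: extract audio feature
        results.append(modify(feature)) # step 104: modify UI component
        time.sleep(period)              # wait until the next display update
    return results
```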
- FIG. 2 schematically illustrates a module 200 .
- the module 200 may be a software implemented module or a hardware implemented module, such as an ASIC, or a combination thereof, such as an FPGA circuit.
- Audio data can be input to an audio feature extractor 202 . Thereafter, one or several audio features can be extracted from the audio data, and then the extracted features can be transmitted to a user interface (UI) modifier 204 .
- UI modification data can be generated in the UI modifier 204 based upon the extracted audio feature(s). After having generated UI modification data, this data can be output from the module 200 .
- the UI modification data may be data representing the extracted audio feature(s).
- a graphics engine (not shown) is configured to receive the UI modification data, and based upon this UI modification data and original graphics data, the graphics engine is configured to determine graphics data comprising audio visualization effects.
- the UI modification data may be complete graphics data containing audio visualization effects.
- the graphics engine may be contained within said module 200 .
- the module may further comprise an audio feature classifier 206 .
- the function of the audio feature classifier 206 can be to find characteristic features of the audio signal.
- a characteristic feature may be the amount of audio data corresponding to a certain frequency, such as a bass frequency or a treble frequency.
- a number of characteristic features may be determined in the audio feature classifier 206 .
- a memory 208 comprising a number of predetermined feature representations may be present as well.
- a predetermined feature representation may, for instance, be the amount of audio data corresponding to a sound between 20 Hz and 100 Hz.
- the number of predetermined feature representations, i.e. the resolution of the classification, may be user configurable, as well as the limits of each of the predetermined feature representations.
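Classification into such frequency-band representations could be sketched as below, using a naive DFT for clarity (a real implementation would use an FFT; the band limits and function names are illustrative assumptions):

```python
import cmath
import math

def band_energy(samples, sample_rate, bands):
    """Sum spectral energy into predetermined frequency-band representations.
    `bands` maps a label to an (f_lo, f_hi) range in Hz,
    e.g. {"bass": (20, 100)}."""
    n = len(samples)
    energies = {label: 0.0 for label in bands}
    for k in range(n // 2):                  # DFT bins up to Nyquist
        freq = k * sample_rate / n
        coeff = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        for label, (lo, hi) in bands.items():
            if lo <= freq < hi:
                energies[label] += abs(coeff) ** 2
    return energies
```

A pure 50 Hz tone, for instance, would land entirely in a 20–100 Hz "bass" representation.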
- the module 200 may comprise an audio detector 209 configured to receive an audio activation signal.
- the audio activation signal may be transmitted from the music player when the playing of a song is started, or, alternatively, when the radio is switched on.
- upon detection, an activation signal is transmitted to the audio feature extractor 202, the UI modifier 204 or the audio feature classifier 206.
- the module 200 may further comprise a memory 210 containing UI modification themes.
- a UI modification theme may comprise information of how the extracted audio feature(s) is to be presented in the UI. For instance, the extracted audio feature(s) may be presented as a histogram superposed on a 3-D rendered UI component, or the extracted audio feature(s) may be presented as a number of circles superposed on a 3-D rendered UI component.
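A possible shape for such a theme registry (the theme functions and the overlay descriptions they return are hypothetical, sketched only to show how a selected theme decides the presentation of extracted features):

```python
def histogram_theme(spectrum):
    """Present the extracted features as histogram bars."""
    return {"type": "histogram", "bars": [round(v, 2) for v in spectrum]}

def circles_theme(spectrum):
    """Present the extracted features as circles, radius ~ magnitude."""
    return {"type": "circles", "radii": [round(10 * v, 2) for v in spectrum]}

UI_THEMES = {"histogram": histogram_theme, "circles": circles_theme}

def apply_theme(theme_name, spectrum):
    """Render extracted audio features according to the selected theme;
    the result would be superposed on a 3-D rendered UI component."""
    return UI_THEMES[theme_name](spectrum)
```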
- FIG. 3 schematically illustrates an apparatus 300, such as a mobile communication terminal, comprising the module 200, a music player 302, a graphics engine 304, a display 306, optionally a keypad 308 and optionally an audio output 310, such as a loudspeaker or a headphone output.
- audio data and, optionally, an audio activation signal are transmitted from the music player 302 to the module 200 .
- audio data may also be transmitted to the audio output 310 .
- the module 200 is configured to generate UI modification data from extracted audio features of the audio data as is described above.
- the UI modification data generated by the module 200 can be transmitted to the graphics engine 304 .
- the graphics engine 304 can, in turn, be configured to generate graphics data presenting the extracted features of the audio data by using the UI modification data.
- this data may be transmitted to the display 306 , where it is shown to the user of the apparatus 300 .
- if the graphics engine 304 is comprised within the module 200, graphics data is transmitted directly from the module 200 to the display 306.
- FIG. 4 illustrates an example of a user interface 400 with user interface components being modified in accordance to audio data.
- a first user interface component may be illustrated as a “music” icon comprising a 3-D cuboid 402 .
- Audio visualization effects in the form of a frequency diagram 404 can be superposed on the sides of the 3-D cuboid 402 .
- an identifying text “MUSIC” 406 may be available in connection to the 3-D cuboid 402 .
- a second user interface component illustrates a “messages” icon comprising a 3-D cylinder 408 .
- Audio visualization effects in the form of a number of rings 410 a, 410 b, 410 c may be superposed on the top of the 3-D cylinder 408.
- an identifying text “MESSAGES” 412 may be available in connection to the 3-D cylinder 408 .
- a third user interface component illustrates a “contacts” icon comprising a 3-D cylinder 414 .
- Audio visualization effects in the form of a 2-D frequency representation 416 may be superposed on the top of the 3-D cylinder 414 .
- an identifying text “CONTACTS” 418 may be available in connection to the 3-D cylinder 414 .
- a fourth user interface component illustrates an “Internet” icon comprising a 3-D cuboid 420 .
- Audio visualization effects in the form of a number of stripes 422 a, 422 b, 422 c may be superposed on the sides of the 3-D cuboid 420.
- an identifying text “Internet” 424 may be available in connection to the 3-D cuboid 420 .
Abstract
A method for providing a user interface of a communication apparatus comprises switching from a low power mode to a working mode upon receiving a stream of audio data; and upon switching from the low power mode to the working mode: extracting at least one audio feature from said stream of audio data, and modifying the appearance of at least one user interface component configured for invoking a function of the communication apparatus, in accordance with said extracted audio feature.
Description
- This application is a continuation of U.S. application Ser. No. 11/548,443, filed on 11 Oct. 2006, which is incorporated herein by reference in its entirety.
- The disclosed embodiments generally relate to a method for providing a user interface modified in accordance with audio data, as well as a module and an apparatus thereof.
- Many mobile communication terminals of today include a music player, most often a so-called MP3 player and/or a radio receiver. A great advantage of including a music player is that, instead of two separate units, only one single unit is needed for users who want both a mobile communication terminal and a music player.
- By including a music player in a mobile communication terminal, some of the hardware of the mobile communication terminal may be used by the music player as well. For instance, the display may be used by the music player in order to show the title of the song being played, the keyboard may be used in order to control the music player, etc.
- Although a number of hardware synergies may be achieved by running a music player on the same platform as a mobile communication terminal, there is a need to more closely connect the music player to the mobile communication terminal in order to increase customer satisfaction.
- In view of the above, the disclosed embodiments aim to solve or at least reduce the problems discussed above. More particularly, an advantage of the disclosed embodiments is to provide a user interface which is modified in accordance with audio data.
- Generally, a method for providing a user interface modified in accordance with extracted audio features, and an associated module and apparatus according to the attached independent claims, are provided.
- In a first aspect, the disclosed embodiments are directed to a method for providing a user interface of an apparatus, said user interface comprising a number of user interface components, said method comprising
- receiving audio data, extracting at least one audio feature from said audio data, and modifying the appearance of at least one of said number of user interface components in accordance with said extracted audio feature.
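The three steps of the claimed method could be sketched as follows in Python (the class and function names are illustrative, and the mean-amplitude feature merely stands in for any extracted audio feature):

```python
from dataclasses import dataclass

@dataclass
class UIComponent:
    name: str
    scale: float = 1.0          # appearance attribute to be modified

def extract_audio_feature(samples):
    """A stand-in feature: mean absolute amplitude of the audio frame."""
    return sum(abs(s) for s in samples) / len(samples)

def modify_appearance(component, feature, gain=2.0):
    """Modify the component's appearance in accordance with the feature."""
    component.scale = 1.0 + gain * feature
    return component

def process_frame(samples, component):
    feature = extract_audio_feature(samples)      # extraction
    return modify_appearance(component, feature)  # modification

# reception would feed successive frames into process_frame:
icon = process_frame([0.1, -0.2, 0.3, -0.4], UIComponent("music"))
```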
- An advantage of this is that the user interface of the apparatus is made more alive, which will increase the user satisfaction.
- Another advantage is that the user interface may be used at the same time as music visualization effects are shown. This implies that the apparatus may be utilised as usual, even while music visualization effects are being shown on the display.
- Still another advantage is that the user interface of the apparatus will vary in accordance with the audio data generated by the music player. This implies that the music player and the other functions of the apparatus are perceived by the user as one apparatus, not as an apparatus which can, for instance, be transformed from a communication apparatus into a music player apparatus.
- In the method according to the first aspect, the reception, the extraction and the modification may be repeated.
- Further, in the method according to the first aspect, the user interface components may be 3-D rendered graphical objects.
- An advantage of having 3-D rendered graphical objects is that a more sophisticated user interface may be utilised.
- In the method according to the first aspect, the 3-D rendered graphical objects may be hardware accelerated.
- An advantage of this is that the responsiveness of the 3-D graphical objects of the user interface may be increased, which means that the user interface is quicker.
- In the method according to the first aspect, the audio visualization effects may be superposed upon the 3-D rendered graphical objects.
- In the method according to the first aspect, the modification may comprise classifying said extracted audio feature into one of a plurality of predetermined feature representations, and modifying the appearance of at least one of said number of user interface components in accordance with said one predetermined feature representation.
- By having a number of predetermined feature representations determined in advance, the extracted audio feature may be classified into one of these predetermined representations. This implies that the classification may be made quicker and less computational power is needed. This is an advantage.
- In the method according to the first aspect, the modification of said user interface components may be made in accordance with one of a set of user interface (UI) modification themes.
- A UI modification theme may comprise information of how the extracted audio feature(s) is to be presented in the UI. For instance, the extracted audio feature(s) may be presented as a histogram superposed on a 3-D rendered UI component, or the extracted audio feature(s) may be presented as a number of circles superposed on a 3-D rendered UI component.
- An advantage of this is that the way in which the modification of the user interface components is made may easily be chosen by the user of the apparatus.
- In the method according to the first aspect, the set of UI modification themes may be user configurable.
- In the method according to the first aspect, at least a number of said UI components may be modified, wherein each of said number of UI components may be modified in accordance with its respectively assigned audio feature.
- An advantage of this is that different user interface components may be modified differently. For example, a first user interface component may be modified in accordance with bass frequencies, and a second user interface component may be modified in accordance with treble frequencies.
- In a second aspect, the disclosed embodiments are directed to a module comprising an audio feature extractor configured to receive a stream of audio data and to extract at least one feature of said stream of audio data, and a user interface modifier configured to determine user interface modification data based upon said extracted feature.
- An advantage of this second aspect is that one or several of the user interface components may be modified in accordance with the audio data.
- The module according to the second aspect may further comprise an audio detector configured to detect an audio activation signal and to activate said audio feature extractor or said user interface modifier upon detection.
- An advantage of this is that the audio feature extractor and the user interface modifier may be in a low power mode until audio data is being generated. When, for example, audio data is being generated, an audio activation signal may be transmitted to the audio feature extractor or the UI modifier, and the power mode of the module may then be switched to a high power mode, or, in other words, working mode. Hence, the power efficiency of the module may be increased by having an audio detector present.
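The detector's activation behaviour might look like this (the class shape and callback are assumptions; only the low-power/working-mode switch comes from the text):

```python
from enum import Enum

class PowerMode(Enum):
    LOW = "low"
    WORKING = "working"

class AudioDetector:
    """Stays in low power mode until an audio activation signal arrives,
    then wakes the audio feature extractor / UI modifier once."""
    def __init__(self, on_activate):
        self.mode = PowerMode.LOW
        self._on_activate = on_activate

    def signal(self):
        """Handle an audio activation signal from the music player."""
        if self.mode is PowerMode.LOW:
            self.mode = PowerMode.WORKING
            self._on_activate()  # switch extractor/modifier to working mode
```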
- The module according to the second aspect may further comprise a memory arranged to hold user interface modification settings.
- An advantage of having a memory arranged to hold user interface settings is that no memory capacity of the apparatus is used for holding the user interface settings. This implies that fewer changes of the apparatus, in which the module is comprised, are needed.
- The module according to the second aspect may further comprise an audio feature classifier configured to classify said at least one feature into one of a set of predetermined feature representations.
- An advantage of this is that the audio feature classifier can be a hardware module or a software module specialized in this kind of classification, which implies that less time and computational power are needed.
- The module according to the second aspect may further comprise a memory arranged to hold predetermined feature representations.
- In a third aspect, the disclosed embodiments are directed to an apparatus comprising a display configured to visualize a user interface comprising a number of user interface components, a music player configured to generate audio data, a module configured to determine user interface modification data, and a graphics engine configured to modify said user interface component in accordance with said determined user interface modification data.
- An advantage of this third aspect is that one or several of the user interface components may be modified in accordance with the audio data.
- In the apparatus according to the third aspect, an audio activation signal may be transmitted from said music player to said module.
- An advantage of this is that the module may be in a low power mode until audio data is being generated. When audio data is being generated and the audio activation signal is transmitted to the module, the power mode of the module may then be switched to a high power mode, or, in other words, working mode. Hence, the power efficiency of the apparatus may be increased by sending the audio activation signal to the module.
- In the apparatus according to the third aspect, the apparatus may be a mobile communication terminal.
- In the apparatus according to the third aspect, the user interface components may be 3-D rendered objects.
- An advantage of having 3-D rendered graphical objects is that a more sophisticated user interface may be utilised.
- In the apparatus according to the third aspect, the audio visualization effects may be superposed onto said user interface components.
- In a fourth aspect, the disclosed embodiments are directed to a computer-readable medium having computer-executable components comprising instructions for receiving audio data, extracting at least one audio feature from said audio data, and modifying the appearance of at least one of a number of user interface components in accordance with said extracted audio feature.
- In the computer-readable medium according to the fourth aspect, the reception, the extraction and the modification may be repeated.
- In the computer-readable medium according to the fourth aspect, the user interface components may be 3-D rendered graphical objects.
- In the computer-readable medium according to the fourth aspect, the modification may comprise classifying said extracted audio feature into a predetermined feature representation, and modifying the appearance of at least one of said number of user interface components in accordance with said predetermined feature representation.
- Other features and advantages of the disclosed embodiments will appear from the following detailed disclosure, from the attached dependent claims as well as from the drawings.
- Generally, all terms used in the claims are to be interpreted according to their ordinary meaning in the technical field, unless explicitly defined otherwise herein. All references to “a/an/the [element, device, component, means, step, etc.]” are to be interpreted openly as referring to at least one instance of said element, device, component, means, step, etc., unless explicitly stated otherwise. The steps of any method disclosed herein do not have to be performed in the exact order disclosed, unless explicitly stated.
- The above, as well as additional objects, features and advantages of the disclosed embodiments, will be better understood through the following illustrative and non-limiting detailed description of the disclosed embodiments, with reference to the appended drawings, where the same reference numerals will be used for similar elements, wherein:
- FIG. 1 is a flow chart of an embodiment of a method for modifying a user interface component in accordance with audio data.
- FIG. 2 schematically illustrates a module according to the disclosed embodiments.
- FIG. 3 schematically illustrates an apparatus according to the disclosed embodiments.
- FIG. 4 illustrates an example of a user interface with user interface components being modified in accordance with audio data.
- FIG. 1 is a flow chart illustrating a method according to the disclosed embodiments, describing the general steps of modifying a user interface component in accordance with audio data.
- In a first step, 100, audio data is received. The audio data may be a current part of a stored audio file being played by a music player, or, alternatively, a current part of an audio stream received by an audio data receiver.
- Next, in a second step, 102, an audio feature is extracted from the received audio data. Such an audio feature may be a frequency spectrum of the audio data.
- Finally, in a third step, 104, one or several user interface components are modified in accordance with the extracted audio feature.
- The third step, 104, may be subdivided into a first substep, 106, in which the extracted audio feature is classified into a predetermined feature representation. Thereafter, in a second substep, 108, the user interface component is modified in accordance with the predetermined feature representation.
- By using predetermined feature representations, a number of user interface component appearance state images may be used. This implies that less computational power is needed in order to modify the user interface components in accordance to the audio data.
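The steps 100-108 above can be illustrated with a short Python sketch. This is purely illustrative and not part of the disclosure; the naive DFT, the band limits and the function names are assumptions chosen for clarity:

```python
import math

def extract_feature(frame, sample_rate):
    """Step 102: extract a magnitude spectrum from one frame of audio
    samples via a naive DFT (illustrative, not optimized)."""
    n = len(frame)
    spectrum = []
    for k in range(n // 2):
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        spectrum.append((k * sample_rate / n, math.hypot(re, im)))
    return spectrum

def classify_feature(spectrum, bands=((20, 100), (100, 2000), (2000, 20000))):
    """Step 106: classify the extracted feature into predetermined
    feature representations -- here, total magnitude per frequency band."""
    energies = [0.0] * len(bands)
    for freq, mag in spectrum:
        for i, (lo, hi) in enumerate(bands):
            if lo <= freq < hi:
                energies[i] += mag
    return energies

def modify_component(component, energies):
    """Step 108: select an appearance state from the representation."""
    return dict(component, appearance_state=energies.index(max(energies)))

# A pure 50 Hz tone should place its energy in the 20-100 Hz band.
sample_rate = 1000
frame = [math.sin(2 * math.pi * 50 * t / sample_rate) for t in range(200)]
energies = classify_feature(extract_feature(frame, sample_rate))
icon = modify_component({"name": "music"}, energies)
```

Because the classifier maps each frame onto a small, fixed set of representations, the UI only needs one pre-rendered appearance state image per representation.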
- The user interface components can be 3-D rendered objects. Additionally, audio visualization effects can be superposed upon the 3-D rendered objects. Then, when receiving audio data and extracting an audio feature, the audio visualization effects are changed, which means that the appearance of the user interface components varies in accordance with the audio data.
- Alternatively, 2-D objects may be used as user interface components. As in the case of 3-D rendered objects, audio visualization effects, which vary in accordance with the audio data, may be superposed upon the 2-D objects.
- Alternatively, instead of having superposed audio visualization effects, the size of one or several user interface components may be modified in accordance with the extracted audio features. For instance, the user interface components may be configured to change size in accordance with the amount of bass frequencies in the audio data. In this way, during a drum solo the size of the user interface component will be large, and during a guitar solo the size will be small. Other options are that the colour, the orientation, the shape, the animation speed or other animation-specific attributes, such as the zooming level in a fractal animation, of the user interface components change in accordance with the audio data.
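As a purely illustrative sketch of the size modification described above (the scaling limits and the function name are invented for the example, not part of the disclosure):

```python
def scaled_icon_size(base_size, bass_energy, total_energy,
                     min_scale=0.5, max_scale=2.0):
    """Map the share of bass energy in the current audio frame to an
    icon size: a bass-heavy frame (e.g. a drum solo) yields a large
    icon, a treble-heavy frame (e.g. a guitar solo) a small one."""
    if total_energy <= 0:
        return base_size  # silence: keep the default size
    bass_ratio = bass_energy / total_energy  # 0.0 .. 1.0
    return round(base_size * (min_scale + (max_scale - min_scale) * bass_ratio))

drum_solo_size = scaled_icon_size(48, bass_energy=9.0, total_energy=10.0)
guitar_solo_size = scaled_icon_size(48, bass_energy=1.0, total_energy=10.0)
```

The same mapping could drive colour, orientation, shape or animation speed instead of size.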
- If so-called environment mapping is utilised, existing solutions for music visualization may be reused. This is an advantage since no new algorithms need to be developed. Another advantage of using environment mapping is that a dynamically changing environment map emphasizes the shape of a 3-D object, making UI components easier to recognize.
- Optionally, different user interface components may be associated with different frequencies. For instance, when playing a rock song comprising several different frequencies, a first user interface component, such as a "messages" icon, may change in accordance with high frequencies, i.e. treble frequencies, and a second user interface component, such as a "contacts" icon, may change in accordance with low frequencies, i.e. bass frequencies.
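This per-component association can be sketched as a simple lookup. The band limits, threshold and component names below are illustrative assumptions:

```python
# Hypothetical assignment: each icon reacts only to its own frequency band.
COMPONENT_BANDS = {
    "messages": (2000.0, 20000.0),  # treble frequencies
    "contacts": (20.0, 250.0),      # bass frequencies
}

def components_to_update(band_energies, threshold=1.0):
    """Given per-band energies as {(lo, hi): energy}, return the names
    of the UI components whose band is active on this frame."""
    return sorted(name for name, band in COMPONENT_BANDS.items()
                  if band_energies.get(band, 0.0) >= threshold)

# A frame with strong bass but weak treble triggers only "contacts".
frame_energies = {(20.0, 250.0): 5.0, (2000.0, 20000.0): 0.2}
active = components_to_update(frame_energies)
```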
- The procedure of receiving audio data, 100, extracting an audio feature, 102, and modifying a UI component in accordance with the extracted audio feature, 104, may be repeated continuously as long as audio data is received. The procedure may, for instance, be repeated every time the display is updated.
- FIG. 2 schematically illustrates a module 200. The module 200 may be a software implemented module or a hardware implemented module, such as an ASIC, or a combination thereof, such as an FPGA circuit.
- Audio data can be input to an audio feature extractor 202. Thereafter, one or several audio features can be extracted from the audio data, and then the extracted features can be transmitted to a user interface (UI) modifier 204. UI modification data can be generated in the UI modifier 204 based upon the extracted audio feature(s). After having generated UI modification data, this data can be output from the module 200.
- The UI modification data may be data representing the extracted audio feature(s). Then, a graphics engine (not shown) is configured to receive the UI modification data, and based upon this UI modification data and original graphics data, the graphics engine is configured to determine graphics data comprising audio visualization effects.
- Alternatively, the UI modification data may be complete graphics data containing audio visualization effects. In other words, the graphics engine may be contained within said module 200.
- Optionally, the module may further comprise an audio feature classifier 206. The function of the audio feature classifier 206 can be to find characteristic features of the audio signal. Such a characteristic feature may be the amount of audio data corresponding to a certain frequency, such as a bass frequency or a treble frequency. Alternatively, if different UI components correspond to different characteristic features, a number of characteristic features may be determined in the audio feature classifier 206.
- If an audio feature classifier 206 is present, a memory 208 comprising a number of predetermined feature representations may be present as well. A predetermined feature representation may, for instance, be the amount of audio data corresponding to a sound between 20 Hz and 100 Hz. The number of predetermined feature representations, i.e. the resolution of the classification, may be user configurable, as well as the limits of each of the predetermined feature representations.
- Optionally, the module 200 may comprise an audio detector 209 configured to receive an audio activation signal. The audio activation signal may be transmitted from the music player when the playing of a song is started, or, alternatively, when the radio is switched on. When the audio activation signal is received, it is forwarded to the audio feature extractor 202, the UI modifier 204 or the audio feature classifier 206.
- Optionally, the module 200 may further comprise a memory 210 containing UI modification themes. A UI modification theme may comprise information on how the extracted audio feature(s) are to be presented in the UI. For instance, the extracted audio feature(s) may be presented as a histogram superposed on a 3-D rendered UI component, or as a number of circles superposed on a 3-D rendered UI component.
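The cooperation of the audio detector 209, the audio feature classifier 206 and the UI modifier 204 might be sketched as follows. The class and method names are assumptions made for illustration, not the actual implementation:

```python
class AudioVisualizationModule:
    """Sketch of module 200: an audio activation signal switches the
    module from low power mode to working mode, after which each frame
    of audio data is turned into UI modification data."""

    def __init__(self, bands=((20, 100), (100, 2000))):
        self.bands = bands    # configurable predetermined representations
        self.active = False   # low power mode until activation (detector 209)

    def on_activation_signal(self):
        self.active = True    # working mode: audio data is being generated

    def process(self, band_energies):
        """Classify the frame (206) and emit UI modification data (204)."""
        if not self.active:
            return None       # low power mode: nothing to visualize
        loudest = max(range(len(band_energies)),
                      key=band_energies.__getitem__)
        return {"dominant_band": self.bands[loudest],
                "energies": list(band_energies)}

module = AudioVisualizationModule()
before = module.process([3.0, 1.0])   # ignored: still in low power mode
module.on_activation_signal()         # e.g. the music player starts a song
after = module.process([3.0, 1.0])
```

The early `return None` is what makes the power saving possible: until the activation signal arrives, no extraction or classification work is performed.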
FIG. 3 schematically illustrates an apparatus 300, such as a mobile communication terminal, comprising the module 200, a music player 302, a graphics engine 304, a display 306, optionally a keypad 308 and optionally an audio output 310, such as a loudspeaker or a headphone output.
- When a song is started in the music player 302, for instance after key input actuation data has been received from the keypad 308, audio data and, optionally, an audio activation signal are transmitted from the music player 302 to the module 200. Optionally, the audio data may also be transmitted to the audio output 310.
- The module 200 is configured to generate UI modification data from extracted audio features of the audio data, as described above. The UI modification data generated by the module 200 can be transmitted to the graphics engine 304. The graphics engine 304 can, in turn, be configured to generate graphics data presenting the extracted features of the audio data by using the UI modification data.
- After the graphics data has been determined, it may be transmitted to the display 306, where it is shown to the user of the apparatus 300. Alternatively, if the graphics engine 304 is comprised within the module 200, the graphics data is transmitted directly from the module 200 to the display 306.
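One display-update tick of this data flow, from music player to display, can be mimicked with plain callables. All names here are illustrative stand-ins for the disclosed components, not their actual interfaces:

```python
def play_frame(module_process, graphics_engine, display, audio_frame):
    """One tick of apparatus 300: audio data goes to the module, the
    resulting UI modification data to the graphics engine, and the
    generated graphics data to the display."""
    ui_mod_data = module_process(audio_frame)
    graphics_data = graphics_engine(ui_mod_data)
    display.append(graphics_data)
    return graphics_data

shown = []
result = play_frame(
    module_process=lambda frame: {"peak": max(frame)},
    graphics_engine=lambda mod: "render(peak=%.1f)" % mod["peak"],
    display=shown,
    audio_frame=[0.1, 0.8, 0.3],
)
```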
FIG. 4 illustrates an example of a user interface 400 with user interface components being modified in accordance with audio data.
- A first user interface component may be illustrated as a "music" icon comprising a 3-D cuboid 402. Audio visualization effects in the form of a frequency diagram 404 can be superposed on the sides of the 3-D cuboid 402. Moreover, an identifying text "MUSIC" 406 may be available in connection to the 3-D cuboid 402.
- A second user interface component illustrates a "messages" icon comprising a 3-D cylinder 408. Audio visualization effects in the form of a number of rings may be superposed on the 3-D cylinder 408. Moreover, an identifying text "MESSAGES" 412 may be available in connection to the 3-D cylinder 408.
- A third user interface component illustrates a "contacts" icon comprising a 3-D cylinder 414. Audio visualization effects in the form of a 2-D frequency representation 416 may be superposed on the top of the 3-D cylinder 414. Moreover, an identifying text "CONTACTS" 418 may be available in connection to the 3-D cylinder 414.
- A fourth user interface component illustrates an "Internet" icon comprising a 3-D cuboid 420. Audio visualization effects in the form of a number of stripes may be superposed on the 3-D cuboid 420. Moreover, an identifying text "Internet" 424 may be available in connection to the 3-D cuboid 420.
- The disclosed embodiments have mainly been described above with reference to a few embodiments. However, as is readily appreciated by a person skilled in the art, other embodiments than the ones disclosed above are equally possible within the scope of the disclosed embodiments, as defined by the appended patent claims.
Claims (1)
1. A method for providing a user interface of a communication apparatus, said method comprising
switching from a low power mode to a working mode upon receiving a stream of audio data; and upon switching from the low power mode to the working mode:
extracting at least one audio feature from said stream of audio data, and
modifying the appearance of at least one user interface component configured for invoking a function of the communication apparatus, in accordance with said extracted audio feature.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/562,450 US20150143242A1 (en) | 2006-10-11 | 2014-12-05 | Mobile communication terminal and method thereof |
US15/711,139 US20180013877A1 (en) | 2006-10-11 | 2017-09-21 | Mobile communication terminal and method thereof |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/548,443 US8930002B2 (en) | 2006-10-11 | 2006-10-11 | Mobile communication terminal and method therefor |
US14/562,450 US20150143242A1 (en) | 2006-10-11 | 2014-12-05 | Mobile communication terminal and method thereof |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/548,443 Continuation US8930002B2 (en) | 2006-10-11 | 2006-10-11 | Mobile communication terminal and method therefor |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/711,139 Continuation US20180013877A1 (en) | 2006-10-11 | 2017-09-21 | Mobile communication terminal and method thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
US20150143242A1 true US20150143242A1 (en) | 2015-05-21 |
Family
ID=39321401
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/548,443 Active 2031-05-12 US8930002B2 (en) | 2006-10-11 | 2006-10-11 | Mobile communication terminal and method therefor |
US14/562,450 Abandoned US20150143242A1 (en) | 2006-10-11 | 2014-12-05 | Mobile communication terminal and method thereof |
US15/711,139 Abandoned US20180013877A1 (en) | 2006-10-11 | 2017-09-21 | Mobile communication terminal and method thereof |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/548,443 Active 2031-05-12 US8930002B2 (en) | 2006-10-11 | 2006-10-11 | Mobile communication terminal and method therefor |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/711,139 Abandoned US20180013877A1 (en) | 2006-10-11 | 2017-09-21 | Mobile communication terminal and method thereof |
Country Status (1)
Country | Link |
---|---|
US (3) | US8930002B2 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
USD760277S1 (en) * | 2013-01-09 | 2016-06-28 | Samsung Electronics Co., Ltd. | Display screen or portion thereof with icon |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090062943A1 (en) * | 2007-08-27 | 2009-03-05 | Sony Computer Entertainment Inc. | Methods and apparatus for automatically controlling the sound level based on the content |
US8933960B2 (en) * | 2009-08-14 | 2015-01-13 | Apple Inc. | Image alteration techniques |
KR20110110434A (en) * | 2010-04-01 | 2011-10-07 | 삼성전자주식회사 | Low power audio play device and mehod |
US9466127B2 (en) | 2010-09-30 | 2016-10-11 | Apple Inc. | Image alteration techniques |
WO2014093713A1 (en) * | 2012-12-12 | 2014-06-19 | Smule, Inc. | Audiovisual capture and sharing framework with coordinated, user-selectable audio and video effects filters |
US9459768B2 (en) | 2012-12-12 | 2016-10-04 | Smule, Inc. | Audiovisual capture and sharing framework with coordinated user-selectable audio and video effects filters |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20010055398A1 (en) * | 2000-03-17 | 2001-12-27 | Francois Pachet | Real time audio spatialisation system with high level control |
US20020005108A1 (en) * | 1998-05-15 | 2002-01-17 | Ludwig Lester Frank | Tactile, visual, and array controllers for real-time control of music signal processing, mixing, video, and lighting |
US6397186B1 (en) * | 1999-12-22 | 2002-05-28 | Ambush Interactive, Inc. | Hands-free, voice-operated remote control transmitter |
US20040240686A1 (en) * | 1992-04-27 | 2004-12-02 | Gibson David A. | Method and apparatus for using visual images to mix sound |
US6850252B1 (en) * | 1999-10-05 | 2005-02-01 | Steven M. Hoffberg | Intelligent electronic appliance system and method |
US6972363B2 (en) * | 2002-01-04 | 2005-12-06 | Medialab Solutions Llc | Systems and methods for creating, modifying, interacting with and playing musical compositions |
US20060072768A1 (en) * | 1999-06-24 | 2006-04-06 | Schwartz Stephen R | Complementary-pair equalizer |
US20060133628A1 (en) * | 2004-12-01 | 2006-06-22 | Creative Technology Ltd. | System and method for forming and rendering 3D MIDI messages |
US7158844B1 (en) * | 1999-10-22 | 2007-01-02 | Paul Cancilla | Configurable surround sound system |
US20080026690A1 (en) * | 2006-07-31 | 2008-01-31 | Foxenland Eral D | Method and system for adapting a visual user interface of a mobile radio terminal in coordination with music |
US7373120B2 (en) * | 2002-03-13 | 2008-05-13 | Nokia Corporation | Mobile communication terminal |
US7548791B1 (en) * | 2006-05-18 | 2009-06-16 | Adobe Systems Incorporated | Graphically displaying audio pan or phase information |
US7742609B2 (en) * | 2002-04-08 | 2010-06-22 | Gibson Guitar Corp. | Live performance audio mixing system with simplified user interface |
US7774706B2 (en) * | 2006-03-21 | 2010-08-10 | Sony Corporation | System and method for mixing media content |
US7869892B2 (en) * | 2005-08-19 | 2011-01-11 | Audiofile Engineering | Audio file editing system and method |
Family Cites Families (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3720005A (en) * | 1971-02-12 | 1973-03-13 | Minnesota Mining & Mfg | Audio-visual system |
JP3248981B2 (en) * | 1992-06-02 | 2002-01-21 | 松下電器産業株式会社 | calculator |
US5881101A (en) * | 1994-09-01 | 1999-03-09 | Harris Corporation | Burst serial tone waveform signaling method and device for squelch/wake-up control of an HF transceiver |
US6294799B1 (en) * | 1995-11-27 | 2001-09-25 | Semiconductor Energy Laboratory Co., Ltd. | Semiconductor device and method of fabricating same |
US6154549A (en) * | 1996-06-18 | 2000-11-28 | Extreme Audio Reality, Inc. | Method and apparatus for providing sound in a spatial environment |
US5940078A (en) * | 1997-06-17 | 1999-08-17 | Sun Microsystems, Inc. | Method and apparatus for changing the appearance of icon images on a computer display monitor |
US6263075B1 (en) * | 1997-09-11 | 2001-07-17 | Agere Systems Guardian Corp. | Interrupt mechanism using TDM serial interface |
US6140565A (en) * | 1998-06-08 | 2000-10-31 | Yamaha Corporation | Method of visualizing music system by combination of scenery picture and player icons |
JP2000253111A (en) * | 1999-03-01 | 2000-09-14 | Toshiba Corp | Radio portable terminal |
JP3587088B2 (en) * | 1999-06-15 | 2004-11-10 | ヤマハ株式会社 | Audio system, control method thereof, and recording medium |
US20010056434A1 (en) * | 2000-04-27 | 2001-12-27 | Smartdisk Corporation | Systems, methods and computer program products for managing multimedia content |
US20020029259A1 (en) * | 2000-07-26 | 2002-03-07 | Nec Corporation | Remote operation system and remote operation method thereof |
US6400647B1 (en) * | 2000-12-04 | 2002-06-04 | The United States Of America As Represented By The Secretary Of The Navy | Remote detection system |
GB0127778D0 (en) * | 2001-11-20 | 2002-01-09 | Hewlett Packard Co | Audio user interface with dynamic audio labels |
US7072908B2 (en) * | 2001-03-26 | 2006-07-04 | Microsoft Corporation | Methods and systems for synchronizing visualizations with audio streams |
KR100542129B1 (en) * | 2002-10-28 | 2006-01-11 | 한국전자통신연구원 | Object-based three dimensional audio system and control method |
US7610553B1 (en) * | 2003-04-05 | 2009-10-27 | Apple Inc. | Method and apparatus for reducing data events that represent a user's interaction with a control interface |
JP4127172B2 (en) * | 2003-09-22 | 2008-07-30 | ヤマハ株式会社 | Sound image localization setting device and program thereof |
US7966034B2 (en) * | 2003-09-30 | 2011-06-21 | Sony Ericsson Mobile Communications Ab | Method and apparatus of synchronizing complementary multi-media effects in a wireless communication device |
US7239338B2 (en) * | 2003-10-01 | 2007-07-03 | Worldgate Service, Inc. | Videophone system and method |
US20050091578A1 (en) * | 2003-10-24 | 2005-04-28 | Microsoft Corporation | Electronic sticky notes |
US20050138574A1 (en) * | 2003-12-17 | 2005-06-23 | Jyh-Han Lin | Interactive icon |
US7178111B2 (en) * | 2004-08-03 | 2007-02-13 | Microsoft Corporation | Multi-planar three-dimensional user interface |
US20060156237A1 (en) * | 2005-01-12 | 2006-07-13 | Microsoft Corporation | Time line based user interface for visualization of data |
US8421805B2 (en) * | 2006-02-09 | 2013-04-16 | Dialogic Corporation | Smooth morphing between personal video calling avatars |
CA2650612C (en) * | 2006-05-12 | 2012-08-07 | Nokia Corporation | An adaptive user interface |
US20080022208A1 (en) * | 2006-07-18 | 2008-01-24 | Creative Technology Ltd | System and method for personalizing the user interface of audio rendering devices |
Also Published As
Publication number | Publication date |
---|---|
US8930002B2 (en) | 2015-01-06 |
US20080089525A1 (en) | 2008-04-17 |
US20180013877A1 (en) | 2018-01-11 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: CONVERSANT WIRELESS LICENSING S.A R.L., LUXEMBOURG Free format text: CHANGE OF NAME;ASSIGNOR:CORE WIRELESS LICENSING S.A.R.L.;REEL/FRAME:043814/0274 Effective date: 20170720 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |