WO2015021805A1 - Audio calling method and device thereof - Google Patents

Audio calling method and device thereof

Info

Publication number
WO2015021805A1
Authority
WO
WIPO (PCT)
Prior art keywords
modulating
information
trigger condition
audio file
sound trigger
Prior art date
Application number
PCT/CN2014/078648
Other languages
French (fr)
Inventor
Xiayu WU
Lian GAO
Xuejian JIANG
Original Assignee
Tencent Technology (Shenzhen) Company Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Company Limited
Publication of WO2015021805A1 publication Critical patent/WO2015021805A1/en

Classifications

    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07F COIN-FREED OR LIKE APPARATUS
    • G07F17/00 Coin-freed apparatus for hiring articles; Coin-freed facilities or services
    • G07F17/32 Coin-freed apparatus for hiring articles; Coin-freed facilities or services for games, toys, sports, or amusements
    • G07F17/3225 Data transfer within a gaming system, e.g. data sent between gaming machines and users
    • G07F17/323 Data transfer within a gaming system, e.g. data sent between gaming machines and users wherein the player is informed, e.g. advertisements, odds, instructions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G06F3/165 Management of the audio stream, e.g. setting of volume, audio stream path
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 Circuits for transducers, loudspeakers or microphones

Definitions

  • The mobile phone 400 includes at least one sensor 450, such as a light sensor, a motion sensor, or other sensors.
  • The light sensors include an ambient light sensor, which adjusts the brightness of the display panel 441 according to the ambient light, and a proximity sensor, which turns off the display panel 441 and/or the backlight when the mobile phone is moved to the ear.
  • An accelerometer, as one of the motion sensors, can detect the magnitude of acceleration in every direction (generally on three axes) and, when the phone is still, the magnitude and direction of gravity, which is applicable to applications that identify the attitude of the phone (such as switching between landscape and portrait, related games, and magnetometer attitude calibration) and to vibration-recognition functions (such as a pedometer or tap detection).
  • The mobile phone 400 can also be equipped with other sensors (such as a gyroscope, barometer, hygrometer, thermometer, or infrared sensor), whose detailed descriptions are omitted here.
  • The audio circuit 460, the speaker 461, and the microphone 462 provide an audio interface between the user and the mobile phone.
  • The audio circuit 460 converts received audio data into an electrical signal and transmits it to the speaker 461, which converts it into a sound signal for output.
  • Conversely, a collected sound signal is converted into an electrical signal, which the audio circuit 460 receives and converts into audio data.
  • The audio data is then output to the processor 480 for processing, and afterwards sent to another mobile phone via the RF circuit 410 or stored in the memory 420 for further processing.
  • WiFi is a short-range wireless transmission technology providing wireless broadband Internet access, through which the mobile phone can help the user send and receive email, browse the web, and access streaming media.
  • Although the WiFi module 470 is illustrated in Fig. 4, it should be understood that it is not a necessary component of the mobile phone and can be omitted as needed without changing the essence of the present disclosure.
  • The processor 480 is the control center of the mobile phone; it connects with every part of the mobile phone through various interfaces and circuits, and performs the phone's functions and processes data by running the stored software.
  • The processor 480 may include one or more processing units.
  • The processor 480 can integrate an application processor and a modem processor: the application processor mainly handles the operating system, user interface, and applications, while the modem processor handles wireless communication. It should be understood that integrating the modem processor into the processor 480 is optional.
  • The mobile phone 400 may include a power supply (such as a battery) that supplies power to each component; preferably, the power supply is connected to the processor 480 through a power management system, so as to manage charging, discharging, and power consumption.
  • The mobile phone 400 may also include a camera, a Bluetooth module, and the like, which are not illustrated.
  • The processor 480 in the terminal is configured to perform the following functions.
  • The sound trigger condition above is a piece of software logic that needs to produce sound; such logic varies, for example an art dynamic frame or a process trigger event, and is not limited in the embodiments of the present disclosure.
  • The modulating information mentioned above can be any parameter used to adjust the audio files, such as a volume control parameter, a track control parameter, or an audio mixing control parameter. Persons skilled in the art can determine the detailed content of the modulating information according to the modulation demand, which is not limited here.
  • A parameter of the modulating information can be one or several fixed values, or a range; the rule applied when the parameter is several fixed values or a range is described in detail elsewhere herein.
  • The present disclosure calls both the audio files and the modulating information, processes the audio files according to the modulating information, and then plays them, so the audio files can be reused in different scenes if different modulating information is configured. Because the modulating information is much smaller than the audio files, the solution can reuse the audio files at little additional cost, thereby reducing the storage space taken up by the audio files and the memory consumed, and improving program running efficiency.
  • This embodiment provides one way to configure the modulating information: the processor 480 is further configured to obtain the modulating information configured together with the sound trigger condition. It should be understood that this solution only requires that a correspondence exist between the sound trigger condition and the modulating information, so that the device can retrieve the information associated with the condition.
  • The configuring solution described above is merely one preferred embodiment, not the only one.
  • A sound trigger condition can be, for example, an art dynamic frame or a process trigger event.
  • The number of audio files to be called may sometimes be two or more; for example, when a character in a game turns around, the action is accompanied by a breath sound and the sound of rustling clothing.
  • In view of this, the present disclosure provides the following solution: the processor 480 is configured to obtain two or more audio files corresponding to the sound trigger condition.
  • The processor 480 is further configured to obtain one or more pieces of modulating information corresponding to each audio file associated with the sound trigger condition.
  • Each piece of modulating information includes two or more pieces of modulating sub-information, each marked with its own valid period.
  • The processor 480 is further configured to modulate the audio file according to the modulating sub-information, in the time order of the sub-information's valid periods.
  • An optional embodiment works without requiring a large amount of modulating information: if each piece of modulating information includes a range for a parameter and a modification rule to change the parameter, the processor 480 is configured to modulate the corresponding audio file according to the parameter of the modulating information and the modification rule.
  • The device in the embodiments above is divided into multiple units according to functional logic, but the division is not limited to this; any division is viable as long as the corresponding functions can be performed.
  • The names of the units are only for distinguishing them from one another and do not limit the present disclosure.

Abstract

An audio calling method includes running a program process in a device. If a sound trigger condition is satisfied, the device obtains an audio file corresponding to the sound trigger condition and obtains modulating information corresponding to the sound trigger condition. The device modulates the audio file according to the modulating information and plays the modulated audio file. Since both the audio files and the modulating information are called, and the audio files are processed according to the modulating information before being played, the audio files can be reused in different scenes if different modulating information is configured.

Description

AUDIO CALLING METHOD AND DEVICE THEREOF
CROSS-REFERENCE TO RELATED APPLICATIONS
[001] This application claims priority to Chinese Patent Application No. 201310351463.8, filed on August 13, 2013, which is hereby incorporated by reference in its entirety.
FIELD
[002] The present disclosure relates to the field of information technology, and more particularly to an audio calling method and a device thereof.
BACKGROUND
[003] Audio files are often called while software applications run. Conventionally, the audio calling method for such software is implemented as follows: a single audio file, or a set of multiple audio files, is configured to correspond to a piece of software logic that needs to produce sound (the so-called sound trigger condition). Such software logic varies and includes, for example, art dynamic frames and process trigger events.
[004] In this way, the called audio files correspond to individual pieces of software logic, and each piece of logic has its own audio files. The audio files therefore take up considerable storage space and consume a great deal of memory during the calling process, which reduces program running efficiency.
SUMMARY
[005] The present disclosure provides an audio calling method and a device thereof that reuse audio files, which reduces the storage space taken up by the audio files and the memory consumed, thereby improving program running efficiency.
[006] An audio calling method comprises: running a program process; if a sound trigger condition is satisfied, obtaining an audio file corresponding to the sound trigger condition and obtaining modulating information corresponding to the sound trigger condition; modulating the audio file according to the modulating information; and playing the modulated audio file.
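For illustration only, the following Python sketch outlines this flow in one place; the class, method, and variable names (AudioEngine, tick, bindings, and so on) are assumptions of the sketch and are not taken from the disclosure.

```python
# A minimal sketch of the claimed flow, not the patent's actual implementation.
# All names (AudioEngine, bindings, modulate, play, ...) are illustrative assumptions.

class AudioEngine:
    def __init__(self, bindings):
        # bindings: trigger id -> (predicate over program state, list of (audio file, modulating info))
        self.bindings = bindings

    def tick(self, state):
        """Called repeatedly while the program process runs."""
        for trigger_id, (is_satisfied, entries) in self.bindings.items():
            if is_satisfied(state):                      # sound trigger condition satisfied
                for audio_file, modulating_info in entries:
                    modulated = self.modulate(audio_file, modulating_info)
                    self.play(modulated)                 # play the modulated audio file

    def modulate(self, audio_file, modulating_info):
        # Placeholder: apply volume / track / mixing parameters to the decoded audio.
        return (audio_file, modulating_info)

    def play(self, modulated):
        # Placeholder: hand the modulated buffer to the platform's audio output.
        print("playing", modulated)
```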
[007] An audio calling device includes a hardware processor and a non-transitory storage medium accessible to the processor. The non-transitory storage medium is configured to store units including: a trigger determination unit, configured to determine whether a sound trigger condition is satisfied while a program process is running; an obtaining unit, configured to obtain an audio file corresponding to the sound trigger condition and to obtain modulating information corresponding to the sound trigger condition when the sound trigger condition is satisfied; a modulation unit, configured to modulate the audio file according to the modulating information obtained by the obtaining unit; and a playing unit, configured to play the modulated audio file.
[008] The present disclosure calls both the audio files and the modulating information, processes the audio files according to the modulating information, and then plays them; thus the audio files can be reused in different scenes if different modulating information is configured. Because the data size of the modulating information is much smaller than that of the audio files, the solution of the present disclosure can reuse the audio files with little additional memory. The solution thereby reduces the storage space taken up by the audio files and the memory consumed, improving program running efficiency.
BRIEF DESCRIPTION OF THE DRAWINGS
[009] To explain the technical solutions of the embodiments of the present disclosure, the accompanying drawings used in the embodiments are described below.
Apparently, the following drawings merely illustrate some embodiments of the disclosure; persons skilled in the art can obtain other drawings from these drawings without creative work.
[010] Fig. 1 is a flowchart of the method according to an embodiment of the present disclosure;
[011] Fig. 2 is a schematic view of the levels of the method according to an embodiment of the present disclosure;
[012] Fig. 3 is a block diagram of the device according to an embodiment of the present disclosure; and
[013] Fig. 4 is a block diagram of the device according to another embodiment of the present disclosure.
DETAILED DESCRIPTION OF THE DRAWINGS
[014] Reference throughout this specification to "one embodiment," "an embodiment," "example embodiment," or the like in the singular or plural means that one or more particular features, structures, or characteristics described in connection with an embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment," "in an example embodiment," or the like in the singular or plural in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[015] The terminology used in the description of the disclosure herein is for the purpose of describing particular examples only and is not intended to be limiting of the disclosure. As used in the description of the disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Also, as used in the description herein and throughout the claims that follow, the meaning of "in" includes "in" and "on" unless the context clearly dictates otherwise. It will also be understood that the term
"and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "may include," "including," "comprises," and/or "comprising," when used in this specification, specify the presence of stated features, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, operations, elements, components, and/or groups thereof.
[016] As used herein, the term "module" or "unit" may refer to, be part of, or include an Application Specific Integrated Circuit (ASIC); an electronic circuit; a combinational logic circuit; a field programmable gate array (FPGA); a processor
(shared, dedicated, or group) that executes code; other suitable hardware components that provide the described functionality; or a combination of some or all of the above, such as in a system-on-chip. The term module or unit may include memory (shared, dedicated, or group) that stores code executed by the processor.
[017] The exemplary environment may include a server, a client, and a communication network. The server and the client may be coupled through the communication network for information exchange, such as sending/receiving identification information, sending/receiving data files such as splash screen images, etc. Although only one client and one server are shown in the environment, any number of terminals or servers may be included, and other devices may also be included.
[018] The communication network may include any appropriate type of communication network for providing network connections to the server and client or among multiple servers or clients. For example, communication network may include the Internet or other types of computer networks or telecommunication networks, either wired or wireless. In a certain embodiment, the disclosed methods and apparatus may be implemented, for example, in a wireless network that includes at least one client.
[019] In some cases, the client may refer to any appropriate user terminal with certain computing capabilities, such as a personal computer (PC), a work station computer, a server computer, a hand-held computing device (tablet), a smart phone or mobile phone, or any other user-side computing device. In various embodiments, the client may include a network access device. The client may be stationary or mobile.
[020] A server, as used herein, may refer to one or more server computers configured to provide certain server functionalities, such as database management and search engines. A server may also include one or more processors to execute computer programs in parallel.
[021] The solutions in the embodiments of the present disclosure are described clearly and completely below in combination with the accompanying drawings. Obviously, the described embodiments are only a part, not all, of the embodiments of the present disclosure. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present disclosure without creative effort shall fall within the protection scope of the present disclosure.
[022] Other aspects, features, and advantages of this disclosure will become apparent from the following detailed description when taken in conjunction with the accompanying drawings. Apparently, the embodiments described hereinafter are merely a part of the embodiments of the present disclosure, not all of them. Persons skilled in the art can obtain all other embodiments based on these embodiments without creative work, and such embodiments fall within the protection scope of the present disclosure.
[023] As shown in Fig. 1, the embodiment of the present disclosure provides an audio calling method including the following steps:
[024] Step 101: running a program process; if a sound trigger condition is satisfied, obtaining an audio file corresponding to the sound trigger condition and obtaining modulating information corresponding to the sound trigger condition.
[025] The sound trigger condition above is a piece of software logic that needs to produce sound; such logic varies, for example an art dynamic frame or a process trigger event, and is not limited in the embodiments of the present disclosure.
[026] The modulating information mentioned above can be any parameter used to adjust the audio files, such as a volume control parameter, a track control parameter, or an audio mixing control parameter. Persons skilled in the art can determine the detailed content of the modulating information according to the modulation demand, which is not limited here. For example, a parameter of the modulating information can be one or several fixed values, or a range; the rule applied when the parameter is several fixed values or a range is described in detail below.
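As a purely illustrative sketch, such modulating information could be represented as a small data structure; the field names (volume, track, mix_gain, volume_range) are assumptions of the sketch, not terms from the disclosure.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Illustrative container for one piece of modulating information; the field names and
# the particular parameters are assumptions for this sketch, not taken from the disclosure.
@dataclass
class ModulatingInfo:
    volume: float = 1.0                                 # volume control parameter (fixed value)
    track: int = 0                                      # track control parameter
    mix_gain: float = 0.0                               # audio mixing control parameter
    volume_range: Optional[Tuple[float, float]] = None  # a parameter may instead be given as a range

# A quiet variant reusing the same audio file could then be configured as:
quiet = ModulatingInfo(volume=0.3, volume_range=(0.2, 0.5))
```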
[027] Optionally, this embodiment provides one way to configure the modulating information: obtaining the modulating information corresponding to the sound trigger condition includes obtaining the modulating information that is configured together with the sound trigger condition. It should be understood that this solution only requires that a correspondence exist between the sound trigger condition and the modulating information, so that the device can retrieve the information associated with the condition. Accordingly, the configuring solution described above is merely one preferred embodiment, not the only one.
[028] For example, a sound trigger condition can be an art dynamic frame or a process trigger event. When the sound trigger condition is triggered, the number of audio files to be called may sometimes be two or more; for example, when a character in a game turns around, the action is accompanied by a breath sound and the sound of rustling clothing. In view of this, the present disclosure provides the following solution: obtaining the audio file corresponding to the sound trigger condition includes obtaining two or more audio files corresponding to the sound trigger condition.
[029] However, more than one sound effect may need to be presented when one audio file is triggered. In view of this, to reduce the number of trigger-condition determinations and the number of times the modulating information is fetched, the present disclosure provides the following quick capture method: each audio file corresponding to the sound trigger condition has one or more pieces of modulating information accordingly.
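A hypothetical binding table for the about-turn example could look as follows; the trigger id, file names, and parameter keys are made up for illustration and do not come from the disclosure.

```python
# Hypothetical binding table: one sound trigger condition maps to two audio files,
# and each audio file carries its own list of modulating information.
about_turn_bindings = {
    "player_about_turn": [                                          # the sound trigger condition
        {"audio": "breath.ogg",   "modulating_info": [{"volume": 0.6}]},
        {"audio": "clothing.ogg", "modulating_info": [{"volume": 0.4},
                                                      {"volume": 0.2, "track": 1}]},
    ],
}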
[030] Step 102, modulating the audio file according to the modulating information.
[031] Furthermore, when the sound trigger condition is satisfied, one or more audio files are called at one time, and each piece of modulating information includes two or more pieces of modulating sub-information, each marked with its own valid period. With this processing method, the present disclosure can produce complex and refined sound effects, which is applicable to scenes requiring high-quality sound. The solution is as follows:
[032] Modulating the audio file according to the modulating information includes modulating the audio file according to the modulating sub-information, in the time order of the sub-information's valid periods.
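A minimal sketch of this time-ordered application, assuming the audio is a plain list of samples and only a volume parameter is modulated; the dictionary keys used for the sub-information are illustrative.

```python
# Sketch of applying modulating sub-information in the time order of its valid periods.
# The audio is represented as a list of samples; keys like "valid_from" are assumptions.
def apply_sub_information(samples, sub_infos, sample_rate=44100):
    """sub_infos: list of dicts like {"valid_from": 0.0, "valid_to": 0.5, "volume": 0.8} (seconds)."""
    out = list(samples)
    for sub in sorted(sub_infos, key=lambda s: s["valid_from"]):   # time order of valid periods
        start = int(sub["valid_from"] * sample_rate)
        end = int(sub["valid_to"] * sample_rate)
        for i in range(start, min(end, len(out))):                 # modulate only within the period
            out[i] *= sub["volume"]
    return out
```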
[033] To obtain a complex sound effect, an optional embodiment works without requiring a large amount of modulating information. If each piece of modulating information includes a range for a parameter and a modification rule to change the parameter, then modulating the audio file according to the modulating information includes modulating the corresponding audio file according to the parameter of the modulating information and the modification rule.
[034] The modification rule may include rules that change the parameter according to the game scenario in which the audio file is played. For example, a modification rule may include a trigger condition that applies when the audio file is played. In an action game, an audio file package corresponding to a specific action may include a few audio files, one of which may be randomly selected when the player performs the action. When the player performs the action repeatedly, there is no need to play the ending portion of the audio file. In this case, the trigger condition may force the previously played audio file to stop abruptly and completely.
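As a sketch only, such a stop-on-repeat rule could be expressed as follows; the mixer object and its play()/is_playing()/stop() interface are assumptions about the playback layer, not the patent's API.

```python
import random

# Sketch of the stop-on-repeat rule described above; all names are illustrative.
class ActionSoundRule:
    def __init__(self, audio_files):
        self.audio_files = audio_files   # the few interchangeable files in the action's package
        self.current = None              # channel of the file currently playing, if any

    def on_action(self, mixer):
        if self.current is not None and self.current.is_playing():
            self.current.stop()                      # cut the previous file off abruptly
        chosen = random.choice(self.audio_files)     # randomly select one file from the package
        self.current = mixer.play(chosen)
        return chosen
```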
[035] The audio file package may further include a second audio file that simulates the player shouting when the player performs the action. For example, the player character may shout "Ah" when using a sword to fight an enemy character, and this shout may be played at random if a certain condition is met. Accordingly, the modification rule may include a mute condition that instructs the terminal device not to play the second audio file.
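A minimal sketch of the mute condition; the predicate name and the playback probability are illustrative assumptions, not values from the disclosure.

```python
import random

# Sketch of the mute condition for the secondary "shout" file; names are illustrative.
def maybe_play_shout(mixer, shout_file, is_muted, probability=0.3):
    if is_muted():                        # modification rule: do not play the second audio file
        return None
    if random.random() < probability:     # otherwise the shout may be played at random
        return mixer.play(shout_file)
    return None
```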
[036] The audio file package may include two types of audio files. The first type of audio file relates to an input from the player, while the second type is not related to player input. For example, the first type may relate to hitting, running, getting hit, receiving a punch, or other actions performed or received by the player. The second type may relate to background music or other audio played in the background. Depending on the type of audio file, the modification rule may include a priority parameter that instructs the terminal device to extract the first type of audio files into memory directly.
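A sketch of how such a priority parameter might steer loading; the helper functions below are trivial stand-ins for whatever the platform's audio layer actually provides, and the "input_related" key is an assumption of the sketch.

```python
# Sketch of a priority-driven loading decision; all names and keys are illustrative.
def load_into_memory(path):
    with open(path, "rb") as f:          # first type: whole file pulled into memory up front
        return f.read()

def open_stream(path):
    return open(path, "rb")              # second type: read lazily while playing in the background

def prepare_audio(entry):
    if entry.get("priority") == "input_related":   # input-related, time-sensitive audio
        return load_into_memory(entry["path"])
    return open_stream(entry["path"])               # background music / ambient sound
```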
[037] Generally, the first type of audio file requires a timely response from the terminal device because the corresponding actions are time sensitive. The second type of audio file can tolerate some response delay because it is not time sensitive. Thus, the terminal device may extract the first type of audio files directly into memory, and may stream the second type, because music and ambient sounds are usually played in the background and can be delayed without affecting the gaming experience.
[038] Step 103: playing the modulated audio file.
[039] In this embodiment, the audio file and the modulating information are called, and the audio file is played after being processed according to the modulating information.
[040] Thus the audio files can be reused in different scenes if different modulating information is configured. Because the data quantity of the modulating information is obviously smaller than that of the audio files, the solution of the present disclosure can reuse the audio files at little additional cost, thereby reducing the storage space taken up by the audio files and the memory consumed, so as to improve program running efficiency.
[041] The following embodiments are examples in which the present disclosure is applied to game software. Game software is a typical audio application, that is, software with high-quality audio and complex sound effects. However, it is well known to persons skilled in the art that the embodiments of the present disclosure are applicable to any software that needs to call audio and are not limited to game software.
[042] The solution of this embodiment is: binding the sound trigger condition together with the corresponding modulating information; modulating the sound according to the modulating information when the trigger condition is met; and playing the modulated sound.
[043] As shown in Fig. 2, the solution is divided into three levels according to the storage and usage of information and files. The first level is the storage level, which holds the audio files, the modulating information, and the trigger conditions. The second level is the judgment and processing logic, which determines whether a trigger condition is met and then modulates and plays the audio files. The third level accomplishes the playing of the modulated audio files.
[044] The following is an example in which the playing of an art graphics frame in the game software serves as the sound trigger condition, so that sound is played in synchronization with the graphics. Assume a graphics frame has three corresponding sound scripts (each with a corresponding audio file and modulation parameters). When the art graphics are played, the three sound scripts bound to the first art graphics frame are triggered, and the sound effect of the sound to be played is then adjusted according to the modulating information.
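A hypothetical binding for such a frame is sketched below; the frame id, file name, chosen parameters, and the mixer.play() signature are all assumptions made for illustration.

```python
# Hypothetical binding of one art graphics frame to three sound scripts. Each script pairs
# an audio file (here the same file three times) with its own modulation parameters.
frame_bindings = {
    "attack_frame_01": [
        {"audio": "swing.ogg", "volume": 1.0, "pitch": 1.0},
        {"audio": "swing.ogg", "volume": 0.5, "pitch": 1.2},   # same file, different modulation
        {"audio": "swing.ogg", "volume": 0.3, "pitch": 0.8},
    ],
}

def on_frame_played(frame_id, mixer):
    for script in frame_bindings.get(frame_id, []):            # the bound sound scripts are triggered
        mixer.play(script["audio"], volume=script["volume"], pitch=script["pitch"])
```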
[045] These three sound scripts can point to the same audio file (or a mapping of that file) or to different audio files, so a sound is treated not as a unit defined by a file or a called mapping, but as a unit defined by a binding relationship. When the trigger condition is met and playback begins, a temporary program is established, and the modulating information configured at binding time is then processed.
[046] If the three bound sound scripts pertain to the same audio file, or to a mapping of the audio file, the actual playing effect of this frame depends on the modulation applied by the modulating information, by which the playing effects of the three sound scripts can be made to differ.
[047] Accordingly, only one audio file is needed in this embodiment, whereas three audio files are needed in the conventional method; the storage space taken up by the present disclosure is thus reduced to roughly one third of that taken up conventionally.
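A back-of-the-envelope illustration of that saving, using made-up file sizes rather than figures from the disclosure:

```python
# Illustrative numbers only (not from the disclosure): three separately stored 2 MB variants
# versus one 2 MB file reused with three small sets of modulating parameters.
conventional = 3 * 2_000_000              # three individual audio files
proposed = 1 * 2_000_000 + 3 * 64         # one audio file plus three tiny parameter records
print(round(proposed / conventional, 3))  # ~0.333, i.e. roughly one third of the storage
```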
[048] The present disclosure also provides an audio calling device 300, as shown in Fig. 3. The device 300 includes a hardware processor 310 and a non-transitory storage medium 320 accessible to the hardware processor 310. The non-transitory storage medium 320 is configured to store at least the following units:
[049] A trigger determination unit 301, configured to determine whether a sound trigger condition is satisfied while a program process is running.
[050] The sound trigger condition is a piece of software logic that needs to produce sound; such logic varies, for example an art dynamic frame or a process trigger event, and is not limited in the embodiments of the present disclosure.
[051] An obtaining unit 302, configured to obtain an audio file corresponding to the sound trigger condition and to obtain modulating information corresponding to the sound trigger condition.
[052] The modulating information mentioned above can be any parameter used to adjust the audio files, such as a volume control parameter, a track control parameter, or an audio mixing control parameter. Persons skilled in the art can determine the detailed content of the modulating information according to the modulation demand, which is not limited here. For example, a parameter of the modulating information can be one or several fixed values, or a range; the rule applied when the parameter is several fixed values or a range is described in detail below.
[053] A modulation unit 303, configured to modulate and process the audio file according to the modulating information obtained by the obtaining unit 302.
[054] A playing unit 304, configured to play the modulated audio file processed by the modulation unit 303.
[055] Optionally, this embodiment provides one way to configure the modulating information: the obtaining unit 302 is configured to obtain the modulating information that is configured together with the sound trigger condition.
[056] It should be understood that this solution only requires that a correspondence exist between the sound trigger condition and the modulating information, so that the device can retrieve the information associated with the condition. Accordingly, the configuring solution described above is merely one preferred embodiment, not the only one.
[057] For example, a sound trigger condition can be an art dynamic frame or a process trigger event. When the sound trigger condition is triggered, the number of audio files to be called may sometimes be two or more; for example, when a character in a game turns around, the action is accompanied by a breath sound and the sound of rustling clothing. In view of this, the present disclosure provides the following solution: the obtaining unit 302 is used for obtaining two or more audio files corresponding to the sound trigger condition.
[058] However, more than one sound effect may need to be presented when one audio file is triggered. In view of this, to reduce the number of trigger-condition determinations and the number of times the modulating information is fetched, the present disclosure provides the following quick capture manner: the obtaining unit 302 is further used for obtaining one or more pieces of modulating information corresponding to each audio file associated with the sound trigger condition.
[059] Furthermore, when the sound trigger condition is satisfied, one or more audio files are called at one time, and each piece of modulating information obtained by the obtaining unit 302 includes two or more pieces of modulating sub-information, each marked with its own valid period. With this processing method, the present disclosure can produce complex and refined sound effects, which is applicable to scenes requiring high-quality sound. The solution is as follows:
[060] The modulation unit 303 is used for modulating the audio file according to the modulating sub-information, in the time order of the sub-information's valid periods.
[061] To obtain a complex sound effect, an optional embodiment works without requiring a large amount of modulating information. If each piece of modulating information includes a range for a parameter and a modification rule to change the parameter, then the modulation unit 303 is configured to modulate and process the corresponding audio file according to the parameter of the modulating information and the modification rule.
[062] The present disclosure also provides an audio calling device according to another embodiment, as shown in Fig. 4. To simplify the description, only the portions relevant to the present embodiment are illustrated; details not shown can be found in the method described above. The device may be any terminal device, such as a mobile phone, a tablet PC, a PDA (Personal Digital Assistant), a POS (Point of Sale) terminal, a car PC, or any computing device having a processor. The following takes a mobile phone as an example.
[063] Fig. 4 is a block diagram of the part of a mobile phone related to this embodiment. The mobile phone includes a radio frequency (RF) circuit 410, a memory 420, an input unit 430, a display unit 440, a sensor 450, an audio circuit 460, a wireless fidelity (WiFi) module 470, a processor 480, a power supply 490, and the like. Persons skilled in the art will understand that the structure illustrated in Fig. 4 does not limit the mobile phone; components can be added or omitted, or combined or arranged differently.
[064] The following is a detailed description of the structure of the mobile phone with reference to Fig. 4.
[065] The RF circuit 410 is configured to receive and send signals during a call or during the process of receiving and sending messages. Specifically, the RF circuit 410 receives downlink information from the base station and sends it to the processor 480, or sends uplink data to the base station. Generally, the RF circuit 410 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a diplexer, and the like. In addition, the RF circuit 410 can communicate with a network or other devices by wireless communication. Such wireless communication can use any communication standard or protocol, including, but not limited to, Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, and Short Messaging Service (SMS).
[066] The memory 420 is configured to store software programs and modules which are run by the processor 480, so as to perform the various functional applications and data processing of the mobile phone. The memory 420 mainly includes a program storage area and a data storage area. Specifically, the program storage area can store the operating system and at least one application program with a required function (such as a sound playing function, an image playing function, etc.). The data storage area can store data created according to the actual use of the mobile phone (such as audio data, a phonebook, etc.). Furthermore, the memory 420 can be a high-speed random access memory, or a nonvolatile memory such as a disk storage device, a flash memory device, or another solid-state memory device.
[067] The input unit 430 is configured to receive entered number or character information, and to generate key signal inputs related to user settings and function control of the mobile phone 400. Specifically, the input unit 430 includes a touch panel 431 and other input devices 432. The touch panel 431, also called a touch screen, can collect the user's touch operations on or near it (for example, operations performed on or near the touch panel 431 with a finger or a stylus) and drive the corresponding connection device according to a preset program. Optionally, the touch panel 431 includes two portions: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal produced by the touch operation, and sends the signal to the touch controller. The touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 480, and then receives and executes commands sent by the processor 480. In addition, besides the touch panel 431, the input unit 430 can include other input devices 432, such as one or more of a physical keyboard, function keys (such as volume control keys, a switch key, etc.), a trackball, a mouse, and an operating lever.
[068] The display unit 440 is configured to display information entered by the user or information supplied to the user, as well as the menus of the mobile phone. For example, the display unit 440 includes a display panel 441, which may be a Liquid Crystal Display (LCD) or an Organic Light-Emitting Diode (OLED) display. Furthermore, the display panel 441 can be covered by the touch panel 431; after a touch operation is detected on or near the touch panel 431, it is sent to the processor 480 to determine the type of the touch event, and the processor 480 then supplies the corresponding visual output to the display panel 441 according to the type of the touch event. Although in Fig. 4 the touch panel 431 and the display panel 441 are two separate components implementing the input and output of the mobile phone, in some embodiments they can be integrated to implement both input and output.
[069] Furthermore, the mobile phone 400 includes at least one sensor 450, such as a light sensor, a motion sensor, or other sensors. For example, the light sensors include an ambient light sensor, which adjusts the brightness of the display panel 441 according to the ambient light, and a proximity sensor, which turns off the display panel 441 and/or the backlight when the mobile phone is moved to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in every direction (generally along three axes) and detect the magnitude and direction of gravity when the phone is stationary, which is applicable to applications that identify the attitude of the mobile phone (such as switching between landscape and portrait screens, related games, and magnetometer attitude calibration), vibration-recognition functions (such as a pedometer or percussion detection), and the like. The mobile phone 400 can also be provided with other sensors (such as a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc.), whose detailed descriptions are omitted here.
[070] The audio circuit 460, the speaker 461, and the microphone 462 provide an audio interface between the user and the mobile phone. The audio circuit 460 converts received audio data into electrical signals and transmits them to the speaker 461, which converts them into sound signals for output. Conversely, the sound signals collected by the microphone 462 are converted into electrical signals, which the audio circuit 460 receives and converts into audio data. The audio data is then output to the processor 480 for processing and sent to another mobile phone via the RF circuit 410, or sent to the memory 420 for further processing.
[071] WiFi is a short-range wireless transmission technology providing wireless broadband Internet access, by which the mobile phone can help the user receive and send email, browse the web, access streaming media, and the like. Although the WiFi module 470 is illustrated in Fig. 4, it should be understood that the WiFi module 470 is not a necessary component of the mobile phone and can be omitted according to actual demand without changing the essence of the present disclosure.
[072] The processor 480 is the control center of the mobile phone; it connects to every part of the mobile phone through various interfaces and circuits, and performs the various functions of the mobile phone and processes data by running or executing the software programs/modules stored in the memory 420 and calling the data stored in the memory 420, thereby monitoring the mobile phone as a whole. Optionally, the processor 480 may include one or more processing units. Preferably, the processor 480 can integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, applications, etc., and the modem processor handles wireless communication. It can be understood that the modem processor need not be integrated into the processor 480.
[073] Furthermore, the mobile phone 400 may include a power supply (such as a battery) supplying power to each component. Preferably, the power supply is connected to the processor 480 through a power management system, so as to manage charging, discharging, and power consumption.
[074] In addition, the mobile phone 400 may include a camera, a Bluetooth module, etc., which are not illustrated.
[075] In this embodiment, the processor 480 in the terminal performs the following functions.
[076] Running the program process; if a sound trigger condition is satisfied, obtaining an audio file corresponding to the sound trigger condition and obtaining modulating information corresponding to the sound trigger condition; modulating the audio file according to the modulating information; and playing the modulated audio file.
[077] The sound trigger condition above is a software logic that needs to produce sound; such software logics are various, such as an art dynamic frame, a process trigger event, and the like, and are not limited in the embodiments of the present disclosure.
[078] The modulating information mentioned above is any parameter used to adjust the audio files, such as a volume control parameter, a track control parameter, an audio mixing control parameter, and the like. Persons skilled in the art can determine the detailed content of the modulating information according to the modulating demand, which is not limited here. For example, a parameter of the modulating information can be one or several fixed values, or a range; the rule corresponding to the parameter when the parameter is several fixed values or a range will be described in detail hereinafter.
[079] In this way, since the present disclosure calls the modulating information together with the audio files, processes the audio files according to the modulating information, and finally plays them, the audio files can be reused in different scenes as long as different modulating information is configured. Because the data quantity of the modulating information is obviously smaller than that of the audio files, the solution of the present disclosure can reuse the audio files economically, thereby reducing the storage space taken up by the audio files and reducing the memory to be occupied, so as to improve program running efficiency.
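The overall flow just described can be pictured with the short Python sketch below; load_audio, apply_modulation, and the play callable are placeholders invented for the example, and the point is only that one stored audio file plus small per-scene modulating information can replace many stored sound variants.

def handle_sound_trigger(condition: str, library: dict, play) -> None:
    """When a trigger condition is satisfied: obtain file(s) and modulating
    information, modulate, then play (placeholder names, not a real API)."""
    for audio_file, modulating_info in library.get(condition, []):
        samples = load_audio(audio_file)                      # same file reused across scenes
        samples = apply_modulation(samples, modulating_info)  # e.g. volume/track/mixing parameters
        play(samples)

def load_audio(path: str) -> bytes:
    with open(path, "rb") as f:           # simplistic loader, enough for the sketch
        return f.read()

def apply_modulation(samples: bytes, info: dict) -> bytes:
    # Placeholder: a real implementation would scale amplitude, route tracks,
    # or mix streams according to the parameters in `info`.
    return samples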
[080] Optionally, this embodiment provides a configuring solution of the modulating information: the processor 480 is further configured to obtain the modulating information configured synchronously in the sound trigger condition. It should be understood that this solution can be achieved as long as a corresponding relationship exists between the sound trigger condition and the modulating information, which can be associated with certain information so that it can be captured by the device. Accordingly, the configuring solution described above is merely one preferred embodiment, not the only one.
[081] For example, a sound trigger condition can be an art dynamic frame or a process trigger event, etc. When the sound trigger condition is triggered, the number of audio files to be called may sometimes be two or more; for example, an about-turn action of a character in a game may be accompanied by a breath sound and a sound of clothing friction. In view of this, the present disclosure provides the following solution: the processor 480 is specifically configured to obtain two or more audio files corresponding to the sound trigger condition.
[082] However, more than one sound effect may be presented when one audio file is triggered. In view of this, to reduce the number of determinations of the sound trigger condition and the number of times the modulating information is captured, the present disclosure provides the following quick capture manner: the processor 480 is configured to obtain one or more pieces of modulating information corresponding to each audio file that corresponds to the sound trigger condition.
[083] Furthermore, when the sound trigger condition is satisfied, one or more audio files will be called at one time, and each piece of modulating information includes two or more pieces of modulating sub-information, each marked with an individual valid period. With this processing method for complex sound-effect situations, the present disclosure can produce a complex and exquisite sound effect, which is applicable to scenes requiring high-quality sound effects. The solution is as follows:
[084] The processor 480 is further configured to modulate the audio file corresponding to the modulating sub-information according to the time order of the sub-information in their valid periods.
[085] To obtain a complex sound effect, an optional embodiment can be performed without a large number of pieces of modulating information. If each piece of modulating information includes a range of a parameter of the modulating information and a modification rule to change the parameter, then the processor 480 is configured to modulate the audio file corresponding to the modulating sub-information according to the parameter of the modulating information and the modification rule.
[086] It should be noted that the device in the embodiment mentioned above is divided into multiple units according to functional logic, but the division is not limited thereto; any division is viable as long as the corresponding functions can be performed. In addition, the names of the units are merely for distinguishing them from each other and are not limiting.
[087] Moreover, persons skilled in the art will understand that part or all of the steps in the embodiments mentioned above can be accomplished by a program instructing the related hardware. Such a program can be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disk.
[088] While the disclosure has been described in connection with what are presently considered to be the most practical and preferred embodiments, it is to be understood that the disclosure is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the disclosure.

Claims

What is claimed is:
1. A method for playing audio, comprising:
running, by a computing device, a program process, and, if a sound trigger condition is satisfied, obtaining an audio file corresponding to the sound trigger condition;
obtaining, by the computing device, modulating information corresponding to the sound trigger condition;
modulating, by the computing device, the audio file according to the modulating information; and
playing, by the computing device, the modulated audio file.
2. The method according to claim 1, wherein obtaining modulating information corresponding to the sound trigger condition comprises:
obtaining the modulating information configured synchronously in the sound trigger condition.
3. The method according to claim 1, wherein obtaining the audio file corresponding to the sound trigger condition comprises:
obtaining a plurality of audio files corresponding to the sound trigger condition.
4. The method according to claim 1, wherein each audio file corresponding to the sound trigger condition comprises one or more piece(s) of modulating information accordingly.
5. The method according to any one of claims 1 to 4, wherein each piece of modulating information comprises two or more pieces of modulating sub-information marked with individual valid periods; and wherein modulating the audio file according to the modulating information comprises: modulating the audio file corresponding to the modulating sub-information according to the time order of the sub-information in their valid periods.
6. The method according to any one of claims 1 to 3, wherein if each piece of modulating information includes a range of a parameter of the modulating information and a modification rule to change the parameter, modulating the audio file according to the modulating information comprises:
modulating the audio file corresponding to the modulating sub-information, according to the parameter of the modulation information and the modification rule.
7. An audio calling device, comprising a hardware processor and a non-transitory storage medium accessible to the hardware processor, the non-transitory storage medium configured to store units comprising:
a trigger determination unit, configured to determine whether a sound trigger condition is satisfied while a program process is running;
an obtaining unit, configured to obtain an audio file corresponding to the sound trigger condition and to obtain modulating information corresponding to the sound trigger condition when the sound trigger condition is satisfied;
a modulation unit, configured to modulate the audio file according to the modulating information obtained by the obtaining unit; and
a playing unit, configured to play the modulated audio file.
8. The device according to claim 7, wherein the obtaining unit is further configured to obtain the modulating information configured synchronously in the sound trigger condition.
9. The device according to claim 7, wherein the obtaining unit is further configured to obtain a plurality of audio files corresponding to the sound trigger condition.
10. The device according to claim 7, wherein the obtaining unit is further configured to obtain one or more piece(s) of modulating information corresponding to each audio file which is corresponding to the sound trigger condition.
11. The device according to any one of claim 7 to claim 10, wherein each piece of modulating information comprises two or more pieces of modulating sub-information marked with individual valid periods, and the modulation unit is further configured to modulate the audio file corresponding to the modulating sub-information according to the time order of the sub-information in their valid periods.
12. The device according to any one of claim 7 to claim 9, wherein if each piece of modulating information includes a range of a parameter of the modulating information and a modification rule to change the parameter, the modulation unit is further configured to modulate the audio file corresponding to the modulating sub-information, according to the parameter of the modulation information and the modification rule.
13. An audio calling device, comprising a hardware processor and a non-transitory storage medium accessible to the hardware processor, wherein the device is configured to: determine whether a sound trigger condition is satisfied while a program process is running;
obtain an audio file corresponding to the sound trigger condition and obtain modulating information corresponding to the sound trigger condition when the sound trigger condition is satisfied;
modulate the audio file according to the obtained modulating information; and
play the modulated audio file.
14. The device according to claim 13, further configured to obtain the modulating information configured synchronously in the sound trigger condition.
15. The device according to claim 13, further configured to obtain a plurality of audio files corresponding to the sound trigger condition.
16. The device according to claim 13, further configured to obtain one or more piece(s) of modulating information corresponding to each audio file which is corresponding to the sound trigger condition.
17. The device according to any one of claims 13 to 16, wherein each piece of modulating information comprises two or more pieces of modulating sub-information marked with individual valid periods, and the device is further configured to modulate the audio file corresponding to the modulating sub-information according to the time order of the sub-information in their valid periods.
18. The device according to any one of claims 13 to 15, wherein if each piece of modulating information includes a range of a parameter of the modulating information and a modification rule to change the parameter, the device is further configured to modulate the audio file corresponding to the modulating sub-information, according to the parameter of the modulation information and the modification rule.
PCT/CN2014/078648 2013-08-13 2014-05-28 Audio calling method and device thereof WO2015021805A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310351463.8 2013-08-13
CN201310351463.8A CN104375799A (en) 2013-08-13 2013-08-13 Audio invoking method and audio invoking device

Publications (1)

Publication Number Publication Date
WO2015021805A1 true WO2015021805A1 (en) 2015-02-19

Family ID=52468001

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/078648 WO2015021805A1 (en) 2013-08-13 2014-05-28 Audio calling method and device thereof

Country Status (2)

Country Link
CN (1) CN104375799A (en)
WO (1) WO2015021805A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105959481B (en) 2016-06-16 2019-04-30 Oppo广东移动通信有限公司 A kind of control method and electronic equipment of scene audio

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100724837B1 (en) * 2003-08-25 2007-06-04 엘지전자 주식회사 Method for managing audio level information and method for controlling audio output level in digital audio device
CN100479055C (en) * 2006-04-11 2009-04-15 北京金山软件有限公司 Audio playing method and system in game of mobile phone
CN100556101C (en) * 2007-01-26 2009-10-28 深圳创维-Rgb电子有限公司 A kind of television voice processing apparatus and method
CN101697644A (en) * 2009-10-29 2010-04-21 青岛海信移动通信技术股份有限公司 Mixed sound output method and mixed sound output related device of mobile terminal
CN103209370A (en) * 2012-01-16 2013-07-17 联想(北京)有限公司 Electronic equipment and method for adjusting file sound parameters output by sound playing device

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080161100A1 (en) * 2002-09-16 2008-07-03 Igt Method and apparatus for player stimulation
US20070006708A1 (en) * 2003-09-09 2007-01-11 Igt Gaming device which dynamically modifies background music based on play session events
CN1831772A (en) * 2005-03-10 2006-09-13 祥硕科技股份有限公司 Computer system and method for broadcasting acoustic when opening a terminal
US20060274905A1 (en) * 2005-06-03 2006-12-07 Apple Computer, Inc. Techniques for presenting sound effects on a portable media player
US20080075296A1 (en) * 2006-09-11 2008-03-27 Apple Computer, Inc. Intelligent audio mixing among media playback and at least one other non-playback application
US20080242411A1 (en) * 2007-04-02 2008-10-02 Aristocrat Technologies Australia Pty, Ltd Gaming machine with sound effects
CN101196924A (en) * 2007-12-28 2008-06-11 腾讯科技(深圳)有限公司 Audio document calling method and system
CN102143259A (en) * 2010-01-28 2011-08-03 骅讯电子企业股份有限公司 Method for providing background sounds for communication device and applied communication system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110633067A (en) * 2016-06-16 2019-12-31 Oppo广东移动通信有限公司 Sound effect parameter adjusting method and mobile terminal
CN110633067B (en) * 2016-06-16 2023-02-28 Oppo广东移动通信有限公司 Sound effect parameter adjusting method and mobile terminal

Also Published As

Publication number Publication date
CN104375799A (en) 2015-02-25


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14835944

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205N DATED 180716)

122 Ep: pct application non-entry in european phase

Ref document number: 14835944

Country of ref document: EP

Kind code of ref document: A1