AudioTrack

public class AudioTrack extends Object implements AudioRouting, VolumeAutomation

| java.lang.Object | |
|---|---|
| ↳ | android.media.AudioTrack |
The AudioTrack class manages and plays a single audio resource for Java applications.
 It allows streaming of PCM audio buffers to the audio sink for playback. This is
 achieved by "pushing" the data to the AudioTrack object using one of the
  write(byte[], int, int), write(short[], int, int),
  and write(float[], int, int, int) methods.
 
An AudioTrack instance can operate under two modes: static or streaming.
 In Streaming mode, the application writes a continuous stream of data to the AudioTrack, using
 one of the write() methods. These are blocking and return when the data has been
 transferred from the Java layer to the native layer and queued for playback. The streaming
 mode is most useful when playing blocks of audio data that for instance are:
 
- too big to fit in memory because of the duration of the sound to play,
- too big to fit in memory because of the characteristics of the audio data (high sampling rate, bits per sample ...)
- received or generated while previously queued audio is playing.
Upon creation, an AudioTrack object initializes its associated audio buffer.
 The size of this buffer, specified during the construction, determines how long an AudioTrack
 can play before running out of data.
 For an AudioTrack using the static mode, this size is the maximum size of the sound that can
 be played from it.
 For the streaming mode, data will be written to the audio sink in chunks of
 sizes less than or equal to the total buffer size.
 AudioTrack is not final and thus permits subclasses, but such use is not recommended.
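For illustration, a minimal streaming-mode sketch that builds a track with AudioTrack.Builder and pushes one second of a generated tone through blocking write() calls; the sample rate, channel mask, buffer sizing and tone parameters are arbitrary choices, not requirements:

```java
import android.media.AudioAttributes;
import android.media.AudioFormat;
import android.media.AudioTrack;

// Minimal streaming-mode sketch: one second of a 440 Hz tone, 48 kHz mono 16-bit PCM.
final int sampleRate = 48000;
final int minBuf = AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);

AudioTrack track = new AudioTrack.Builder()
        .setAudioAttributes(new AudioAttributes.Builder()
                .setUsage(AudioAttributes.USAGE_MEDIA)
                .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
                .build())
        .setAudioFormat(new AudioFormat.Builder()
                .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                .setSampleRate(sampleRate)
                .setChannelMask(AudioFormat.CHANNEL_OUT_MONO)
                .build())
        .setBufferSizeInBytes(2 * minBuf)            // headroom against underruns
        .setTransferMode(AudioTrack.MODE_STREAM)
        .build();

track.play();
short[] chunk = new short[minBuf / 2];               // shorts per write (mono: 1 short = 1 frame)
int framesWritten = 0;
while (framesWritten < sampleRate) {                 // one second of audio
    for (int i = 0; i < chunk.length; i++) {
        double t = (framesWritten + i) / (double) sampleRate;
        chunk[i] = (short) (Math.sin(2 * Math.PI * 440 * t) * Short.MAX_VALUE * 0.3);
    }
    framesWritten += track.write(chunk, 0, chunk.length);   // blocks until queued natively
}
track.stop();
track.release();
```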
Summary
| Nested classes | |
|---|---|
| class | AudioTrack.Builder: Builder class for AudioTrack objects. |
| class | AudioTrack.MetricsConstants |
| interface | AudioTrack.OnCodecFormatChangedListener: Interface definition for a listener for codec format changes. |
| interface | AudioTrack.OnPlaybackPositionUpdateListener: Interface definition for a callback to be invoked when the playback head position of an AudioTrack has reached a notification marker or has increased by a certain period. |
| interface | AudioTrack.OnRoutingChangedListener: This interface was deprecated in API level 24. Users should switch to the general purpose AudioRouting.OnRoutingChangedListener class instead. |
| class | AudioTrack.StreamEventCallback: Abstract class to receive event notifications about the stream playback in offloaded mode. |
| Constants | |
|---|---|
| int | DUAL_MONO_MODE_LL: This mode indicates that a stereo stream should be presented with the left audio channel replicated into the right audio channel. |
| int | DUAL_MONO_MODE_LR: This mode indicates that a stereo stream should be presented with the left and right audio channels blended together and delivered to both channels. |
| int | DUAL_MONO_MODE_OFF: This mode disables any Dual Mono presentation effect. |
| int | DUAL_MONO_MODE_RR: This mode indicates that a stereo stream should be presented with the right audio channel replicated into the left audio channel. |
| int | ENCAPSULATION_METADATA_TYPE_DVB_AD_DESCRIPTOR: Encapsulation metadata type for DVB AD descriptor. |
| int | ENCAPSULATION_METADATA_TYPE_FRAMEWORK_TUNER: Encapsulation metadata type for framework tuner information. |
| int | ENCAPSULATION_METADATA_TYPE_SUPPLEMENTARY_AUDIO_PLACEMENT: Encapsulation metadata type for placement of supplementary audio. |
| int | ENCAPSULATION_MODE_ELEMENTARY_STREAM: This mode indicates metadata encapsulation with an elementary stream payload. |
| int | ENCAPSULATION_MODE_NONE: This mode indicates no metadata encapsulation, which is the default mode for sending audio data through AudioTrack. |
| int | ERROR: Denotes a generic operation failure. |
| int | ERROR_BAD_VALUE: Denotes a failure due to the use of an invalid value. |
| int | ERROR_DEAD_OBJECT: An error code indicating that the object reporting it is no longer valid and needs to be recreated. |
| int | ERROR_INVALID_OPERATION: Denotes a failure due to the improper use of a method. |
| int | MODE_STATIC: Creation mode where audio data is transferred from Java to the native layer only once before the audio starts playing. |
| int | MODE_STREAM: Creation mode where audio data is streamed from Java to the native layer as the audio is playing. |
| int | PERFORMANCE_MODE_LOW_LATENCY: Low latency performance mode for an AudioTrack. |
| int | PERFORMANCE_MODE_NONE: Default performance mode for an AudioTrack. |
| int | PERFORMANCE_MODE_POWER_SAVING: Power saving performance mode for an AudioTrack. |
| int | PLAYSTATE_PAUSED: Indicates AudioTrack state is paused. |
| int | PLAYSTATE_PLAYING: Indicates AudioTrack state is playing. |
| int | PLAYSTATE_STOPPED: Indicates AudioTrack state is stopped. |
| int | STATE_INITIALIZED: State of an AudioTrack that is ready to be used. |
| int | STATE_NO_STATIC_DATA: State of a successfully initialized AudioTrack that uses static data, but that hasn't received that data yet. |
| int | STATE_UNINITIALIZED: State of an AudioTrack that was not successfully initialized upon creation. |
| int | SUCCESS: Denotes a successful operation. |
| int | SUPPLEMENTARY_AUDIO_PLACEMENT_LEFT: Supplementary audio placement left. |
| int | SUPPLEMENTARY_AUDIO_PLACEMENT_NORMAL: Supplementary audio placement normal. |
| int | SUPPLEMENTARY_AUDIO_PLACEMENT_RIGHT: Supplementary audio placement right. |
| int | WRITE_BLOCKING: The write mode indicating the write operation will block until all data has been written; to be used as the actual value of the writeMode parameter in write(byte[], int, int, int), write(short[], int, int, int), write(float[], int, int, int), write(ByteBuffer, int, int), and write(ByteBuffer, int, int, long). |
| int | WRITE_NON_BLOCKING: The write mode indicating the write operation will return immediately after queuing as much audio data for playback as possible without blocking; to be used as the actual value of the writeMode parameter in write(byte[], int, int, int), write(short[], int, int, int), write(float[], int, int, int), write(ByteBuffer, int, int), and write(ByteBuffer, int, int, long). |
| Public constructors | |
|---|---|
| AudioTrack(AudioAttributes attributes, AudioFormat format, int bufferSizeInBytes, int mode, int sessionId): Class constructor with AudioAttributes and AudioFormat. | |
| AudioTrack(int streamType, int sampleRateInHz, int channelConfig, int audioFormat, int bufferSizeInBytes, int mode): This constructor is deprecated. Use AudioTrack.Builder or AudioTrack(AudioAttributes, AudioFormat, int, int, int) to specify the AudioAttributes instead of the stream type, which is only for volume control. | |
| AudioTrack(int streamType, int sampleRateInHz, int channelConfig, int audioFormat, int bufferSizeInBytes, int mode, int sessionId): This constructor is deprecated. Use AudioTrack.Builder or AudioTrack(AudioAttributes, AudioFormat, int, int, int) to specify the AudioAttributes instead of the stream type, which is only for volume control. | |
| Public methods | |
|---|---|
| void | addOnCodecFormatChangedListener(Executor executor, AudioTrack.OnCodecFormatChangedListener listener): Adds an OnCodecFormatChangedListener to receive notifications of codec format change events on this AudioTrack. |
| void | addOnRoutingChangedListener(AudioTrack.OnRoutingChangedListener listener, Handler handler): This method was deprecated in API level 24. Users should switch to the general purpose AudioRouting.OnRoutingChangedListener class instead. |
| void | addOnRoutingChangedListener(AudioRouting.OnRoutingChangedListener listener, Handler handler): Adds an AudioRouting.OnRoutingChangedListener to receive notifications of routing changes on this AudioTrack. |
| int | attachAuxEffect(int effectId): Attaches an auxiliary effect to the audio track. |
| VolumeShaper | createVolumeShaper(VolumeShaper.Configuration configuration): Returns a VolumeShaper object that can be used to modify the volume envelope of the player or track. |
| void | flush(): Flushes the audio data currently queued for playback. |
| AudioAttributes | getAudioAttributes(): Returns the AudioAttributes used in configuration. |
| float | getAudioDescriptionMixLeveldB(): Returns the Audio Description mix level in dB. |
| int | getAudioFormat(): Returns the configured audio data encoding. |
| int | getAudioSessionId(): Returns the audio session ID. |
| int | getBufferCapacityInFrames(): Returns the maximum size of the AudioTrack buffer in frames. |
| int | getBufferSizeInFrames(): Returns the effective size of the AudioTrack buffer that the application writes to. |
| int | getChannelConfiguration(): Returns the configured channel position mask. |
| int | getChannelCount(): Returns the configured number of channels. |
| int | getDualMonoMode(): Returns the Dual Mono mode presentation setting. |
| AudioFormat | getFormat(): Returns the configured AudioTrack format. |
| LogSessionId | getLogSessionId(): Returns the LogSessionId. |
| static float | getMaxVolume(): Returns the maximum gain value, which is greater than or equal to 1.0. |
| PersistableBundle | getMetrics(): Return Metrics data about the current AudioTrack instance. |
| static int | getMinBufferSize(int sampleRateInHz, int channelConfig, int audioFormat): Returns the estimated minimum buffer size required for an AudioTrack object to be created in the MODE_STREAM mode. |
| static float | getMinVolume(): Returns the minimum gain value, which is the constant 0.0. |
| static int | getNativeOutputSampleRate(int streamType): Returns the output sample rate in Hz for the specified stream type. |
| int | getNotificationMarkerPosition(): Returns marker position expressed in frames. |
| int | getOffloadDelay(): Return the decoder delay of an offloaded track, expressed in frames, previously set with setOffloadDelayPadding(int, int). |
| int | getOffloadPadding(): Return the decoder padding of an offloaded track, expressed in frames, previously set with setOffloadDelayPadding(int, int). |
| int | getPerformanceMode(): Returns the current performance mode of the AudioTrack. |
| int | getPlayState(): Returns the playback state of the AudioTrack instance. |
| int | getPlaybackHeadPosition(): Returns the playback head position expressed in frames. |
| PlaybackParams | getPlaybackParams(): Returns the current playback parameters. |
| int | getPlaybackRate(): Returns the current playback sample rate in Hz. |
| int | getPositionNotificationPeriod(): Returns the notification update period expressed in frames. |
| AudioDeviceInfo | getPreferredDevice(): Returns the selected output specified by setPreferredDevice(AudioDeviceInfo). |
| AudioDeviceInfo | getRoutedDevice(): Returns an AudioDeviceInfo identifying the current routing of this AudioTrack. |
| List<AudioDeviceInfo> | getRoutedDevices(): Returns a List of AudioDeviceInfo identifying the current routing of this AudioTrack. |
| int | getSampleRate(): Returns the configured audio source sample rate in Hz. |
| int | getStartThresholdInFrames(): Returns the streaming start threshold of the AudioTrack. |
| int | getState(): Returns the state of the AudioTrack instance. |
| int | getStreamType(): Returns the volume stream type of this AudioTrack. |
| boolean | getTimestamp(AudioTimestamp timestamp): Poll for a timestamp on demand. |
| int | getUnderrunCount(): Returns the number of underrun occurrences in the application-level write buffer since the AudioTrack was created. |
| static boolean | isDirectPlaybackSupported(AudioFormat format, AudioAttributes attributes): This method was deprecated in API level 33. Use AudioManager.getDirectPlaybackSupport(AudioFormat, AudioAttributes) instead. |
| boolean | isOffloadedPlayback(): Returns whether the track was built with Builder.setOffloadedPlayback(boolean) set to true. |
| void | pause(): Pauses the playback of the audio data. |
| void | play(): Starts playing an AudioTrack. |
| void | registerStreamEventCallback(Executor executor, AudioTrack.StreamEventCallback eventCallback): Registers a callback for the notification of stream events. |
| void | release(): Releases the native AudioTrack resources. |
| int | reloadStaticData(): Sets the playback head position within the static buffer to zero, that is it rewinds to start of static buffer. |
| void | removeOnCodecFormatChangedListener(AudioTrack.OnCodecFormatChangedListener listener): Removes a previously added OnCodecFormatChangedListener. |
| void | removeOnRoutingChangedListener(AudioTrack.OnRoutingChangedListener listener): This method was deprecated in API level 24. Users should switch to the general purpose AudioRouting.OnRoutingChangedListener class instead. |
| void | removeOnRoutingChangedListener(AudioRouting.OnRoutingChangedListener listener): Removes a previously added AudioRouting.OnRoutingChangedListener. |
| boolean | setAudioDescriptionMixLeveldB(float level): Sets the Audio Description mix level in dB. |
| int | setAuxEffectSendLevel(float level): Sets the send level of the audio track to the attached auxiliary effect (see attachAuxEffect(int)). |
| int | setBufferSizeInFrames(int bufferSizeInFrames): Limits the effective size of the AudioTrack buffer that the application writes to. |
| boolean | setDualMonoMode(int dualMonoMode): Sets the Dual Mono mode presentation on the output device. |
| void | setLogSessionId(LogSessionId logSessionId): Sets a LogSessionId instance for this AudioTrack. |
| int | setLoopPoints(int startInFrames, int endInFrames, int loopCount): Sets the loop points and the loop count. |
| int | setNotificationMarkerPosition(int markerInFrames): Sets the position of the notification marker. |
| void | setOffloadDelayPadding(int delayInFrames, int paddingInFrames): Configures the delay and padding values for the current compressed stream playing in offload mode. |
| void | setOffloadEndOfStream(): Declares that the last write() operation on this track provided the last buffer of this stream. |
| int | setPlaybackHeadPosition(int positionInFrames): Sets the playback head position within the static buffer. |
| void | setPlaybackParams(PlaybackParams params): Sets the playback parameters. |
| void | setPlaybackPositionUpdateListener(AudioTrack.OnPlaybackPositionUpdateListener listener, Handler handler): Sets the listener the AudioTrack notifies when a previously set marker is reached or for each periodic playback head position update. |
| void | setPlaybackPositionUpdateListener(AudioTrack.OnPlaybackPositionUpdateListener listener): Sets the listener the AudioTrack notifies when a previously set marker is reached or for each periodic playback head position update. |
| int | setPlaybackRate(int sampleRateInHz): Sets the playback sample rate for this track. |
| int | setPositionNotificationPeriod(int periodInFrames): Sets the period for the periodic notification event. |
| boolean | setPreferredDevice(AudioDeviceInfo deviceInfo): Specifies an audio device (via an AudioDeviceInfo object) to route the output to. |
| int | setPresentation(AudioPresentation presentation): Sets the audio presentation. |
| int | setStartThresholdInFrames(int startThresholdInFrames): Sets the streaming start threshold for an AudioTrack. |
| int | setStereoVolume(float leftGain, float rightGain): This method was deprecated in API level 21. Applications should use setVolume(float) instead. |
| int | setVolume(float gain): Sets the specified output gain value on all channels of this track. |
| void | stop(): Stops playing the audio data. |
| void | unregisterStreamEventCallback(AudioTrack.StreamEventCallback eventCallback): Unregisters the callback for notification of stream events, previously registered with registerStreamEventCallback(Executor, AudioTrack.StreamEventCallback). |
| int | write(float[] audioData, int offsetInFloats, int sizeInFloats, int writeMode): Writes the audio data to the audio sink for playback (streaming mode), or copies audio data for later playback (static buffer mode). |
| int | write(short[] audioData, int offsetInShorts, int sizeInShorts): Writes the audio data to the audio sink for playback (streaming mode), or copies audio data for later playback (static buffer mode). |
| int | write(byte[] audioData, int offsetInBytes, int sizeInBytes): Writes the audio data to the audio sink for playback (streaming mode), or copies audio data for later playback (static buffer mode). |
| int | write(ByteBuffer audioData, int sizeInBytes, int writeMode): Writes the audio data to the audio sink for playback (streaming mode), or copies audio data for later playback (static buffer mode). |
| int | write(ByteBuffer audioData, int sizeInBytes, int writeMode, long timestamp): Writes the audio data to the audio sink for playback in streaming mode on a HW_AV_SYNC track. |
| int | write(short[] audioData, int offsetInShorts, int sizeInShorts, int writeMode): Writes the audio data to the audio sink for playback (streaming mode), or copies audio data for later playback (static buffer mode). |
| int | write(byte[] audioData, int offsetInBytes, int sizeInBytes, int writeMode): Writes the audio data to the audio sink for playback (streaming mode), or copies audio data for later playback (static buffer mode). |
| Protected methods | |
|---|---|
| void | finalize(): Called by the garbage collector on an object when garbage collection determines that there are no more references to the object. |
| int | getNativeFrameCount(): This method was deprecated in API level 19. Use the identical public method getBufferSizeInFrames() instead. |
| void | setState(int state): This method was deprecated in API level 19. Only accessible by subclasses, which are not recommended for AudioTrack. |
| Inherited methods | |
|---|---|
Constants
DUAL_MONO_MODE_LL
public static final int DUAL_MONO_MODE_LL
This mode indicates that a stereo stream should be presented with the left audio channel replicated into the right audio channel. Behavior for non-stereo streams is implementation defined. A suggested guideline is that all channels with left-right stereo symmetry will have the left channel position replicated into the right channel position. The center channels (with no left/right symmetry) or unbalanced channels are left alone. The Dual Mono effect occurs before volume scaling.
Constant Value: 2 (0x00000002)
DUAL_MONO_MODE_LR
public static final int DUAL_MONO_MODE_LR
This mode indicates that a stereo stream should be presented with the left and right audio channels blended together and delivered to both channels. Behavior for non-stereo streams is implementation defined. A suggested guideline is that the left-right stereo symmetric channels are pairwise blended; the other channels such as center are left alone. The Dual Mono effect occurs before volume scaling.
Constant Value: 1 (0x00000001)
DUAL_MONO_MODE_OFF
public static final int DUAL_MONO_MODE_OFF
This mode disables any Dual Mono presentation effect.
Constant Value: 0 (0x00000000)
DUAL_MONO_MODE_RR
public static final int DUAL_MONO_MODE_RR
This mode indicates that a stereo stream should be presented with the right audio channel replicated into the left audio channel. Behavior for non-stereo streams is implementation defined. A suggested guideline is that all channels with left-right stereo symmetry will have the right channel position replicated into the left channel position. The center channels (with no left/right symmetry) or unbalanced channels are left alone. The Dual Mono effect occurs before volume scaling.
Constant Value: 3 (0x00000003)
ENCAPSULATION_METADATA_TYPE_DVB_AD_DESCRIPTOR
public static final int ENCAPSULATION_METADATA_TYPE_DVB_AD_DESCRIPTOR
Encapsulation metadata type for DVB AD descriptor. This metadata is formatted per ETSI TS 101 154 Table E.1: AD_descriptor.
Constant Value: 2 (0x00000002)
ENCAPSULATION_METADATA_TYPE_FRAMEWORK_TUNER
public static final int ENCAPSULATION_METADATA_TYPE_FRAMEWORK_TUNER
Encapsulation metadata type for framework tuner information. Refer to the Android Media TV Tuner API for details.
Constant Value: 1 (0x00000001)
ENCAPSULATION_METADATA_TYPE_SUPPLEMENTARY_AUDIO_PLACEMENT
public static final int ENCAPSULATION_METADATA_TYPE_SUPPLEMENTARY_AUDIO_PLACEMENT
Encapsulation metadata type for placement of supplementary audio.
 A 32 bit integer constant, one of SUPPLEMENTARY_AUDIO_PLACEMENT_NORMAL, SUPPLEMENTARY_AUDIO_PLACEMENT_LEFT, SUPPLEMENTARY_AUDIO_PLACEMENT_RIGHT.
Constant Value: 3 (0x00000003)
ENCAPSULATION_MODE_ELEMENTARY_STREAM
public static final int ENCAPSULATION_MODE_ELEMENTARY_STREAM
This mode indicates metadata encapsulation with an elementary stream payload. Both compressed and PCM formats are allowed.
Constant Value: 1 (0x00000001)
ENCAPSULATION_MODE_NONE
public static final int ENCAPSULATION_MODE_NONE
This mode indicates no metadata encapsulation,
 which is the default mode for sending audio data
 through AudioTrack.
Constant Value: 0 (0x00000000)
ERROR
public static final int ERROR
Denotes a generic operation failure.
Constant Value: -1 (0xffffffff)
ERROR_BAD_VALUE
public static final int ERROR_BAD_VALUE
Denotes a failure due to the use of an invalid value.
Constant Value: -2 (0xfffffffe)
ERROR_DEAD_OBJECT
public static final int ERROR_DEAD_OBJECT
An error code indicating that the object reporting it is no longer valid and needs to be recreated.
Constant Value: -6 (0xfffffffa)
ERROR_INVALID_OPERATION
public static final int ERROR_INVALID_OPERATION
Denotes a failure due to the improper use of a method.
Constant Value: -3 (0xfffffffd)
MODE_STATIC
public static final int MODE_STATIC
Creation mode where audio data is transferred from Java to the native layer only once before the audio starts playing.
Constant Value: 0 (0x00000000)
MODE_STREAM
public static final int MODE_STREAM
Creation mode where audio data is streamed from Java to the native layer as the audio is playing.
Constant Value: 1 (0x00000001)
PERFORMANCE_MODE_LOW_LATENCY
public static final int PERFORMANCE_MODE_LOW_LATENCY
Low latency performance mode for an AudioTrack.
 If the device supports it, this mode
 enables a lower latency path through to the audio output sink.
 Effects may no longer work with such an AudioTrack and
 the sample rate must match that of the output sink.
 
 Applications should be aware that low latency requires careful
 buffer management, with smaller chunks of audio data written by each
 write() call.
 
 If this flag is used without specifying a bufferSizeInBytes then the
 AudioTrack's actual buffer size may be too small.
 It is recommended that a fairly
 large buffer should be specified when the AudioTrack is created.
 Then the actual size can be reduced by calling
 setBufferSizeInFrames(int). The buffer size can be optimized
 by lowering it after each write() call until the audio glitches,
 which is detected by calling
 getUnderrunCount(). Then the buffer size can be increased
 until there are no glitches.
 This tuning step should be done while playing silence.
 This technique provides a compromise between latency and glitch rate.
Constant Value: 1 (0x00000001)
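A sketch of the tuning loop described above, assuming a `track` variable already built with this performance mode and a generous initial buffer; the frame floor and the silence-writing step are placeholders:

```java
// Shrink the effective buffer while playing silence until underruns appear,
// then back off to the last glitch-free size.
int frames = track.getBufferSizeInFrames();
int lastGoodFrames = frames;
int underrunsBefore = track.getUnderrunCount();
while (frames > 256) {                                   // arbitrary lower bound
    int result = track.setBufferSizeInFrames(frames / 2);
    if (result < 0) break;                               // error code returned
    frames = result;
    // ... keep writing silence for a while, then check for glitches ...
    if (track.getUnderrunCount() > underrunsBefore) {    // glitch detected: too small
        track.setBufferSizeInFrames(lastGoodFrames);     // restore last good size
        break;
    }
    lastGoodFrames = frames;
}
```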
PERFORMANCE_MODE_NONE
public static final int PERFORMANCE_MODE_NONE
Default performance mode for an AudioTrack.
Constant Value: 0 (0x00000000)
PERFORMANCE_MODE_POWER_SAVING
public static final int PERFORMANCE_MODE_POWER_SAVING
Power saving performance mode for an AudioTrack.
 If the device supports it, this
 mode will enable a lower power path to the audio output sink.
 In addition, this lower power path typically will have
 deeper internal buffers and better underrun resistance,
 with a tradeoff of higher latency.
 
 In this mode, applications should attempt to use a larger buffer size
 and deliver larger chunks of audio data per write() call.
 Use getBufferSizeInFrames() to determine
 the actual buffer size of the AudioTrack as it may have increased
 to accommodate a deeper buffer.
Constant Value: 2 (0x00000002)
PLAYSTATE_PAUSED
public static final int PLAYSTATE_PAUSED
indicates AudioTrack state is paused
Constant Value: 2 (0x00000002)
PLAYSTATE_PLAYING
public static final int PLAYSTATE_PLAYING
indicates AudioTrack state is playing
Constant Value: 3 (0x00000003)
PLAYSTATE_STOPPED
public static final int PLAYSTATE_STOPPED
indicates AudioTrack state is stopped
Constant Value: 1 (0x00000001)
STATE_INITIALIZED
public static final int STATE_INITIALIZED
State of an AudioTrack that is ready to be used.
Constant Value: 1 (0x00000001)
STATE_NO_STATIC_DATA
public static final int STATE_NO_STATIC_DATA
State of a successfully initialized AudioTrack that uses static data, but that hasn't received that data yet.
Constant Value: 2 (0x00000002)
STATE_UNINITIALIZED
public static final int STATE_UNINITIALIZED
State of an AudioTrack that was not successfully initialized upon creation.
Constant Value: 0 (0x00000000)
SUCCESS
public static final int SUCCESS
Denotes a successful operation.
Constant Value: 0 (0x00000000)
SUPPLEMENTARY_AUDIO_PLACEMENT_LEFT
public static final int SUPPLEMENTARY_AUDIO_PLACEMENT_LEFT
Supplementary audio placement left.
Constant Value: 1 (0x00000001)
SUPPLEMENTARY_AUDIO_PLACEMENT_NORMAL
public static final int SUPPLEMENTARY_AUDIO_PLACEMENT_NORMAL
Supplementary audio placement normal.
Constant Value: 0 (0x00000000)
SUPPLEMENTARY_AUDIO_PLACEMENT_RIGHT
public static final int SUPPLEMENTARY_AUDIO_PLACEMENT_RIGHT
Supplementary audio placement right.
Constant Value: 2 (0x00000002)
WRITE_BLOCKING
public static final int WRITE_BLOCKING
The write mode indicating the write operation will block until all data has been written,
 to be used as the actual value of the writeMode parameter in
 write(byte[], int, int, int), write(short[], int, int, int),
 write(float[], int, int, int), write(java.nio.ByteBuffer, int, int), and
 write(java.nio.ByteBuffer, int, int, long).
Constant Value: 0 (0x00000000)
WRITE_NON_BLOCKING
public static final int WRITE_NON_BLOCKING
The write mode indicating the write operation will return immediately after
 queuing as much audio data for playback as possible without blocking,
 to be used as the actual value of the writeMode parameter in
write(byte[], int, int, int), write(short[], int, int, int),
 write(float[], int, int, int), write(java.nio.ByteBuffer, int, int), and
 write(java.nio.ByteBuffer, int, int, long).
Constant Value: 1 (0x00000001)
Public constructors
AudioTrack
public AudioTrack (AudioAttributes attributes, AudioFormat format, int bufferSizeInBytes, int mode, int sessionId)
Class constructor with AudioAttributes and AudioFormat.
| Parameters | |
|---|---|
| attributes | AudioAttributes: a non-null AudioAttributes instance. | 
| format | AudioFormat: a non-null AudioFormat instance describing the format of the data that will be played through this AudioTrack. See AudioFormat.Builder for configuring the audio format parameters such as encoding, channel mask and sample rate. | 
| bufferSizeInBytes | int: the total size (in bytes) of the internal buffer where audio data is read from for playback. This should be a nonzero multiple of the frame size in bytes. If the track's creation mode is MODE_STATIC, this is the maximum size of the sound that can be played from it. If the track's creation mode is MODE_STREAM, this should be the desired buffer size for the AudioTrack to satisfy the application's latency requirements. | 
| mode | int: streaming or static buffer. See MODE_STATIC and MODE_STREAM. | 
| sessionId | int: ID of audio session the AudioTrack must be attached to, or AudioManager.AUDIO_SESSION_ID_GENERATE if the session isn't known at construction time. See also AudioManager.generateAudioSessionId() to obtain a session ID before construction. | 
| Throws | |
|---|---|
| IllegalArgumentException | |
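For illustration, a sketch of this constructor with a framework-generated session ID; the 44.1 kHz stereo PCM format and media attributes are arbitrary choices:

```java
import android.media.AudioAttributes;
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

// Explicit AudioAttributes/AudioFormat constructor, streaming mode.
AudioTrack track = new AudioTrack(
        new AudioAttributes.Builder()
                .setUsage(AudioAttributes.USAGE_MEDIA)
                .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
                .build(),
        new AudioFormat.Builder()
                .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                .setSampleRate(44100)
                .setChannelMask(AudioFormat.CHANNEL_OUT_STEREO)
                .build(),
        AudioTrack.getMinBufferSize(44100,
                AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT),
        AudioTrack.MODE_STREAM,
        AudioManager.AUDIO_SESSION_ID_GENERATE);   // let the framework pick a session
```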
AudioTrack
public AudioTrack (int streamType, 
                int sampleRateInHz, 
                int channelConfig, 
                int audioFormat, 
                int bufferSizeInBytes, 
                int mode)
      This constructor is deprecated.
    use Builder or
   AudioTrack(android.media.AudioAttributes, android.media.AudioFormat, int, int, int) to specify the
   AudioAttributes instead of the stream type which is only for volume control.
  
Class constructor.
| Parameters | |
|---|---|
| streamType | int: the type of the audio stream. See AudioManager.STREAM_VOICE_CALL, AudioManager.STREAM_SYSTEM, AudioManager.STREAM_RING, AudioManager.STREAM_MUSIC, AudioManager.STREAM_ALARM, and AudioManager.STREAM_NOTIFICATION. | 
| sampleRateInHz | int: the initial source sample rate expressed in Hz. AudioFormat.SAMPLE_RATE_UNSPECIFIED means to use a route-dependent value which is usually the sample rate of the sink. getSampleRate() can be used to retrieve the actual sample rate chosen. | 
| channelConfig | int: describes the configuration of the audio channels. See AudioFormat.CHANNEL_OUT_MONO and AudioFormat.CHANNEL_OUT_STEREO. | 
| audioFormat | int: the format in which the audio data is represented. See AudioFormat.ENCODING_PCM_16BIT, AudioFormat.ENCODING_PCM_8BIT, and AudioFormat.ENCODING_PCM_FLOAT. | 
| bufferSizeInBytes | int: the total size (in bytes) of the internal buffer where audio data is read from for playback. This should be a nonzero multiple of the frame size in bytes. If the track's creation mode is MODE_STATIC, this is the maximum size of the sound that can be played from it. If the track's creation mode is MODE_STREAM, this should be the desired buffer size for the AudioTrack to satisfy the application's latency requirements. | 
| mode | int: streaming or static buffer. See MODE_STATIC and MODE_STREAM. | 
| Throws | |
|---|---|
| IllegalArgumentException | |
AudioTrack
public AudioTrack (int streamType, 
                int sampleRateInHz, 
                int channelConfig, 
                int audioFormat, 
                int bufferSizeInBytes, 
                int mode, 
                int sessionId)
      This constructor is deprecated.
    use Builder or
   AudioTrack(android.media.AudioAttributes, android.media.AudioFormat, int, int, int) to specify the
   AudioAttributes instead of the stream type which is only for volume control.
  
Class constructor with audio session. Use this constructor when the AudioTrack must be
 attached to a particular audio session. The primary use of the audio session ID is to
 associate audio effects to a particular instance of AudioTrack: if an audio session ID
 is provided when creating an AudioEffect, this effect will be applied only to audio tracks
 and media players in the same session and not to the output mix.
 When an AudioTrack is created without specifying a session, it will create its own session
 which can be retrieved by calling the getAudioSessionId() method.
 If a non-zero session ID is provided, this AudioTrack will share effects attached to this
 session
 with all other media players or audio tracks in the same session, otherwise a new session
 will be created for this track if none is supplied.
| Parameters | |
|---|---|
| streamType | int: the type of the audio stream. See AudioManager.STREAM_VOICE_CALL, AudioManager.STREAM_SYSTEM, AudioManager.STREAM_RING, AudioManager.STREAM_MUSIC, AudioManager.STREAM_ALARM, and AudioManager.STREAM_NOTIFICATION. | 
| sampleRateInHz | int: the initial source sample rate expressed in Hz. AudioFormat.SAMPLE_RATE_UNSPECIFIED means to use a route-dependent value which is usually the sample rate of the sink. | 
| channelConfig | int: describes the configuration of the audio channels. See AudioFormat.CHANNEL_OUT_MONO and AudioFormat.CHANNEL_OUT_STEREO. | 
| audioFormat | int: the format in which the audio data is represented. See AudioFormat.ENCODING_PCM_16BIT, AudioFormat.ENCODING_PCM_8BIT, and AudioFormat.ENCODING_PCM_FLOAT. | 
| bufferSizeInBytes | int: the total size (in bytes) of the internal buffer where audio data is read from for playback. This should be a nonzero multiple of the frame size in bytes. If the track's creation mode is MODE_STATIC, this is the maximum size of the sound that can be played from it. If the track's creation mode is MODE_STREAM, this should be the desired buffer size for the AudioTrack to satisfy the application's latency requirements. | 
| mode | int: streaming or static buffer. See MODE_STATIC and MODE_STREAM. | 
| sessionId | int: ID of the audio session the AudioTrack must be attached to. | 
| Throws | |
|---|---|
| IllegalArgumentException | |
Public methods
addOnCodecFormatChangedListener
public void addOnCodecFormatChangedListener (Executor executor, AudioTrack.OnCodecFormatChangedListener listener)
Adds an OnCodecFormatChangedListener to receive notifications of
 codec format change events on this AudioTrack.
| Parameters | |
|---|---|
| executor | Executor: Specifies the Executor object to control execution. This value cannot be null. Callback and listener events are dispatched through this Executor, providing an easy way to control which thread is used. To dispatch events through the main thread of your application, you can use Context.getMainExecutor(). Otherwise, provide an Executor that dispatches to an appropriate thread. | 
| listener | AudioTrack.OnCodecFormatChangedListener: The OnCodecFormatChangedListener interface to receive notifications of codec events. This value cannot be null. | 
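For illustration, a sketch that dispatches the callback on the main thread; `track`, `context` and the log tag `TAG` are assumed to already exist:

```java
// Receive codec format change notifications on the application's main thread.
track.addOnCodecFormatChangedListener(
        context.getMainExecutor(),
        (audioTrack, info) -> Log.d(TAG, "Codec format changed: " + info));
```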
addOnRoutingChangedListener
public void addOnRoutingChangedListener (AudioTrack.OnRoutingChangedListener listener, Handler handler)
      This method was deprecated
      in API level 24.
    users should switch to the general purpose
             AudioRouting.OnRoutingChangedListener class instead.
  
Adds an OnRoutingChangedListener to receive notifications of routing changes
 on this AudioTrack.
| Parameters | |
|---|---|
| listener | AudioTrack.OnRoutingChangedListener: The OnRoutingChangedListener interface to receive notifications of rerouting events. | 
| handler | Handler: Specifies the Handler object for the thread on which to execute the callback. If null, the Handler associated with the main Looper will be used. | 
addOnRoutingChangedListener
public void addOnRoutingChangedListener (AudioRouting.OnRoutingChangedListener listener, Handler handler)
Adds an AudioRouting.OnRoutingChangedListener to receive notifications of routing
 changes on this AudioTrack.
| Parameters | |
|---|---|
| listener | AudioRouting.OnRoutingChangedListener: The AudioRouting.OnRoutingChangedListener interface to receive notifications of rerouting events. | 
| handler | Handler: Specifies the Handler object for the thread on which to execute the callback. If null, the Handler associated with the main Looper will be used. | 
attachAuxEffect
public int attachAuxEffect (int effectId)
Attaches an auxiliary effect to the audio track. A typical auxiliary effect is a reverberation effect which can be applied on any sound source that directs a certain amount of its energy to this effect. This amount is defined by setAuxEffectSendLevel().
After creating an auxiliary effect (e.g.
 EnvironmentalReverb), retrieve its ID with
 AudioEffect.getId() and use it when calling
 this method to attach the audio track to the effect.
 
To detach the effect from the audio track, call this method with a null effect id.
| Parameters | |
|---|---|
| effectId | int: system wide unique id of the effect to attach | 
| Returns | |
|---|---|
| int | error code or success, see SUCCESS, ERROR_INVALID_OPERATION, ERROR_BAD_VALUE | 
See also: setAuxEffectSendLevel(float)
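For illustration, a sketch using a PresetReverb as the auxiliary effect (an EnvironmentalReverb would be attached the same way); error handling is omitted:

```java
import android.media.audiofx.PresetReverb;

// Create the reverb on audio session 0 (the output mix) so it can act as an
// auxiliary effect, then attach the track to it and set the send level.
PresetReverb reverb = new PresetReverb(0 /* priority */, 0 /* output mix session */);
reverb.setPreset(PresetReverb.PRESET_LARGEHALL);
reverb.setEnabled(true);

if (track.attachAuxEffect(reverb.getId()) == AudioTrack.SUCCESS) {
    track.setAuxEffectSendLevel(1.0f);   // full send level
}
```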
createVolumeShaper
public VolumeShaper createVolumeShaper (VolumeShaper.Configuration configuration)
Returns a VolumeShaper object that can be used to modify the volume envelope
 of the player or track.
| Parameters | |
|---|---|
| configuration | VolumeShaper.Configuration: This value cannot be null. | 
| Returns | |
|---|---|
| VolumeShaper | This value cannot be null. | 
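For illustration, a sketch that applies a one-second linear fade-in to an existing `track`:

```java
import android.media.VolumeShaper;

// Describe a linear ramp from silence to full volume over one second.
VolumeShaper.Configuration fadeIn = new VolumeShaper.Configuration.Builder()
        .setDuration(1000)                                  // milliseconds
        .setCurve(new float[] {0.f, 1.f},                   // normalized times
                  new float[] {0.f, 1.f})                   // volumes
        .setInterpolatorType(
                VolumeShaper.Configuration.INTERPOLATOR_TYPE_LINEAR)
        .build();

VolumeShaper shaper = track.createVolumeShaper(fadeIn);
track.play();
shaper.apply(VolumeShaper.Operation.PLAY);                  // start the fade
```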
flush
public void flush ()
Flushes the audio data currently queued for playback. Any data that has
 been written but not yet presented will be discarded.  No-op if not stopped or paused,
 or if the track's creation mode is not MODE_STREAM.
 
 Note that although data written but not yet presented is discarded, there is no
 guarantee that all of the buffer space formerly used by that data
 is available for a subsequent write.
 For example, a call to write(byte[], int, int) with sizeInBytes
 less than or equal to the total buffer size
 may return a short actual transfer count.
getAudioAttributes
public AudioAttributes getAudioAttributes ()
Returns the AudioAttributes used in configuration.
 If a streamType is used instead of an AudioAttributes
 to configure the AudioTrack
 (the use of streamType for configuration is deprecated),
 then the AudioAttributes
 equivalent to the streamType is returned.
| Returns | |
|---|---|
| AudioAttributes | The AudioAttributes used to configure the AudioTrack. This value cannot be null. | 
| Throws | |
|---|---|
| IllegalStateException | If the track is not initialized. | 
getAudioDescriptionMixLeveldB
public float getAudioDescriptionMixLeveldB ()
Returns the Audio Description mix level in dB.
 If Audio Description mixing is unavailable from the hardware device,
 a value of Float.NEGATIVE_INFINITY is returned.
| Returns | |
|---|---|
| float | the current Audio Description Mix Level in dB. A value of Float.NEGATIVE_INFINITY means that the audio description is not mixed or the hardware is not available. This should reflect the true internal device mix level; hence the application might receive any floating value except Float.NaN. | 
getAudioFormat
public int getAudioFormat ()
Returns the configured audio data encoding. See AudioFormat.ENCODING_PCM_8BIT,
 AudioFormat.ENCODING_PCM_16BIT, and AudioFormat.ENCODING_PCM_FLOAT.
| Returns | |
|---|---|
| int | |
getAudioSessionId
public int getAudioSessionId ()
Returns the audio session ID.
| Returns | |
|---|---|
| int | the ID of the audio session this AudioTrack belongs to. | 
getBufferCapacityInFrames
public int getBufferCapacityInFrames ()
Returns the maximum size of the AudioTrack buffer in frames.
  
 If the track's creation mode is MODE_STATIC,
  it is equal to the specified bufferSizeInBytes on construction, converted to frame units.
  A static track's frame count will not change.
  
 If the track's creation mode is MODE_STREAM,
  it is greater than or equal to the specified bufferSizeInBytes converted to frame units.
  For streaming tracks, this value may be rounded up to a larger value if needed by
  the target output sink, and
  if the track is subsequently routed to a different output sink, the
  frame count may enlarge to accommodate.
  
 If the AudioTrack encoding indicates compressed data,
  e.g. AudioFormat.ENCODING_AC3, then the frame count returned is
  the size of the AudioTrack buffer in bytes.
  
 See also AudioManager.getProperty(String) for key
  AudioManager.PROPERTY_OUTPUT_FRAMES_PER_BUFFER.
| Returns | |
|---|---|
| int | maximum size in frames of the AudioTrack buffer. Value is 0 or greater | 
| Throws | |
|---|---|
| IllegalStateException | if track is not initialized. | 
getBufferSizeInFrames
public int getBufferSizeInFrames ()
Returns the effective size of the AudioTrack buffer
 that the application writes to.
 
 This will be less than or equal to the result of
 getBufferCapacityInFrames().
 It will be equal if setBufferSizeInFrames(int) has never been called.
 
If the track is subsequently routed to a different output sink, the buffer size and capacity may enlarge to accommodate.
 If the AudioTrack encoding indicates compressed data,
 e.g. AudioFormat.ENCODING_AC3, then the frame count returned is
 the size of the AudioTrack buffer in bytes.
 
 See also AudioManager.getProperty(String) for key
 AudioManager.PROPERTY_OUTPUT_FRAMES_PER_BUFFER.
| Returns | |
|---|---|
| int | current size in frames of the AudioTrack buffer. Value is 0 or greater | 
| Throws | |
|---|---|
| IllegalStateException | if track is not initialized. | 
getChannelConfiguration
public int getChannelConfiguration ()
Returns the configured channel position mask.
 For example, refer to AudioFormat.CHANNEL_OUT_MONO,
 AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.CHANNEL_OUT_5POINT1.
 This method may return AudioFormat.CHANNEL_INVALID if
 a channel index mask was used. Consider
 getFormat() instead, to obtain an AudioFormat,
 which contains both the channel position mask and the channel index mask.
| Returns | |
|---|---|
| int | |
getChannelCount
public int getChannelCount ()
Returns the configured number of channels.
| Returns | |
|---|---|
| int | |
getDualMonoMode
public int getDualMonoMode ()
Returns the Dual Mono mode presentation setting.
 If no Dual Mono presentation is available for the output device,
 then DUAL_MONO_MODE_OFF is returned.
| Returns | |
|---|---|
| int | one of DUAL_MONO_MODE_OFF, DUAL_MONO_MODE_LR, DUAL_MONO_MODE_LL, DUAL_MONO_MODE_RR. Value is DUAL_MONO_MODE_OFF, DUAL_MONO_MODE_LR, DUAL_MONO_MODE_LL, or DUAL_MONO_MODE_RR | 
getFormat
public AudioFormat getFormat ()
Returns the configured AudioTrack format.
| Returns | |
|---|---|
| AudioFormat | an AudioFormat containing the AudioTrack parameters at the time of configuration. This value cannot be null. | 
getLogSessionId
public LogSessionId getLogSessionId ()
Returns the LogSessionId.
| Returns | |
|---|---|
| LogSessionId | This value cannot be null. | 
getMaxVolume
public static float getMaxVolume ()
Returns the maximum gain value, which is greater than or equal to 1.0. Gain values greater than the maximum will be clamped to the maximum.
The word "volume" in the API name is historical; this is actually a gain. expressed as a linear multiplier on sample values, where a maximum value of 1.0 corresponds to a gain of 0 dB (sample values left unmodified).
| Returns | |
|---|---|
| float | the maximum value, which is greater than or equal to 1.0. | 
getMetrics
public PersistableBundle getMetrics ()
Return Metrics data about the current AudioTrack instance.
| Returns | |
|---|---|
| PersistableBundle | a PersistableBundle containing the set of attributes and values available for the media being handled by this instance of AudioTrack. The attributes are described in MetricsConstants. Additional vendor-specific fields may also be present in the return value. | 
getMinBufferSize
public static int getMinBufferSize (int sampleRateInHz, 
                int channelConfig, 
                int audioFormat)
Returns the estimated minimum buffer size required for an AudioTrack
 object to be created in the MODE_STREAM mode.
 The size is an estimate because it does not consider either the route or the sink,
 since neither is known yet.  Note that this size doesn't
 guarantee a smooth playback under load, and higher values should be chosen according to
 the expected frequency at which the buffer will be refilled with additional data to play.
 For example, if you intend to dynamically set the source sample rate of an AudioTrack
 to a higher value than the initial source sample rate, be sure to configure the buffer size
 based on the highest planned sample rate.
| Parameters | |
|---|---|
| sampleRateInHz | int: the source sample rate expressed in Hz. AudioFormat.SAMPLE_RATE_UNSPECIFIED is not permitted. | 
| channelConfig | int: describes the configuration of the audio channels. See AudioFormat.CHANNEL_OUT_MONO and AudioFormat.CHANNEL_OUT_STEREO. | 
| audioFormat | int: the format in which the audio data is represented. See AudioFormat.ENCODING_PCM_16BIT, AudioFormat.ENCODING_PCM_8BIT, and AudioFormat.ENCODING_PCM_FLOAT. | 
| Returns | |
|---|---|
| int | ERROR_BAD_VALUE if an invalid parameter was passed, or ERROR if unable to query for output properties, or the minimum buffer size expressed in bytes. | 
getMinVolume
public static float getMinVolume ()
Returns the minimum gain value, which is the constant 0.0. Gain values less than 0.0 will be clamped to 0.0.
The word "volume" in the API name is historical; this is actually a linear gain.
| Returns | |
|---|---|
| float | the minimum value, which is the constant 0.0. | 
getNativeOutputSampleRate
public static int getNativeOutputSampleRate (int streamType)
Returns the output sample rate in Hz for the specified stream type.
| Parameters | |
|---|---|
| streamType | int | 
| Returns | |
|---|---|
| int | |
getNotificationMarkerPosition
public int getNotificationMarkerPosition ()
Returns marker position expressed in frames.
| Returns | |
|---|---|
| int | marker position in wrapping frame units similar to getPlaybackHeadPosition(),
 or zero if marker is disabled. | 
getOffloadDelay
public int getOffloadDelay ()
Return the decoder delay of an offloaded track, expressed in frames, previously set with
 setOffloadDelayPadding(int, int), or 0 if it was never modified.
 
This delay indicates the number of frames to be ignored at the beginning of the stream.
 This value can only be queried on a track successfully initialized with
 AudioTrack.Builder.setOffloadedPlayback(boolean).
| Returns | |
|---|---|
| int | decoder delay expressed in frames. Value is 0 or greater | 
getOffloadPadding
public int getOffloadPadding ()
Return the decoder padding of an offloaded track, expressed in frames, previously set with
 setOffloadDelayPadding(int, int), or 0 if it was never modified.
 
This padding indicates the number of frames to be ignored at the end of the stream.
 This value can only be queried on a track successfully initialized with
 AudioTrack.Builder.setOffloadedPlayback(boolean).
| Returns | |
|---|---|
| int | decoder padding expressed in frames. Value is 0 or greater | 
getPerformanceMode
public int getPerformanceMode ()
Returns the current performance mode of the AudioTrack.
| Returns | |
|---|---|
| int | one of AudioTrack.PERFORMANCE_MODE_NONE, AudioTrack.PERFORMANCE_MODE_LOW_LATENCY, or AudioTrack.PERFORMANCE_MODE_POWER_SAVING. Use AudioTrack.Builder.setPerformanceMode in the AudioTrack.Builder to enable a performance mode. Value is PERFORMANCE_MODE_NONE, PERFORMANCE_MODE_LOW_LATENCY, or PERFORMANCE_MODE_POWER_SAVING | 
| Throws | |
|---|---|
| IllegalStateException | if track is not initialized. | 
getPlayState
public int getPlayState ()
Returns the playback state of the AudioTrack instance.
| Returns | |
|---|---|
| int | |
getPlaybackHeadPosition
public int getPlaybackHeadPosition ()
Returns the playback head position expressed in frames.
 Though the "int" type is signed 32-bits, the value should be reinterpreted as if it is
 unsigned 32-bits.  That is, the next position after 0x7FFFFFFF is (int) 0x80000000.
 This is a continuously advancing counter.  It will wrap (overflow) periodically,
 for example approximately once every 27:03:11 hours:minutes:seconds at 44.1 kHz.
 It is reset to zero by flush(), reloadStaticData(), and stop().
 If the track's creation mode is MODE_STATIC, the return value indicates
 the total number of frames played since reset,
 not the current offset within the buffer.
| Returns | |
|---|---|
| int | |
getPlaybackParams
public PlaybackParams getPlaybackParams ()
Returns the current playback parameters.
 See setPlaybackParams(android.media.PlaybackParams) to set playback parameters
| Returns | |
|---|---|
| PlaybackParams | current PlaybackParams. This value cannot be null. | 
| Throws | |
|---|---|
| IllegalStateException | if track is not initialized. | 
getPlaybackRate
public int getPlaybackRate ()
Returns the current playback sample rate in Hz.
| Returns | |
|---|---|
| int | |
getPositionNotificationPeriod
public int getPositionNotificationPeriod ()
Returns the notification update period expressed in frames. Zero means that no position update notifications are being delivered.
| Returns | |
|---|---|
| int | |
getPreferredDevice
public AudioDeviceInfo getPreferredDevice ()
Returns the selected output specified by setPreferredDevice(AudioDeviceInfo). Note that this
 is not guaranteed to correspond to the actual device being used for playback.
| Returns | |
|---|---|
| AudioDeviceInfo | |
getRoutedDevice
public AudioDeviceInfo getRoutedDevice ()
Returns an AudioDeviceInfo identifying the current routing of this AudioTrack.
 Note: The query is only valid if the AudioTrack is currently playing. If it is not,
 getRoutedDevice() will return null.
 Audio may play on multiple devices simultaneously (e.g. an alarm playing on headphones and
 speaker on a phone), so prefer using getRoutedDevices().
| Returns | |
|---|---|
| AudioDeviceInfo | |
getRoutedDevices
public List<AudioDeviceInfo> getRoutedDevices ()
Returns a List of AudioDeviceInfo identifying the current routing of this
 AudioTrack.
 Note: The query is only valid if the AudioTrack is currently playing. If it is not,
 getRoutedDevices() will return an empty list.
| Returns | |
|---|---|
| List<AudioDeviceInfo> | This value cannot be null. | 
getSampleRate
public int getSampleRate ()
Returns the configured audio source sample rate in Hz.
 The initial source sample rate depends on the constructor parameters,
 but the source sample rate may change if setPlaybackRate(int) is called.
 If the constructor had a specific sample rate, then the initial sink sample rate is that
 value.
 If the constructor had AudioFormat.SAMPLE_RATE_UNSPECIFIED,
 then the initial sink sample rate is a route-dependent default value based on the source [sic].
| Returns | |
|---|---|
| int | |
getStartThresholdInFrames
public int getStartThresholdInFrames ()
Returns the streaming start threshold of the AudioTrack.
 
 The streaming start threshold is the buffer level that the written audio
 data must reach for audio streaming to start after play() is called.
 When an AudioTrack is created, the streaming start threshold
 is the buffer capacity in frames. If the buffer size in frames is reduced
 by setBufferSizeInFrames(int) to a value smaller than the start threshold
 then that value will be used instead for the streaming start threshold.
 
For compressed streams, the size of a frame is considered to be exactly one byte.
| Returns | |
|---|---|
| int | the current start threshold in frames value. This is an integer between 1 and the buffer capacity (see getBufferCapacityInFrames()), and might change if the output sink changes after track creation. Value is 1 or greater | 
| Throws | |
|---|---|
| IllegalStateException | if the track is not initialized or the
         track is not MODE_STREAM. | 
See also: setStartThresholdInFrames(int)
getState
public int getState ()
Returns the state of the AudioTrack instance. This is useful after the AudioTrack instance has been created to check if it was initialized properly. This ensures that the appropriate resources have been acquired.
| Returns | |
|---|---|
| int | |
getStreamType
public int getStreamType ()
Returns the volume stream type of this AudioTrack.
 Compare the result against AudioManager.STREAM_VOICE_CALL,
 AudioManager.STREAM_SYSTEM, AudioManager.STREAM_RING,
 AudioManager.STREAM_MUSIC, AudioManager.STREAM_ALARM,
 AudioManager.STREAM_NOTIFICATION, AudioManager.STREAM_DTMF or
 AudioManager.STREAM_ACCESSIBILITY.
| Returns | |
|---|---|
| int | |
getTimestamp
public boolean getTimestamp (AudioTimestamp timestamp)
Poll for a timestamp on demand.
If you need to track timestamps during initial warmup or after a routing or mode change, you should request a new timestamp periodically until the reported timestamps show that the frame position is advancing, or until it becomes clear that timestamps are unavailable for this route.
After the clock is advancing at a stable rate, query for a new timestamp approximately once every 10 seconds to once per minute. Calling this method more often is inefficient. It is also counter-productive to call this method more often than recommended, because the short-term differences between successive timestamp reports are not meaningful. If you need a high-resolution mapping between frame position and presentation time, consider implementing that at application level, based on low-resolution timestamps.
The audio data at the returned position may either already have been presented, or may have not yet been presented but is committed to be presented. It is not possible to request the time corresponding to a particular position, or to request the (fractional) position corresponding to a particular time. If you need such features, consider implementing them at application level.
| Parameters | |
|---|---|
| timestamp | AudioTimestamp: a reference to a non-null AudioTimestamp instance allocated
        and owned by caller. | 
| Returns | |
|---|---|
| boolean | true if a timestamp is available, or false if no timestamp is available.
         If a timestamp is available,
         the AudioTimestamp instance is filled in with a position in frame units, together
         with the estimated time when that frame was presented or is committed to
         be presented.
         In the case that no timestamp is available, any supplied instance is left unaltered.
         A timestamp may be temporarily unavailable while the audio clock is stabilizing,
         or during and immediately after a route change.
         A timestamp is permanently unavailable for a given route if the route does not support
         timestamps.  In this case, the approximate frame position can be obtained
         using getPlaybackHeadPosition().
         However, it may be useful to continue to query for
         timestamps occasionally, to recover after a route change. | 
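For illustration, a sketch of on-demand polling; `framesWritten` is a hypothetical counter of frames the application has written so far, and `track` is a playing AudioTrack:

```java
import android.media.AudioTimestamp;

AudioTimestamp ts = new AudioTimestamp();
if (track.getTimestamp(ts)) {
    // ts.framePosition was (or is committed to be) presented at ts.nanoTime.
    long framesPending = framesWritten - ts.framePosition;
    long pendingDurationNs =
            (framesPending * 1_000_000_000L) / track.getSampleRate();
    // pendingDurationNs approximates how far ahead of the sink the app has written.
} else {
    // No timestamp available (warm-up or route change): fall back to
    // getPlaybackHeadPosition() and poll again later.
}
```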
getUnderrunCount
public int getUnderrunCount ()
Returns the number of underrun occurrences in the application-level write buffer since the AudioTrack was created. An underrun occurs if the application does not write audio data quickly enough, causing the buffer to underflow and a potential audio glitch or pop.
 Underruns are less likely when buffer sizes are large.
 It may be possible to eliminate underruns by recreating the AudioTrack with
 a larger buffer, or by using setBufferSizeInFrames(int) to dynamically
 increase the effective size of the buffer.
| Returns | |
|---|---|
| int | |
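A sketch of that pattern, assuming an initialized streaming track named `track` whose underrun count the app checks periodically (the `previousUnderruns` counter is kept by the app):

```java
// Sketch: when new underruns appear, grow the effective buffer toward its capacity.
int underruns = track.getUnderrunCount();
if (underruns > previousUnderruns) {
    previousUnderruns = underruns;
    int size = track.getBufferSizeInFrames();
    int capacity = track.getBufferCapacityInFrames();
    if (size < capacity) {
        // Double the effective size, clamped to the allocated capacity.
        track.setBufferSizeInFrames(Math.min(size * 2, capacity));
    }
}
```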
isDirectPlaybackSupported
public static boolean isDirectPlaybackSupported (AudioFormat format, AudioAttributes attributes)
      This method was deprecated
      in API level 33.
    Use AudioManager.getDirectPlaybackSupport(AudioFormat, AudioAttributes)
             instead.
  
Returns whether direct playback of an audio format with the provided attributes is currently supported on the system.
Direct playback means that the audio stream is not resampled or downmixed by the framework. Checking for direct support can help the app select the representation of audio content that most closely matches the capabilities of the device and peripherals (e.g. A/V receiver) connected to it. Note that the provided stream can still be re-encoded or mixed with other streams, if needed.
Also note that this query only provides information about the support of an audio format. It does not indicate whether the resources necessary for the playback are available at that instant.
| Parameters | |
|---|---|
| format | AudioFormat: a non-null AudioFormat instance describing the format of
   the audio data. | 
| attributes | AudioAttributes: a non-null AudioAttributes instance. | 
| Returns | |
|---|---|
| boolean | true if the given audio format can be played directly. | 
isOffloadedPlayback
public boolean isOffloadedPlayback ()
Returns whether the track was built with Builder.setOffloadedPlayback(boolean) set
 to true.
| Returns | |
|---|---|
| boolean | true if the track is using offloaded playback. | 
pause
public void pause ()
Pauses the playback of the audio data. Data that has not been played
 back will not be discarded. Subsequent calls to play() will play
 this data back. See flush() to discard this data.
| Throws | |
|---|---|
| IllegalStateException | | 
play
public void play ()
Starts playing an AudioTrack.
 If track's creation mode is MODE_STATIC, you must have called one of
 the write methods (write(byte[], int, int), write(byte[], int, int, int),
 write(short[], int, int), write(short[], int, int, int),
 write(float[], int, int, int), or write(java.nio.ByteBuffer, int, int)) prior to
 play().
 
 If the mode is MODE_STREAM, you can optionally prime the data path prior to
 calling play(), by writing up to bufferSizeInBytes (from constructor).
 If you don't call write() first, or if you call write() but with an insufficient amount of
 data, then the track will be in underrun state at play().  In this case,
 playback will not actually start playing until the data path is filled to a
 device-specific minimum level.  This requirement for the path to be filled
 to a minimum level is also true when resuming audio playback after calling stop().
 Similarly the buffer will need to be filled up again after
 the track underruns due to failure to call write() in a timely manner with sufficient data.
 For portability, an application should prime the data path to the maximum allowed
 by writing data until the write() method returns a short transfer count.
 This allows play() to start immediately, and reduces the chance of underrun.
 As of Build.VERSION_CODES.S the minimum level to start playing
 can be obtained using getStartThresholdInFrames() and set with
 setStartThresholdInFrames(int).
| Throws | |
|---|---|
| IllegalStateException | if the track isn't properly initialized | 
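A sketch of the priming pattern described above for a MODE_STREAM track (the track `track` and a filled PCM source array `pcm` are assumed):

```java
// Sketch: prime the data path before play(). Before play() the track is stopped,
// so write() returns a short transfer count once the buffer is full.
int written = 0;
while (written < pcm.length) {
    int n = track.write(pcm, written, pcm.length - written);
    if (n <= 0) {
        break;          // buffer full (short transfer) or an error code
    }
    written += n;
}
track.play();           // can start immediately because the path is primed
```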
registerStreamEventCallback
public void registerStreamEventCallback (Executor executor, AudioTrack.StreamEventCallback eventCallback)
Registers a callback for the notification of stream events.
 This callback can only be registered for instances operating in offloaded mode
 (see AudioTrack.Builder.setOffloadedPlayback(boolean) and
 AudioManager.isOffloadedPlaybackSupported(AudioFormat,AudioAttributes) for
 more details).
| Parameters | |
|---|---|
| executor | Executor: Executor to handle the callbacks.
 This value cannot be null.
 Callback and listener events are dispatched through this Executor, providing an easy way to control which thread is
 used. To dispatch events through the main thread of your
 application, you can use Context.getMainExecutor().
 Otherwise, provide an Executor that dispatches to an appropriate thread. | 
| eventCallback | AudioTrack.StreamEventCallback: the callback to receive the stream event notifications.
 This value cannot be null. | 
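A sketch of registering such a callback on the main thread, assuming a track built with offloaded playback enabled and an available `context`:

```java
// Sketch: stream event callback for an offloaded track, dispatched on the main thread.
AudioTrack.StreamEventCallback callback = new AudioTrack.StreamEventCallback() {
    @Override
    public void onDataRequest(AudioTrack t, int sizeInFrames) {
        // The sink can accept roughly sizeInFrames more frames; write more data here.
    }
    @Override
    public void onPresentationEnded(AudioTrack t) {
        // Everything written has been presented; safe to stop or release.
    }
    @Override
    public void onTearDown(AudioTrack t) {
        // The offloaded path was torn down; recreate the track to keep playing.
    }
};
track.registerStreamEventCallback(context.getMainExecutor(), callback);
```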
reloadStaticData
public int reloadStaticData ()
Sets the playback head position within the static buffer to zero,
 that is, it rewinds to the start of the static buffer.
 The track must be stopped or paused, and
 the track's creation mode must be MODE_STATIC.
 
 As of Build.VERSION_CODES.M, also resets the value returned by
 getPlaybackHeadPosition() to zero.
 For earlier API levels, the reset behavior is unspecified.
 
 Use setPlaybackHeadPosition(int) with a zero position
 if the reset of getPlaybackHeadPosition() is not needed.
| Returns | |
|---|---|
| int | error code or success, see SUCCESS, ERROR_BAD_VALUE, ERROR_INVALID_OPERATION | 
removeOnCodecFormatChangedListener
public void removeOnCodecFormatChangedListener (AudioTrack.OnCodecFormatChangedListener listener)
Removes an OnCodecFormatChangedListener which has been previously added
 to receive codec format change events.
| Parameters | |
|---|---|
| listener | AudioTrack.OnCodecFormatChangedListener: The previously added OnCodecFormatChangedListener interface
 to remove.
 This value cannot be null. | 
removeOnRoutingChangedListener
public void removeOnRoutingChangedListener (AudioTrack.OnRoutingChangedListener listener)
      This method was deprecated
      in API level 24.
    users should switch to the general purpose
             AudioRouting.OnRoutingChangedListener class instead.
  
Removes an OnRoutingChangedListener which has been previously added
 to receive rerouting notifications.
| Parameters | |
|---|---|
| listener | AudioTrack.OnRoutingChangedListener: The previously added OnRoutingChangedListener interface to remove. | 
removeOnRoutingChangedListener
public void removeOnRoutingChangedListener (AudioRouting.OnRoutingChangedListener listener)
Removes an AudioRouting.OnRoutingChangedListener which has been previously added
 to receive rerouting notifications.
| Parameters | |
|---|---|
| listener | AudioRouting.OnRoutingChangedListener: The previously added AudioRouting.OnRoutingChangedListener interface
 to remove. | 
setAudioDescriptionMixLeveldB
public boolean setAudioDescriptionMixLeveldB (float level)
Sets the Audio Description mix level in dB.
 For AudioTracks incorporating a secondary Audio Description stream
 (where such contents may be sent through an Encapsulation Mode
 other than ENCAPSULATION_MODE_NONE, or internally by a HW channel),
 the level of mixing of the Audio Description to the Main Audio stream
 is controlled by this method.
 Such mixing occurs prior to overall volume scaling.
| Parameters | |
|---|---|
| level | float: a floating point value between Float.NEGATIVE_INFINITY and +48.f,
     where Float.NEGATIVE_INFINITY means the Audio Description is not mixed
     and a level of 0.f means the Audio Description is mixed without scaling.
 Value is 48.0f or less | 
| Returns | |
|---|---|
| boolean | true on success, false on failure. | 
setAuxEffectSendLevel
public int setAuxEffectSendLevel (float level)
Sets the send level of the audio track to the attached auxiliary effect
 attachAuxEffect(int).  Effect levels
 are clamped to the closed interval [0.0, max] where
 max is the value of getMaxVolume().
 A value of 0.0 results in no effect, and a value of 1.0 is full send.
 
By default the send level is 0.0f, so even if an effect is attached to the player this method must be called for the effect to be applied.
Note that the passed level value is a linear scalar. UI controls should be scaled logarithmically: the gain applied by the audio framework ranges from -72 dB to at least 0 dB, so an appropriate conversion from linear UI input x to level is:
 x == 0 -> level = 0
 0 < x <= R -> level = 10^(72*(x-R)/20/R)
| Parameters | |
|---|---|
| level | float: linear send level
 Value is 0.0f or greater | 
| Returns | |
|---|---|
| int | error code or success, see SUCCESS, ERROR_INVALID_OPERATION, ERROR | 
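A sketch of that UI mapping (the helper and variable names are illustrative, not part of the API):

```java
// Sketch: convert a linear UI slider position x in [0, R] to a send level,
// following the logarithmic mapping described above.
static float uiToSendLevel(float x, float r) {
    if (x <= 0f) {
        return 0f;                                   // x == 0 -> no effect send
    }
    // 0 < x <= R -> level = 10^(72*(x-R)/20/R)
    return (float) Math.pow(10.0, 72.0 * (x - r) / 20.0 / r);
}
```

For example, with R = 100 a slider at 50 maps to 10^(-1.8), roughly 0.016.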
setBufferSizeInFrames
public int setBufferSizeInFrames (int bufferSizeInFrames)
Limits the effective size of the AudioTrack buffer
 that the application writes to.
 
A write to this AudioTrack will not fill the buffer beyond this limit. If a blocking write is used then the write will block until the data can fit within this limit.
Changing this limit modifies the latency associated with the buffer for this track. A smaller size will give lower latency but there may be more glitches due to buffer underruns.
The actual size used may not be equal to this requested size.
 It will be limited to a valid range with a maximum of
 getBufferCapacityInFrames().
 It may also be adjusted slightly for internal reasons.
 If bufferSizeInFrames is less than zero then ERROR_BAD_VALUE
 will be returned.
 
This method is supported for PCM audio at all API levels. Compressed audio is supported in API levels 33 and above. For compressed streams the size of a frame is considered to be exactly one byte.
| Parameters | |
|---|---|
| bufferSizeInFrames | int: requested buffer size in frames
 Value is 0 or greater | 
| Returns | |
|---|---|
| int | the actual buffer size in frames or an error code, ERROR_BAD_VALUE, ERROR_INVALID_OPERATION | 
| Throws | |
|---|---|
| IllegalStateException | if track is not initialized. | 
setDualMonoMode
public boolean setDualMonoMode (int dualMonoMode)
Sets the Dual Mono mode presentation on the output device. The Dual Mono mode is generally applied to stereo audio streams where the left and right channels come from separate sources. For compressed audio, where the decoding is done in hardware, Dual Mono presentation needs to be performed by the hardware output device as the PCM audio is not available to the framework.
| Parameters | |
|---|---|
| dualMonoMode | int: one of DUAL_MONO_MODE_OFF, DUAL_MONO_MODE_LR, DUAL_MONO_MODE_LL, DUAL_MONO_MODE_RR.
 Value is DUAL_MONO_MODE_OFF, DUAL_MONO_MODE_LR, DUAL_MONO_MODE_LL, or DUAL_MONO_MODE_RR | 
| Returns | |
|---|---|
| boolean | true on success, false on failure if the output device does not support Dual Mono mode. | 
setLogSessionId
public void setLogSessionId (LogSessionId logSessionId)
Sets a LogSessionId instance to this AudioTrack for metrics collection.
| Parameters | |
|---|---|
| logSessionId | LogSessionId: a LogSessionId instance which is used to
        identify this object to the metrics service. Properly generated
        Ids must be obtained from the Java metrics service and should
        be considered opaque. Use LogSessionId.LOG_SESSION_ID_NONE to remove the
        logSessionId association.
 This value cannot be null. | 
| Throws | |
|---|---|
| IllegalStateException | if AudioTrack not initialized. | 
setLoopPoints
public int setLoopPoints (int startInFrames, 
                int endInFrames, 
                int loopCount)
Sets the loop points and the loop count. The loop can be infinite.
 Similarly to setPlaybackHeadPosition,
 the track must be stopped or paused for the loop points to be changed,
 and must use the MODE_STATIC mode.
| Parameters | |
|---|---|
| startInFrames | int: loop start marker expressed in frames.
 Zero corresponds to start of buffer.
 The start marker must not be greater than or equal to the buffer size in frames, or negative.
 Value is 0 or greater | 
| endInFrames | int: loop end marker expressed in frames.
 The total buffer size in frames corresponds to end of buffer.
 The end marker must not be greater than the buffer size in frames.
 For looping, the end marker must not be less than or equal to the start marker,
 but to disable looping
 it is permitted for start marker, end marker, and loop count to all be 0.
 If any input parameters are out of range, this method returns ERROR_BAD_VALUE.
 If the loop period (endInFrames - startInFrames) is too small for the implementation to
 support, ERROR_BAD_VALUE is returned.
 The loop range is the interval [startInFrames, endInFrames).
 As of Build.VERSION_CODES.M, the position is left unchanged,
 unless it is greater than or equal to the loop end marker, in which case
 it is forced to the loop start marker.
 For earlier API levels, the effect on position is unspecified.
 Value is 0 or greater | 
| loopCount | int: the number of times the loop is looped; must be greater than or equal to -1.
    A value of -1 means infinite looping, and 0 disables looping.
    A value of positive N means to "loop" (go back) N times.  For example,
    a value of one means to play the region two times in total.
 Value is -1 or greater | 
| Returns | |
|---|---|
| int | error code or success, see SUCCESS, ERROR_BAD_VALUE, ERROR_INVALID_OPERATION | 
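A sketch for a MODE_STATIC track (the track `track` is assumed to be stopped or paused, with a static buffer of `totalFrames` frames):

```java
// Sketch: loop the whole static buffer three extra times (four plays in total).
int result = track.setLoopPoints(0, totalFrames, 3);   // loop range is [0, totalFrames)
if (result == AudioTrack.SUCCESS) {
    track.play();
} else {
    // ERROR_BAD_VALUE (markers out of range) or ERROR_INVALID_OPERATION (wrong mode/state)
}
```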
setNotificationMarkerPosition
public int setNotificationMarkerPosition (int markerInFrames)
Sets the position of the notification marker. At most one marker can be active.
| Parameters | |
|---|---|
| markerInFrames | int: marker position in wrapping frame units similar to getPlaybackHeadPosition(), or zero to disable the marker.
 To set a marker at a position which would appear as zero due to wraparound,
 a workaround is to use a non-zero position near zero, such as -1 or 1. | 
| Returns | |
|---|---|
| int | error code or success, see SUCCESS, ERROR_BAD_VALUE, ERROR_INVALID_OPERATION | 
setOffloadDelayPadding
public void setOffloadDelayPadding (int delayInFrames, 
                int paddingInFrames)
Configures the delay and padding values for the current compressed stream playing
 in offload mode.
 This can only be used on a track successfully initialized with
 AudioTrack.Builder.setOffloadedPlayback(boolean). The unit is frames, where a
 frame indicates the number of samples per channel, e.g. 100 frames for a stereo compressed
 stream corresponds to 200 decoded interleaved PCM samples.
| Parameters | |
|---|---|
| delayInFrames | int: number of frames to be ignored at the beginning of the stream. A value
     of 0 indicates no delay is to be applied.
 Value is 0 or greater | 
| paddingInFrames | int: number of frames to be ignored at the end of the stream. A value
     of 0 indicates no padding is to be applied.
 Value is 0 or greater | 
setOffloadEndOfStream
public void setOffloadEndOfStream ()
Declares that the last write() operation on this track provided the last buffer of this
 stream.
 After the end of stream, previously set padding and delay values are ignored.
 Can only be called if the AudioTrack is opened in offload mode.
 Can only be called if the AudioTrack is in state PLAYSTATE_PLAYING.
 Use this method in the same thread as any write() operation.
setPlaybackHeadPosition
public int setPlaybackHeadPosition (int positionInFrames)
Sets the playback head position within the static buffer.
 The track must be stopped or paused for the position to be changed,
 and must use the MODE_STATIC mode.
| Parameters | |
|---|---|
| positionInFrames | int: playback head position within buffer, expressed in frames.
 Zero corresponds to start of buffer.
 The position must not be greater than the buffer size in frames, or negative.
 Though this method and getPlaybackHeadPosition() have similar names,
 the position values have different meanings.
 If looping is currently enabled and the new position is greater than or equal to the
 loop end marker, the behavior varies by API level: as of Build.VERSION_CODES.M,
 the looping is first disabled and then the position is set.
 For earlier API levels, the behavior is unspecified.
 Value is 0 or greater | 
| Returns | |
|---|---|
| int | error code or success, see SUCCESS, ERROR_BAD_VALUE, ERROR_INVALID_OPERATION | 
setPlaybackParams
public void setPlaybackParams (PlaybackParams params)
Sets the playback parameters.
 This method returns failure if it cannot apply the playback parameters.
 One possible cause is that the parameters for speed or pitch are out of range.
 Another possible cause is that the AudioTrack is streaming
 (see MODE_STREAM) and the
 buffer size is too small. For speeds greater than 1.0f, the AudioTrack buffer
 on configuration must be larger than the speed multiplied by the minimum size
 (see getMinBufferSize(int, int, int)) to allow proper playback.
| Parameters | |
|---|---|
| params | PlaybackParams: see PlaybackParams. In particular,
 speed, pitch, and audio mode should be set.
 This value cannot be null. | 
| Throws | |
|---|---|
| IllegalArgumentException | if the parameters are invalid or not accepted. | 
| IllegalStateException | if track is not initialized. | 
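A sketch of a speed change that preserves pitch, assuming an initialized track named `track` with a sufficiently large buffer:

```java
// Sketch: play 25% faster at the original pitch.
PlaybackParams params = new PlaybackParams()
        .setSpeed(1.25f)
        .setPitch(1.0f)
        .setAudioFallbackMode(PlaybackParams.AUDIO_FALLBACK_MODE_DEFAULT);
try {
    track.setPlaybackParams(params);
} catch (IllegalArgumentException e) {
    // Speed or pitch out of range, or the streaming buffer is too small for this speed.
}
```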
setPlaybackPositionUpdateListener
public void setPlaybackPositionUpdateListener (AudioTrack.OnPlaybackPositionUpdateListener listener, Handler handler)
Sets the listener the AudioTrack notifies when a previously set marker is reached or for each periodic playback head position update. Use this method to receive AudioTrack events in the Handler associated with another thread than the one in which you created the AudioTrack instance.
| Parameters | |
|---|---|
| listener | AudioTrack.OnPlaybackPositionUpdateListener: the listener to be notified of marker and periodic position events. | 
| handler | Handler: the Handler that will receive the event notification messages. | 
setPlaybackPositionUpdateListener
public void setPlaybackPositionUpdateListener (AudioTrack.OnPlaybackPositionUpdateListener listener)
Sets the listener the AudioTrack notifies when a previously set marker is reached or for each periodic playback head position update. Notifications will be received in the same thread as the one in which the AudioTrack instance was created.
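A sketch combining the marker and periodic notifications, assuming a 48 kHz streaming track named `track`:

```java
// Sketch: notify every half second of playback and once at the 10-second mark.
track.setPositionNotificationPeriod(24000);        // 0.5 s of frames at 48 kHz
track.setNotificationMarkerPosition(10 * 48000);   // marker at 10 s of playback
track.setPlaybackPositionUpdateListener(new AudioTrack.OnPlaybackPositionUpdateListener() {
    @Override
    public void onMarkerReached(AudioTrack t) {
        // Fires once when the playback head reaches the marker.
    }
    @Override
    public void onPeriodicNotification(AudioTrack t) {
        // Fires for each elapsed notification period.
    }
});
```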
setPlaybackRate
public int setPlaybackRate (int sampleRateInHz)
Sets the playback sample rate for this track. This sets the sampling rate at which
 the audio data will be consumed and played back
 (as set by the sampleRateInHz parameter in the
 AudioTrack(int, int, int, int, int, int) constructor),
 not the original sampling rate of the
 content. For example, setting it to half the sample rate of the content will cause the
 playback to last twice as long, but will also result in a pitch shift down by one octave.
 The valid sample rate range is from 1 Hz to twice the value returned by
 getNativeOutputSampleRate(int).
 Use setPlaybackParams(android.media.PlaybackParams) for speed control.
 
 This method may also be used to repurpose an existing AudioTrack
 for playback of content of differing sample rate,
 but with identical encoding and channel mask.
| Parameters | |
|---|---|
| sampleRateInHz | int: the sample rate expressed in Hz | 
| Returns | |
|---|---|
| int | error code or success, see SUCCESS, ERROR_BAD_VALUE, ERROR_INVALID_OPERATION | 
setPositionNotificationPeriod
public int setPositionNotificationPeriod (int periodInFrames)
Sets the period for the periodic notification event.
| Parameters | |
|---|---|
| periodInFrames | int: update period expressed in frames.
 Zero period means no position updates.  A negative period is not allowed. | 
| Returns | |
|---|---|
| int | error code or success, see SUCCESS, ERROR_INVALID_OPERATION | 
setPreferredDevice
public boolean setPreferredDevice (AudioDeviceInfo deviceInfo)
Specifies an audio device (via an AudioDeviceInfo object) to route
 the output from this AudioTrack.
| Parameters | |
|---|---|
| deviceInfo | AudioDeviceInfo: The AudioDeviceInfo specifying the audio sink.
  If deviceInfo is null, default routing is restored. | 
| Returns | |
|---|---|
| boolean | true if successful, false if the specified AudioDeviceInfo is non-null and
 does not correspond to a valid audio output device. | 
setPresentation
public int setPresentation (AudioPresentation presentation)
Sets the audio presentation.
 If the audio presentation is invalid then ERROR_BAD_VALUE will be returned.
 If a multi-stream decoder (MSD) is not present, or the format does not support
 multiple presentations, then ERROR_INVALID_OPERATION will be returned.
 ERROR is returned in case of any other error.
| Parameters | |
|---|---|
| presentation | AudioPresentation: see AudioPresentation. In particular, id should be set.
 This value cannot be null. | 
| Returns | |
|---|---|
| int | error code or success, see SUCCESS, ERROR, ERROR_BAD_VALUE, ERROR_INVALID_OPERATION | 
| Throws | |
|---|---|
| IllegalArgumentException | if the audio presentation is null. | 
| IllegalStateException | if track is not initialized. | 
setStartThresholdInFrames
public int setStartThresholdInFrames (int startThresholdInFrames)
Sets the streaming start threshold for an AudioTrack.
 
 The streaming start threshold is the buffer level that the written audio
 data must reach for audio streaming to start after play() is called.
 
For compressed streams, the size of a frame is considered to be exactly one byte.
| Parameters | |
|---|---|
| startThresholdInFrames | int: the desired start threshold.
 Value is 1 or greater | 
| Returns | |
|---|---|
| int | the actual start threshold in frames value. This is
         an integer between 1 and the buffer capacity
         (see getBufferCapacityInFrames()),
         and might change if the output sink changes after track creation.
 Value is 1 or greater | 
| Throws | |
|---|---|
| IllegalStateException | if the track is not initialized or the
         track transfer mode is not MODE_STREAM. | 
| IllegalArgumentException | if startThresholdInFrames is not positive. | 
setStereoVolume
public int setStereoVolume (float leftGain, 
                float rightGain)
      This method was deprecated
      in API level 21.
    Applications should use setVolume(float) instead, as it
 more gracefully scales down to mono, and up to multi-channel content beyond stereo.
  
Sets the specified left and right output gain values on the AudioTrack.
Gain values are clamped to the closed interval [0.0, max] where
 max is the value of getMaxVolume().
 A value of 0.0 results in zero gain (silence), and
 a value of 1.0 means unity gain (signal unchanged).
 The default value is 1.0 meaning unity gain.
 
The word "volume" in the API name is historical; this is actually a linear gain.
| Parameters | |
|---|---|
| leftGain | float: output gain for the left channel. | 
| rightGain | float: output gain for the right channel. | 
| Returns | |
|---|---|
| int | error code or success, see SUCCESS, ERROR_INVALID_OPERATION | 
setVolume
public int setVolume (float gain)
Sets the specified output gain value on all channels of this track.
Gain values are clamped to the closed interval [0.0, max] where
 max is the value of getMaxVolume().
 A value of 0.0 results in zero gain (silence), and
 a value of 1.0 means unity gain (signal unchanged).
 The default value is 1.0 meaning unity gain.
 
This API is preferred over setStereoVolume(float, float), as it
 more gracefully scales down to mono, and up to multi-channel content beyond stereo.
 
The word "volume" in the API name is historical; this is actually a linear gain.
| Parameters | |
|---|---|
| gain | float: output gain for all channels. | 
| Returns | |
|---|---|
| int | error code or success, see SUCCESS, ERROR_INVALID_OPERATION | 
stop
public void stop ()
Stops playing the audio data.
 When used on an instance created in MODE_STREAM mode, audio will stop playing
 after the last buffer that was written has been played. For an immediate stop, use
 pause(), followed by flush() to discard audio data that hasn't been played
 back yet.
| Throws | |
|---|---|
| IllegalStateException | | 
unregisterStreamEventCallback
public void unregisterStreamEventCallback (AudioTrack.StreamEventCallback eventCallback)
Unregisters the callback for notification of stream events, previously registered
 with registerStreamEventCallback(java.util.concurrent.Executor, android.media.AudioTrack.StreamEventCallback).
| Parameters | |
|---|---|
| eventCallback | AudioTrack.StreamEventCallback: the callback to unregister.
 This value cannot be null. | 
write
public int write (float[] audioData, 
                int offsetInFloats, 
                int sizeInFloats, 
                int writeMode)
Writes the audio data to the audio sink for playback (streaming mode),
 or copies audio data for later playback (static buffer mode).
 The format specified in the AudioTrack constructor should be
 AudioFormat.ENCODING_PCM_FLOAT to correspond to the data in the array.
 
 In streaming mode, the blocking behavior depends on the write mode.  If the write mode is
 WRITE_BLOCKING, the write will normally block until all the data has been enqueued
 for playback, and will return a full transfer count.  However, if the write mode is
 WRITE_NON_BLOCKING, or the track is stopped or paused on entry, or another thread
 interrupts the write by calling stop or pause, or an I/O error
 occurs during the write, then the write may return a short transfer count.
 
In static buffer mode, copies the data to the buffer starting at offset 0, and the write mode is ignored. Note that the actual playback of this data might occur after this function returns.
| Parameters | |
|---|---|
| audioData | float: the array that holds the data to write.
     The implementation does not clip for sample values within the nominal range
     [-1.0f, 1.0f], provided that all gains in the audio pipeline are
     less than or equal to unity (1.0f), and in the absence of post-processing effects
     that could add energy, such as reverb.  For the convenience of applications
     that compute samples using filters with non-unity gain,
     sample values +3 dB beyond the nominal range are permitted.
     However such values may eventually be limited or clipped, depending on various gains
     and later processing in the audio path.  Therefore applications are encouraged
     to provide sample values within the nominal range.
 This value cannot be null. | 
| offsetInFloats | int: the offset, expressed as a number of floats,
     in audioData where the data to write starts.
    Must not be negative, or cause the data access to go out of bounds of the array. | 
| sizeInFloats | int: the number of floats to write in audioData after the offset.
    Must not be negative, or cause the data access to go out of bounds of the array. | 
| writeMode | int: one of WRITE_BLOCKING, WRITE_NON_BLOCKING. It has no
     effect in static mode.
 With WRITE_BLOCKING, the write will block until all data has been written
     to the audio sink.
 With WRITE_NON_BLOCKING, the write will return immediately after
     queuing as much audio data for playback as possible without blocking.
 Value is WRITE_BLOCKING, or WRITE_NON_BLOCKING | 
| Returns | |
|---|---|
| int | zero or the positive number of floats that were written, or one of the following
    error codes. The number of floats will be a multiple of the channel count not to
    exceed sizeInFloats. 
 | 
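A sketch of a blocking streaming loop for float PCM, assuming `track` is a MODE_STREAM track built with ENCODING_PCM_FLOAT and `fillBuffer()` is a hypothetical source that returns the number of floats it produced:

```java
// Sketch: stream float PCM with blocking writes until the source is exhausted.
float[] buffer = new float[2048];                  // a multiple of the channel count
track.play();
while (true) {
    int count = fillBuffer(buffer);                // hypothetical: floats produced
    if (count <= 0) {
        break;                                     // end of source
    }
    int written = track.write(buffer, 0, count, AudioTrack.WRITE_BLOCKING);
    if (written < 0) {
        break;                                     // negative values are error codes
    }
}
track.stop();                                      // plays out what was written
```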
write
public int write (short[] audioData, 
                int offsetInShorts, 
                int sizeInShorts)
Writes the audio data to the audio sink for playback (streaming mode),
 or copies audio data for later playback (static buffer mode).
 The format specified in the AudioTrack constructor should be
 AudioFormat.ENCODING_PCM_16BIT to correspond to the data in the array.
 
In streaming mode, the write will normally block until all the data has been enqueued for playback, and will return a full transfer count. However, if the track is stopped or paused on entry, or another thread interrupts the write by calling stop or pause, or an I/O error occurs during the write, then the write may return a short transfer count.
In static buffer mode, copies the data to the buffer starting at offset 0. Note that the actual playback of this data might occur after this function returns.
| Parameters | |
|---|---|
| audioData | short: the array that holds the data to play.
 This value cannot be null. | 
| offsetInShorts | int: the offset expressed in shorts in audioData where the data to play
     starts.
    Must not be negative, or cause the data access to go out of bounds of the array. | 
| sizeInShorts | int: the number of shorts to read in audioData after the offset.
    Must not be negative, or cause the data access to go out of bounds of the array. | 
| Returns | |
|---|---|
| int | zero or the positive number of shorts that were written, or one of the following
    error codes. The number of shorts will be a multiple of the channel count not to
    exceed sizeInShorts. 
 This is equivalent to write(short[], int, int, int) with writeMode set to WRITE_BLOCKING. | 
write
public int write (byte[] audioData, 
                int offsetInBytes, 
                int sizeInBytes)
Writes the audio data to the audio sink for playback (streaming mode),
 or copies audio data for later playback (static buffer mode).
 The format specified in the AudioTrack constructor should be
 AudioFormat.ENCODING_PCM_8BIT to correspond to the data in the array.
 The format can be AudioFormat.ENCODING_PCM_16BIT, but this is deprecated.
 
In streaming mode, the write will normally block until all the data has been enqueued for playback, and will return a full transfer count. However, if the track is stopped or paused on entry, or another thread interrupts the write by calling stop or pause, or an I/O error occurs during the write, then the write may return a short transfer count.
In static buffer mode, copies the data to the buffer starting at offset 0. Note that the actual playback of this data might occur after this function returns.
| Parameters | |
|---|---|
| audioData | byte: the array that holds the data to play.
 This value cannot be null. | 
| offsetInBytes | int: the offset expressed in bytes in audioData where the data to write
    starts.
    Must not be negative, or cause the data access to go out of bounds of the array. | 
| sizeInBytes | int: the number of bytes to write in audioData after the offset.
    Must not be negative, or cause the data access to go out of bounds of the array. | 
| Returns | |
|---|---|
| int | zero or the positive number of bytes that were written, or one of the following
    error codes. The number of bytes will be a multiple of the frame size in bytes
    not to exceed sizeInBytes. 
 This is equivalent to write(byte[], int, int, int) with writeMode set to WRITE_BLOCKING. | 
write
public int write (ByteBuffer audioData, int sizeInBytes, int writeMode)
Writes the audio data to the audio sink for playback (streaming mode), or copies audio data for later playback (static buffer mode). The audioData in ByteBuffer should match the format specified in the AudioTrack constructor.
 In streaming mode, the blocking behavior depends on the write mode.  If the write mode is
 WRITE_BLOCKING, the write will normally block until all the data has been enqueued
 for playback, and will return a full transfer count.  However, if the write mode is
 WRITE_NON_BLOCKING, or the track is stopped or paused on entry, or another thread
 interrupts the write by calling stop or pause, or an I/O error
 occurs during the write, then the write may return a short transfer count.
 
In static buffer mode, copies the data to the buffer starting at offset 0, and the write mode is ignored. Note that the actual playback of this data might occur after this function returns.
| Parameters | |
|---|---|
| audioData | ByteBuffer: the buffer that holds the data to write, starting at the position reported
     by audioData.position().
 Note that upon return, the buffer position (audioData.position()) will
     have been advanced to reflect the amount of data that was successfully written to
     the AudioTrack.
 This value cannot be null. | 
| sizeInBytes | int: number of bytes to write.  It is recommended but not enforced
     that the number of bytes requested be a multiple of the frame size (sample size in
     bytes multiplied by the channel count).
 Note this may differ from audioData.remaining(), but cannot exceed it. | 
| writeMode | int: one of WRITE_BLOCKING, WRITE_NON_BLOCKING. It has no
     effect in static mode.
 With WRITE_BLOCKING, the write will block until all data has been written
     to the audio sink.
 With WRITE_NON_BLOCKING, the write will return immediately after
     queuing as much audio data for playback as possible without blocking.
 Value is WRITE_BLOCKING, or WRITE_NON_BLOCKING | 
| Returns | |
|---|---|
| int | zero or the positive number of bytes that were written, or one of the following
    error codes. 
 | 
write
public int write (ByteBuffer audioData, int sizeInBytes, int writeMode, long timestamp)
Writes the audio data to the audio sink for playback in streaming mode on a HW_AV_SYNC track. The blocking behavior will depend on the write mode.
| Parameters | |
|---|---|
| audioData | ByteBuffer: the buffer that holds the data to write, starting at the position reported
     by audioData.position().
 Note that upon return, the buffer position (audioData.position()) will
     have been advanced to reflect the amount of data that was successfully written to
     the AudioTrack.
 This value cannot be null. | 
| sizeInBytes | int: number of bytes to write.  It is recommended but not enforced
     that the number of bytes requested be a multiple of the frame size (sample size in
     bytes multiplied by the channel count).
 Note this may differ from audioData.remaining(), but cannot exceed it. | 
| writeMode | int: one of WRITE_BLOCKING, WRITE_NON_BLOCKING.
 With WRITE_BLOCKING, the write will block until all data has been written
     to the audio sink.
 With WRITE_NON_BLOCKING, the write will return immediately after
     queuing as much audio data for playback as possible without blocking.
 Value is WRITE_BLOCKING, or WRITE_NON_BLOCKING | 
| timestamp | long: The timestamp, in nanoseconds, of the first decodable audio frame in the
     provided audioData. | 
| Returns | |
|---|---|
| int | zero or the positive number of bytes that were written, or one of the following
    error codes. 
 | 
write
public int write (short[] audioData, 
                int offsetInShorts, 
                int sizeInShorts, 
                int writeMode)
Writes the audio data to the audio sink for playback (streaming mode),
 or copies audio data for later playback (static buffer mode).
 The format specified in the AudioTrack constructor should be
 AudioFormat.ENCODING_PCM_16BIT to correspond to the data in the array.
 
 In streaming mode, the blocking behavior depends on the write mode.  If the write mode is
 WRITE_BLOCKING, the write will normally block until all the data has been enqueued
 for playback, and will return a full transfer count.  However, if the write mode is
 WRITE_NON_BLOCKING, or the track is stopped or paused on entry, or another thread
 interrupts the write by calling stop or pause, or an I/O error
 occurs during the write, then the write may return a short transfer count.
 
In static buffer mode, copies the data to the buffer starting at offset 0. Note that the actual playback of this data might occur after this function returns.
| Parameters | |
|---|---|
| audioData | short: the array that holds the data to write.
 This value cannot be null. | 
| offsetInShorts | int: the offset expressed in shorts in audioData where the data to write
     starts.
    Must not be negative, or cause the data access to go out of bounds of the array. | 
| sizeInShorts | int: the number of shorts to read in audioData after the offset.
    Must not be negative, or cause the data access to go out of bounds of the array. | 
| writeMode | int: one of WRITE_BLOCKING, WRITE_NON_BLOCKING. It has no
     effect in static mode.
 With WRITE_BLOCKING, the write will block until all data has been written
     to the audio sink.
 With WRITE_NON_BLOCKING, the write will return immediately after
     queuing as much audio data for playback as possible without blocking.
 Value is WRITE_BLOCKING, or WRITE_NON_BLOCKING | 
| Returns | |
|---|---|
| int | zero or the positive number of shorts that were written, or one of the following
    error codes. The number of shorts will be a multiple of the channel count not to
    exceed sizeInShorts. 
 | 
write
public int write (byte[] audioData, 
                int offsetInBytes, 
                int sizeInBytes, 
                int writeMode)
Writes the audio data to the audio sink for playback (streaming mode),
 or copies audio data for later playback (static buffer mode).
 The format specified in the AudioTrack constructor should be
 AudioFormat.ENCODING_PCM_8BIT to correspond to the data in the array.
 The format can be AudioFormat.ENCODING_PCM_16BIT, but this is deprecated.
 
 In streaming mode, the blocking behavior depends on the write mode.  If the write mode is
 WRITE_BLOCKING, the write will normally block until all the data has been enqueued
 for playback, and will return a full transfer count.  However, if the write mode is
 WRITE_NON_BLOCKING, or the track is stopped or paused on entry, or another thread
 interrupts the write by calling stop or pause, or an I/O error
 occurs during the write, then the write may return a short transfer count.
 
In static buffer mode, copies the data to the buffer starting at offset 0, and the write mode is ignored. Note that the actual playback of this data might occur after this function returns.
| Parameters | |
|---|---|
| audioData | byte: the array that holds the data to play.
 This value cannot be null. | 
| offsetInBytes | int: the offset expressed in bytes in audioData where the data to write
    starts.
    Must not be negative, or cause the data access to go out of bounds of the array. | 
| sizeInBytes | int: the number of bytes to write in audioData after the offset.
    Must not be negative, or cause the data access to go out of bounds of the array. | 
| writeMode | int: one of WRITE_BLOCKING, WRITE_NON_BLOCKING. It has no
     effect in static mode.
 With WRITE_BLOCKING, the write will block until all data has been written
     to the audio sink.
 With WRITE_NON_BLOCKING, the write will return immediately after
     queuing as much audio data for playback as possible without blocking.
 Value is WRITE_BLOCKING, or WRITE_NON_BLOCKING | 
| Returns | |
|---|---|
| int | zero or the positive number of bytes that were written, or one of the following
    error codes. The number of bytes will be a multiple of the frame size in bytes
    not to exceed sizeInBytes. 
 | 
Protected methods
finalize
protected void finalize ()
Called by the garbage collector on an object when garbage collection
 determines that there are no more references to the object.
 A subclass overrides the finalize method to dispose of
 system resources or to perform other cleanup.
 
 The general contract of finalize is that it is invoked
 if and when the Java virtual
 machine has determined that there is no longer any
 means by which this object can be accessed by any thread that has
 not yet died, except as a result of an action taken by the
 finalization of some other object or class which is ready to be
 finalized. The finalize method may take any action, including
 making this object available again to other threads; the usual purpose
 of finalize, however, is to perform cleanup actions before
 the object is irrevocably discarded. For example, the finalize method
 for an object that represents an input/output connection might perform
 explicit I/O transactions to break the connection before the object is
 permanently discarded.
 
 The finalize method of class Object performs no
 special action; it simply returns normally. Subclasses of
 Object may override this definition.
 
 The Java programming language does not guarantee which thread will
 invoke the finalize method for any given object. It is
 guaranteed, however, that the thread that invokes finalize will not
 be holding any user-visible synchronization locks when finalize is
 invoked. If an uncaught exception is thrown by the finalize method,
 the exception is ignored and finalization of that object terminates.
 
 After the finalize method has been invoked for an object, no
 further action is taken until the Java virtual machine has again
 determined that there is no longer any means by which this object can
 be accessed by any thread that has not yet died, including possible
 actions by other objects or classes which are ready to be finalized,
 at which point the object may be discarded.
 
 The finalize method is never invoked more than once by a Java
 virtual machine for any given object.
 
 Any exception thrown by the finalize method causes
 the finalization of this object to be halted, but is otherwise
 ignored.
getNativeFrameCount
protected int getNativeFrameCount ()
      This method was deprecated
      in API level 19.
    Use the identical public method getBufferSizeInFrames() instead.
  
Returns the frame count of the native AudioTrack buffer.
| Returns | |
|---|---|
| int | current size in frames of the AudioTrack buffer. | 
| Throws | |
|---|---|
| IllegalStateException | | 
setState
protected void setState (int state)
      This method was deprecated
      in API level 19.
    Only accessible by subclasses, which are not recommended for AudioTrack.
  
Sets the initialization state of the instance. This method was originally intended to be used in an AudioTrack subclass constructor to set a subclass-specific post-initialization state. However, subclasses of AudioTrack are no longer recommended, so this method is obsolete.
| Parameters | |
|---|---|
| state | int: the state of the AudioTrack instance | 