I'm developing a DTMF decoder. I need to record a voice call and then extract the frequency content. Everything works, but on some Android versions I get the following error when I set up the audio source:
"Invalid capture preset 3 for AudioAttributes"
In order to get the right parameters I have developed an algorithm:
private static final int[] FREQUENCY = {8000, 11025, 16000, 22050, 44100}; // 44100 is guaranteed to work in all devices
private static final int[] CHANNEL_CONFIGURATION = {AudioFormat.CHANNEL_IN_MONO,
AudioFormat.CHANNEL_IN_STEREO};
private static final int[] AUDIO_ENCODING = {AudioFormat.ENCODING_DEFAULT,
AudioFormat.ENCODING_PCM_8BIT,
AudioFormat.ENCODING_PCM_16BIT};
for (int i = 0; i < FREQUENCY.length && !found; i++) {
    for (int j = 0; j < CHANNEL_CONFIGURATION.length && !found; j++) {
        for (int k = 0; k < AUDIO_ENCODING.length && !found; k++) {
            try {
                bufferSize = AudioRecord.getMinBufferSize(FREQUENCY[i],
                        CHANNEL_CONFIGURATION[j], AUDIO_ENCODING[k]);
                if (bufferSize != AudioRecord.ERROR_BAD_VALUE && bufferSize != AudioRecord.ERROR) {
                    audioRecord = new AudioRecord(MediaRecorder.AudioSource.VOICE_DOWNLINK,
                            FREQUENCY[i], CHANNEL_CONFIGURATION[j], AUDIO_ENCODING[k], bufferSize);
                    // The constructor can succeed yet leave the recorder unusable,
                    // so verify the state before accepting this combination.
                    if (audioRecord.getState() == AudioRecord.STATE_INITIALIZED) {
                        found = true;
                    } else {
                        audioRecord.release();
                    }
                }
            } catch (Exception e) {
                Log.e(TAG, e.toString());
            }
        }
    }
}
No working parameter combination is found on API 19 or 22 to set up an AudioRecord; in every case an exception is raised.
I'm quite stuck on this. I'm not considering the MediaRecorder class, because it does not let me read a buffer directly from the recorder, and that is critical for the DTMF decoding process. I have also looked at some open-source DTMF decoders, but all of them have the same problem.
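For reference, the decoding step itself is not the problem; once samples can be read, a Goertzel filter per DTMF frequency over each buffer does the detection. A minimal sketch of that step (a hypothetical helper, independent of the failing setup code):

// Goertzel power of one target frequency over a block of 16-bit PCM samples.
// Hypothetical helper for illustration; not part of the failing code above.
private static double goertzelPower(short[] samples, int sampleRate, double targetFreq) {
    double coeff = 2.0 * Math.cos(2.0 * Math.PI * targetFreq / sampleRate);
    double s0, s1 = 0.0, s2 = 0.0;
    for (short sample : samples) {
        s0 = sample + coeff * s1 - s2;
        s2 = s1;
        s1 = s0;
    }
    // Squared magnitude of the filter output; compare it against a threshold.
    return s1 * s1 + s2 * s2 - coeff * s1 * s2;
}

Running it for the eight DTMF frequencies (697, 770, 852, 941, 1209, 1336, 1477 and 1633 Hz) on every buffer returned by audioRecord.read() gives one row and one column peak per key press, which is why direct buffer access is critical.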
Conclusion
This looks like an official Android bug (or a deliberate limitation).
First
In AudioRecord.java, the public constructor public AudioRecord(int audioSource, int sampleRateInHz, int channelConfig, int audioFormat, int bufferSizeInBytes) is, I think, not the one to rely on: it throws an IllegalArgumentException directly. There is another constructor, shown below, which is notably marked CANDIDATE FOR PUBLIC API:
/**
 * @hide
 * CANDIDATE FOR PUBLIC API
 * Class constructor with {@link AudioAttributes} and {@link AudioFormat}.
 * @param attributes a non-null {@link AudioAttributes} instance. Use
 * {@link AudioAttributes.Builder#setCapturePreset(int)} for configuring the capture
 * preset for this instance.
 * @param format a non-null {@link AudioFormat} instance describing the format of the data
 * that will be recorded through this AudioRecord. See {@link AudioFormat.Builder} for
 * configuring the audio format parameters such as encoding, channel mask and sample rate.
 * @param bufferSizeInBytes the total size (in bytes) of the buffer where audio data is written
 * to during the recording. New audio data can be read from this buffer in smaller chunks
 * than this size. See {@link #getMinBufferSize(int, int, int)} to determine the minimum
 * required buffer size for the successful creation of an AudioRecord instance. Using values
 * smaller than getMinBufferSize() will result in an initialization failure.
 * @param sessionId ID of audio session the AudioRecord must be attached to, or
 * {@link AudioManager#AUDIO_SESSION_ID_GENERATE} if the session isn't known at construction
 * time. See also {@link AudioManager#generateAudioSessionId()} to obtain a session ID before
 * construction.
 * @throws IllegalArgumentException
 */
public AudioRecord(AudioAttributes attributes, AudioFormat format, int bufferSizeInBytes, int sessionId) throws IllegalArgumentException {
}
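From an app (without rebuilding the framework), reflection is the only way to reach that constructor, since both it and setCapturePreset() are @hide. A sketch under assumptions: on stock builds setCapturePreset() will still reject VOICE_DOWNLINK with exactly the logged error, so this only helps on ROMs where the preset switch (see the third point below) has been extended:

import java.lang.reflect.Constructor;
import java.lang.reflect.Method;
import android.media.AudioAttributes;
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

// Sketch: reaches the @hide AudioRecord(AudioAttributes, AudioFormat, int, int)
// constructor via reflection. Hidden members can change or be blocked per version/ROM.
static AudioRecord newVoiceDownlinkRecord(int bufferSize) throws Exception {
    AudioAttributes.Builder ab = new AudioAttributes.Builder();
    Method setCapturePreset =
            AudioAttributes.Builder.class.getMethod("setCapturePreset", int.class);
    setCapturePreset.invoke(ab, MediaRecorder.AudioSource.VOICE_DOWNLINK); // preset 3

    AudioFormat format = new AudioFormat.Builder()
            .setSampleRate(44100)
            .setChannelMask(AudioFormat.CHANNEL_IN_MONO)
            .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
            .build();

    Constructor<AudioRecord> ctor = AudioRecord.class.getConstructor(
            AudioAttributes.class, AudioFormat.class, int.class, int.class);
    // 0 == AudioManager.AUDIO_SESSION_ID_GENERATE
    return ctor.newInstance(ab.build(), format, bufferSize, 0);
}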
Second
You can try the (hidden) VOICE_CALL audio source:
/** Voice call uplink + downlink audio source */
public static final int VOICE_CALL = 4;
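The constant is hidden on older SDKs, so pass the raw value and verify the recorder state; whether the platform actually permits it is entirely device- and ROM-dependent. A sketch (bufferSize as computed above):

// VOICE_CALL (= 4) is hidden on older SDKs; many stock ROMs reject it.
final int VOICE_CALL = 4;
AudioRecord record = new AudioRecord(VOICE_CALL, 44100,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize);
if (record.getState() != AudioRecord.STATE_INITIALIZED) {
    record.release();
    // Fall back to MediaRecorder.AudioSource.MIC or VOICE_COMMUNICATION.
}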
Third
The logged error itself comes from setCapturePreset() in AudioAttributes.java:
/**
 * @hide
 * Sets the capture preset.
 * Use this audio attributes configuration method when building an {@link AudioRecord}
 * instance with {@link AudioRecord#AudioRecord(AudioAttributes, AudioFormat, int)}.
 * @param preset one of {@link MediaRecorder.AudioSource#DEFAULT},
 * {@link MediaRecorder.AudioSource#MIC}, {@link MediaRecorder.AudioSource#CAMCORDER},
 * {@link MediaRecorder.AudioSource#VOICE_RECOGNITION} or
 * {@link MediaRecorder.AudioSource#VOICE_COMMUNICATION}.
 * @return the same Builder instance.
 */
@SystemApi
public Builder setCapturePreset(int preset) {
    //....
    Log.e(TAG, "Invalid capture preset " + preset + " for AudioAttributes");
}
Reference resources
AudioRecord.java
AudioAttributes.java
@SystemApi @hide
https://code.google.com/p/android/issues/detail?id=2117&q=call%20recorder&colspec=ID%20Type%20Status%20Owner%20Summary%20Stars
https://code.google.com/p/android/issues/detail?id=4075
Some Android versions have this feature disabled. If you build the Android source code you can make it work. I'm currently working with CyanogenMod, so I have customized the AudioAttributes.java class so that it does not raise an exception in this case.
We only have to change the setCapturePreset() method in AudioAttributes.java by adding all the audio sources we want to the switch/case structure.
This is the original:
/**
 * @hide
 * Sets the capture preset.
 * Use this audio attributes configuration method when building an {@link AudioRecord}
 * instance with {@link AudioRecord#AudioRecord(AudioAttributes, AudioFormat, int)}.
 * @param preset one of {@link MediaRecorder.AudioSource#DEFAULT},
 * {@link MediaRecorder.AudioSource#MIC}, {@link MediaRecorder.AudioSource#CAMCORDER},
 * {@link MediaRecorder.AudioSource#VOICE_RECOGNITION} or
 * {@link MediaRecorder.AudioSource#VOICE_COMMUNICATION}.
 * @return the same Builder instance.
 */
@SystemApi
public Builder setCapturePreset(int preset) {
switch (preset) {
case MediaRecorder.AudioSource.DEFAULT:
case MediaRecorder.AudioSource.MIC:
case MediaRecorder.AudioSource.CAMCORDER:
case MediaRecorder.AudioSource.VOICE_RECOGNITION:
case MediaRecorder.AudioSource.VOICE_COMMUNICATION:
mSource = preset;
break;
default:
Log.e(TAG, "Invalid capture preset " + preset + " for AudioAttributes");
}
return this;
}
And I replaced it with this:
/**
 * @hide
 * Sets the capture preset.
 * Use this audio attributes configuration method when building an {@link AudioRecord}
 * instance with {@link AudioRecord#AudioRecord(AudioAttributes, AudioFormat, int)}.
 * @param preset one of {@link MediaRecorder.AudioSource#DEFAULT},
 * {@link MediaRecorder.AudioSource#MIC}, {@link MediaRecorder.AudioSource#CAMCORDER},
 * {@link MediaRecorder.AudioSource#VOICE_RECOGNITION} or
 * {@link MediaRecorder.AudioSource#VOICE_COMMUNICATION}.
 * @return the same Builder instance.
 */
@SystemApi
public Builder setCapturePreset(int preset) {
switch (preset) {
case MediaRecorder.AudioSource.DEFAULT:
case MediaRecorder.AudioSource.MIC:
case MediaRecorder.AudioSource.CAMCORDER:
case MediaRecorder.AudioSource.VOICE_RECOGNITION:
case MediaRecorder.AudioSource.VOICE_COMMUNICATION:
case MediaRecorder.AudioSource.VOICE_DOWNLINK:
case MediaRecorder.AudioSource.VOICE_UPLINK:
case MediaRecorder.AudioSource.VOICE_CALL:
mSource = preset;
break;
default:
Log.e(TAG, "Invalid capture preset " + preset + " for AudioAttributes");
}
return this;
}
Related
I'm having trouble using NFC on Android. I want to send some data to an M1 (MIFARE Classic) card. I can send commands using the transceive() method, but transceive() takes a byte[] argument, and a byte is 8 bits. I want to send 0b1000000 (a 7-bit short frame) to my M1 card. How do I send that to the card?
/**
* Send raw NfcA data to a tag and receive the response.
*
 * <p>This is equivalent to connecting to this tag via {@link NfcA}
 * and calling {@link NfcA#transceive}. Note that all MIFARE Classic
 * tags are based on {@link NfcA} technology.
 *
 * <p>Use {@link #getMaxTransceiveLength} to retrieve the maximum number of bytes
 * that can be sent with {@link #transceive}.
 *
 * <p>This is an I/O operation and will block until complete. It must
 * not be called from the main application thread. A blocked call will be canceled with
 * {@link IOException} if {@link #close} is called from another thread.
 *
 * <p class="note">Requires the {@link android.Manifest.permission#NFC} permission.
 *
 * @see NfcA#transceive
*/
public byte[] transceive(byte[] data) throws IOException {
return transceive(data, true);
}
My code is:
/**
* Write a block of 16 byte data to tag.
 * @param sectorIndex The sector to where the data should be written
 * @param blockIndex The block to where the data should be written
 * @param data 16 byte of data.
 * @param key The MIFARE Classic key for the given sector.
 * @param useAsKeyB If true, key will be treated as key B
 * for authentication.
 * @return The return codes are:<br />
 * <ul>
 * <li>0 - Everything went fine.</li>
 * <li>1 - Sector index is out of range.</li>
 * <li>2 - Block index is out of range.</li>
 * <li>3 - Data are not 16 bytes.</li>
 * <li>4 - Authentication went wrong.</li>
 * <li>-1 - Error while writing to tag.</li>
 * </ul>
 * @see #authenticate(int, byte[], boolean)
*/
public int writeBlock(int sectorIndex, int blockIndex, byte[] data,
byte[] key, boolean useAsKeyB) {
if (getSectorCount()-1 < sectorIndex) {
return 1;
}
if (mMFC.getBlockCountInSector(sectorIndex)-1 < blockIndex) {
return 2;
}
if (data.length != 16) {
return 3;
}
int block = mMFC.sectorToBlock(sectorIndex) + blockIndex;
// Write block 0 of a "Chinese magic card" (UID-changeable MIFARE Classic clone).
if (block == 0) {
    // Write block.
    try {
        // HLTA (halt) command, followed by its CRC_A bytes.
        mMFC.transceive(new byte[]{(byte)0x50, (byte)0x00, (byte)0x57, (byte)0xCD});
        // Magic-card unlock command. TODO: this needs to be sent as the
        // 7-bit short frame 0b1000000, not the 8-bit 0b01000000.
        mMFC.transceive(new byte[]{(byte)0x40});
        mMFC.transceive(new byte[]{(byte)0x43});
        mMFC.writeBlock(block, data);
} catch (IOException e) {
Log.e(LOG_TAG, "Error while writing block to tag.", e);
return -1;
}
return 0;
}
if (!authenticate(sectorIndex, key, useAsKeyB)) {
return 4;
}
// Write block.
try {
mMFC.writeBlock(block, data);
} catch (IOException e) {
Log.e(LOG_TAG, "Error while writing block to tag.", e);
return -1;
}
return 0;
}
What you are trying to do is simply not possible on Android. The Android NFC API does not provide any method to send short frames (7 bit frame as defined in ISO/IEC 14443-3 Type A) directly.
In fact, the transceive() method of NfcA does not only send the byte array that is passed as its argument. Instead, it will also cause the CRC checksum to be automatically appended to the frame. As a consequence, you can only exchange normal frames (as defined in ISO/IEC 14443-3 Type A) using the transceive() method.
Since you are using MIFARE Classic (or rather some UID-changeable clones), you will experience even further limitations on Android devices that support MIFARE Classic (i.e. devices with an NFC chipset manufactured by NXP): if a tag is detected as MIFARE Classic, the NFC controller is switched into a special mode where it interprets high-level versions of the MIFARE Classic commands and automatically translates them to their low-level (and potentially encrypted) versions. This is necessary since MIFARE Classic does not fully follow the framing rules of ISO/IEC 14443-3 Type A. Unfortunately, this also typically prevents you from sending any raw frames directly to these tags at all.
I have an implementation of AES encryption and decryption in Kotlin, with CBC mode and PKCS5Padding. I noticed that while decrypting, cipherInputStream.read(buffer) reads only 512 bytes at a time instead of the full buffer size, which is 8192 bytes. Why is that? While encrypting, it uses the whole buffer.
These are the constants that I am using,
private val TRANSFORMATION = "AES/CBC/PKCS5Padding"
private var SECRET_KEY_FAC_ALGORITHM = "PBKDF2WithHmacSHA1"
private val SECRET_KEY_SPEC_ALGORITHM = "AES"
private val cipher = Cipher.getInstance(TRANSFORMATION)
private val random = SecureRandom()
private val KEY_BITS_LENGTH = 256
private val IV_BYTES_LENGTH = cipher.blockSize
private val SALT_BYTES_LENGTH = KEY_BITS_LENGTH / 8
private val ITERATIONS = 10000
Decryption code
cis = CipherInputStream(input, cipher)
val buffer = ByteArray(8192)
var read = cis.read(buffer)
while (read > -1) {
fos.write(buffer, 0, read)
read = cis.read(buffer)
}
Encryption code
fos.write(iv)
fos.write(salt)
cos = CipherOutputStream(fos, cipher)
val buffer = ByteArray(8192)
var read = input.read(buffer)
while (read > -1) {
cos.write(buffer, 0, read)
read = input.read(buffer)
}
Recently I had a similar issue.
The problem was the internal buffer of the CipherInputStream class, which is defined as follows:
private byte[] ibuffer = new byte[512];
What significantly improved decryption speed was increasing this buffer size to 8192, so I just copy-pasted the original CipherInputStream class into my own class and modified the buffer size.
What is funny is the comment above the ibuffer field:
the size 512 bytes is somewhat randomly chosen */
Hope it helps.
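If you would rather not copy the whole class, an alternative (a sketch of my own, not platform code) is to skip CipherInputStream entirely and drive the Cipher yourself with a large buffer:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.security.GeneralSecurityException;
import javax.crypto.Cipher;

// Manual decryption loop: feeds large chunks straight to the cipher,
// avoiding CipherInputStream's fixed 512-byte internal buffer.
// Assumes `cipher` is already initialized for decryption (key and IV set).
static void decrypt(InputStream in, OutputStream out, Cipher cipher)
        throws IOException, GeneralSecurityException {
    byte[] buffer = new byte[8192];
    int read;
    while ((read = in.read(buffer)) != -1) {
        byte[] chunk = cipher.update(buffer, 0, read);
        if (chunk != null) {
            out.write(chunk);
        }
    }
    out.write(cipher.doFinal()); // final (padded) block
}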
I just implemented the class by changing the length of ibuffer (a copy-paste with only that value changed).
import java.io.IOException;
import java.io.InputStream;
import javax.crypto.AEADBadTagException;
import javax.crypto.BadPaddingException;
import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;
import javax.crypto.IllegalBlockSizeException;
import javax.crypto.NullCipher;
import javax.crypto.ShortBufferException;
public class FasterCipherInputStream extends CipherInputStream {
private static final String TAG = "FasterCipherInputStream";
private static final int BUFFER_SIZE = 20971520; // 20 MiB instead of the stock 512 bytes
// the cipher engine to use to process stream data
private final Cipher cipher;
// the underlying input stream
private final InputStream input;
/* the buffer holding data that have been read in from the
underlying stream, but have not been processed by the cipher
engine. the size 512 bytes is somewhat randomly chosen */
private final byte[] ibuffer = new byte[BUFFER_SIZE];
// having reached the end of the underlying input stream
private boolean done = false;
/* the buffer holding data that have been processed by the cipher
engine, but have not been read out */
private byte[] obuffer;
// the offset pointing to the next "new" byte
private int ostart = 0;
// the offset pointing to the last "new" byte
private int ofinish = 0;
// stream status
private boolean closed = false;
/**
* private convenience function.
*
* Entry condition: ostart = ofinish
*
* Exit condition: ostart <= ofinish
*
* return (ofinish-ostart) (we have this many bytes for you)
* return 0 (no data now, but could have more later)
* return -1 (absolutely no more data)
*
* Note: Exceptions are only thrown after the stream is completely read.
* For AEAD ciphers a read() of any length will internally cause the
* whole stream to be read fully and verify the authentication tag before
* returning decrypted data or exceptions.
*/
private int getMoreData() throws IOException {
// Android-changed: The method was creating a new object every time update(byte[], int, int)
// or doFinal() was called, resulting in the old object being GCed. With doFinal(byte[], int)
// and update(byte[], int, int, byte[], int), we use the already initialized obuffer.
if (done) return -1;
ofinish = 0;
ostart = 0;
int expectedOutputSize = cipher.getOutputSize(ibuffer.length);
if (obuffer == null || expectedOutputSize > obuffer.length) {
obuffer = new byte[expectedOutputSize];
}
int readin = input.read(ibuffer);
if (readin == -1) {
done = true;
try {
// doFinal resets the cipher and it is the final call that is made. If there isn't
// any more byte available, it returns 0. In case of any exception is raised,
// obuffer will get reset and therefore, it is equivalent to no bytes returned.
ofinish = cipher.doFinal(obuffer, 0);
} catch (IllegalBlockSizeException | BadPaddingException e) {
obuffer = null;
throw new IOException(e);
} catch (ShortBufferException e) {
obuffer = null;
throw new IllegalStateException("ShortBufferException is not expected", e);
}
} else {
// update returns number of bytes stored in obuffer.
try {
ofinish = cipher.update(ibuffer, 0, readin, obuffer, 0);
} catch (IllegalStateException e) {
obuffer = null;
throw e;
} catch (ShortBufferException e) {
// Should not reset the value of ofinish as the cipher is still not invalidated.
obuffer = null;
throw new IllegalStateException("ShortBufferException is not expected", e);
}
}
return ofinish;
}
/**
* Constructs a CipherInputStream from an InputStream and a
* Cipher.
* <br>Note: if the specified input stream or cipher is
* null, a NullPointerException may be thrown later when
* they are used.
 * @param is the to-be-processed input stream
 * @param c an initialized Cipher object
*/
public FasterCipherInputStream(InputStream is, Cipher c) {
super(is);
input = is;
cipher = c;
}
/**
* Constructs a CipherInputStream from an InputStream without
* specifying a Cipher. This has the effect of constructing a
* CipherInputStream using a NullCipher.
* <br>Note: if the specified input stream is null, a
* NullPointerException may be thrown later when it is used.
 * @param is the to-be-processed input stream
*/
protected FasterCipherInputStream(InputStream is) {
super(is);
input = is;
cipher = new NullCipher();
}
/**
* Reads the next byte of data from this input stream. The value
* byte is returned as an <code>int</code> in the range
* <code>0</code> to <code>255</code>. If no byte is available
* because the end of the stream has been reached, the value
* <code>-1</code> is returned. This method blocks until input data
* is available, the end of the stream is detected, or an exception
* is thrown.
* <p>
*
 * @return the next byte of data, or <code>-1</code> if the end of the
 * stream is reached.
 * @exception IOException if an I/O error occurs.
 * @since JCE1.2
*/
public int read() throws IOException {
if (ostart >= ofinish) {
// we loop for new data as the spec says we are blocking
int i = 0;
while (i == 0) i = getMoreData();
if (i == -1) return -1;
}
return ((int) obuffer[ostart++] & 0xff);
}
/**
* Reads up to <code>b.length</code> bytes of data from this input
* stream into an array of bytes.
* <p>
* The <code>read</code> method of <code>InputStream</code> calls
* the <code>read</code> method of three arguments with the arguments
* <code>b</code>, <code>0</code>, and <code>b.length</code>.
*
 * @param b the buffer into which the data is read.
 * @return the total number of bytes read into the buffer, or
 * <code>-1</code> if there is no more data because the end of
 * the stream has been reached.
 * @exception IOException if an I/O error occurs.
 * @see java.io.InputStream#read(byte[], int, int)
 * @since JCE1.2
*/
public int read(byte b[]) throws IOException {
return read(b, 0, b.length);
}
/**
* Reads up to <code>len</code> bytes of data from this input stream
* into an array of bytes. This method blocks until some input is
* available. If the first argument is <code>null,</code> up to
* <code>len</code> bytes are read and discarded.
*
 * @param b the buffer into which the data is read.
 * @param off the start offset in the destination array
 * <code>buf</code>
 * @param len the maximum number of bytes read.
 * @return the total number of bytes read into the buffer, or
 * <code>-1</code> if there is no more data because the end of
 * the stream has been reached.
 * @exception IOException if an I/O error occurs.
 * @see java.io.InputStream#read()
 * @since JCE1.2
*/
public int read(byte b[], int off, int len) throws IOException {
if (ostart >= ofinish) {
// we loop for new data as the spec says we are blocking
int i = 0;
while (i == 0) i = getMoreData();
if (i == -1) return -1;
}
if (len <= 0) {
return 0;
}
int available = ofinish - ostart;
if (len < available) available = len;
if (b != null) {
System.arraycopy(obuffer, ostart, b, off, available);
}
ostart = ostart + available;
return available;
}
/**
* Skips <code>n</code> bytes of input from the bytes that can be read
* from this input stream without blocking.
*
* <p>Fewer bytes than requested might be skipped.
* The actual number of bytes skipped is equal to <code>n</code> or
* the result of a call to
 * {@link #available() available},
* whichever is smaller.
* If <code>n</code> is less than zero, no bytes are skipped.
*
* <p>The actual number of bytes skipped is returned.
*
 * @param n the number of bytes to be skipped.
 * @return the actual number of bytes skipped.
 * @exception IOException if an I/O error occurs.
 * @since JCE1.2
*/
public long skip(long n) throws IOException {
int available = ofinish - ostart;
if (n > available) {
n = available;
}
if (n < 0) {
return 0;
}
ostart += n;
return n;
}
/**
* Returns the number of bytes that can be read from this input
* stream without blocking. The <code>available</code> method of
* <code>InputStream</code> returns <code>0</code>. This method
* <B>should</B> be overridden by subclasses.
*
 * @return the number of bytes that can be read from this input stream
 * without blocking.
 * @exception IOException if an I/O error occurs.
 * @since JCE1.2
*/
public int available() throws IOException {
return (ofinish - ostart);
}
/**
* Closes this input stream and releases any system resources
* associated with the stream.
* <p>
* The <code>close</code> method of <code>CipherInputStream</code>
* calls the <code>close</code> method of its underlying input
* stream.
*
 * @exception IOException if an I/O error occurs.
 * @since JCE1.2
*/
public void close() throws IOException {
if (closed) {
return;
}
closed = true;
input.close();
// Android-removed: Removed a now-inaccurate comment
if (!done) {
try {
cipher.doFinal();
}
catch (BadPaddingException | IllegalBlockSizeException ex) {
// Android-changed: Added throw if bad tag is seen. See b/31590622.
if (ex instanceof AEADBadTagException) {
throw new IOException(ex);
}
}
}
ostart = 0;
ofinish = 0;
}
/**
* Tests if this input stream supports the <code>mark</code>
* and <code>reset</code> methods, which it does not.
*
 * @return <code>false</code>, since this class does not support the
 * <code>mark</code> and <code>reset</code> methods.
 * @see java.io.InputStream#mark(int)
 * @see java.io.InputStream#reset()
 * @since JCE1.2
*/
public boolean markSupported() {
return false;
}
}
It worked fine for me while decrypting a file over 30 MB. I hope someone can spot any remaining flaws, but it worked really well in my case.
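Usage is a drop-in replacement for CipherInputStream; for example (file names are placeholders, and the cipher must already be initialized for decryption):

// Usage sketch; imports: java.io.*, javax.crypto.Cipher.
// `encryptedFile`, `plainFile` and the initialized `cipher` are placeholders.
try (InputStream cis = new FasterCipherInputStream(new FileInputStream(encryptedFile), cipher);
     OutputStream out = new FileOutputStream(plainFile)) {
    byte[] buffer = new byte[8192];
    int read;
    while ((read = cis.read(buffer)) != -1) {
        out.write(buffer, 0, read);
    }
}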
Edit: Sorry, somehow I missed that the answer above says the same thing. Keeping this for others in case they just need something to copy. Thanks.
Whatever resources I have found on the internet initialize Vulkan by creating a native activity and providing android_app->window for creating the VkAndroidSurfaceKHR. So I just want to know: can we have a window manager that supplies this window for surface creation?
To create a VkAndroidSurfaceKHR from a plain Java app, you take the android.view.Surface backing your View (for example, from a SurfaceView's SurfaceHolder) and pass it to a native call that invokes ANativeWindow_fromSurface(env, surface).
Note that View and its subclasses are able to display GPU-rendered 3D content, both OpenGL and Vulkan.
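The Java side can be as small as this sketch (it assumes a SurfaceView in your layout; nativeCreateVulkanSurface and nativeDestroyVulkanSurface are hypothetical JNI entry points of your own):

// Hand the SurfaceView's Surface to native code, which then calls
// ANativeWindow_fromSurface(env, surface) and vkCreateAndroidSurfaceKHR.
surfaceView.getHolder().addCallback(new SurfaceHolder.Callback() {
    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        nativeCreateVulkanSurface(holder.getSurface()); // hypothetical JNI method
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
        // Recreate the swapchain here if the size changed.
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        nativeDestroyVulkanSurface(); // release the VkSurfaceKHR and the ANativeWindow
    }
});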
I did it this way in my API, around line 9100:
/**
* Get display handles for Android and AWT Canvas
 * @param win - a java.awt.Canvas instance or an android.view.Surface
 * @param displayHandles - return native surface handle
 *
 * @return true if all goes OK.
*/
protected static native boolean getDisplayHandles0(Object win, long[] displayHandles);/*
#ifdef VK_USE_PLATFORM_ANDROID_KHR
ANativeWindow* window;
// Return the ANativeWindow associated with a Java Surface object,
// for interacting with it through native code. This acquires a reference
// on the ANativeWindow that is returned; be sure to use ANativeWindow_release()
// when done with it so that it doesn't leak.
window = ANativeWindow_fromSurface(env, win);
displayHandles[0] = reinterpret_cast<jlong>(window);
return JNI_TRUE;
#else
...
#endif
*/
}
I also implemented this in another way, around line 10370 of the same source:
/**
*
 * @see http://www.javaworld.com/article/2075263/core-java/embed-java-code-into-your-native-apps.html
 *
 * @param instance - Vulkan instance
 * @param nativeWindow - instance of android.view.Surface or java.awt.Canvas
 * @param pAllocatorHandle - native handle to a VkAllocationCallbacks
 * @param pSurface
 * @return
*/
protected static native int vkCreateWindowSurface0(long instance,
Object nativeWindow,
long pAllocatorHandle,
long[] pSurface,
long[] awtDrawingSurface);/*
VkAllocationCallbacks* pAllocator = reinterpret_cast<VkAllocationCallbacks*>(pAllocatorHandle);
VkInstance vkInstance = reinterpret_cast<VkInstance>(instance);
VkSurfaceKHR* _pSurface = new VkSurfaceKHR[1];
VkResult res = VkResult::VK_ERROR_NATIVE_WINDOW_IN_USE_KHR;
#ifdef VK_USE_PLATFORM_ANDROID_KHR
ANativeWindow* window = NULL;
window = ANativeWindow_fromSurface(env, nativeWindow);
if (window == NULL)
return VkResult::VK_ERROR_NATIVE_WINDOW_IN_USE_KHR;
VkAndroidSurfaceCreateInfoKHR info;
info.sType = VkStructureType::VK_STRUCTURE_TYPE_ANDROID_SURFACE_CREATE_INFO_KHR;
info.pNext = NULL;
info.flags = 0;
info.window = window;
res = vkCreateAndroidSurfaceKHR(vkInstance, &info, pAllocator, _pSurface);
#else
...
#endif
if(res >= 0){
pSurface[0] = reinterpret_cast<jlong>(_pSurface[0]);
}else{
pSurface[0] = (jlong)0;
fprintf(stderr,"Failed to create Vulkan SurfaceKHR.");
}
delete[] _pSurface;
return res;
}
I am currently playing around with the idea of an Android application which involves encryption. I am planning to use AES in CTR mode and PBKDF2 with Whirlpool for key stretching.
I am going to bundle a current Bouncy Castle implementation instead of Android's old built-in one, to make sure it works as intended on any Android version.
But I am having some problems figuring out a solid way to generate random numbers for the salt and key. I have read somewhere that the built-in SecureRandom in Android is insecure in some old Android versions, and I have also heard that most Android phones have a hard time keeping high entropy in /dev/random and block often. Shouldn't that have a huge impact on the security of /dev/urandom?
I am therefore looking for good ways to use the sensors on the phone to gather more entropy.
The following classes should help you alleviate issues with the Android SecureRandom class. The code is written out in full rather than described in prose, because otherwise the small details would get lost.
/**
* A strengthener that can be used to generate and re-seed random number
* generators that do not seed themselves appropriately.
*
 * @author owlstead
*/
public class SecureRandomStrengthener {
private static final String DEFAULT_PSEUDO_RANDOM_NUMBER_GENERATOR = "SHA1PRNG";
private static final EntropySource TIME_ENTROPY_SOURCE = new EntropySource() {
final ByteBuffer timeBuffer = ByteBuffer.allocate(Long.SIZE / Byte.SIZE
* 2);
@Override
public ByteBuffer provideEntropy() {
this.timeBuffer.clear();
this.timeBuffer.putLong(System.currentTimeMillis());
this.timeBuffer.putLong(System.nanoTime());
this.timeBuffer.flip();
return this.timeBuffer;
}
};
private final String algorithm;
private final List<EntropySource> entropySources = new LinkedList<EntropySource>();
private final MessageDigest digest;
private final ByteBuffer seedBuffer;
/**
 * Generates an instance of a {@link SecureRandomStrengthener} that
 * generates and re-seeds instances of {@code "SHA1PRNG"}.
 *
 * @return the strengthener, never null
*/
public static SecureRandomStrengthener getInstance() {
return new SecureRandomStrengthener(
DEFAULT_PSEUDO_RANDOM_NUMBER_GENERATOR);
}
/**
 * Generates an instance of a {@link SecureRandomStrengthener} that
 * generates instances of the given argument. Note that the availability of
 * the given algorithm argument is not tested until generation.
 *
 * @param algorithm
 *            the algorithm indicating the {@link SecureRandom} instance to
 *            use
 * @return the strengthener, never null
*/
public static SecureRandomStrengthener getInstance(final String algorithm) {
return new SecureRandomStrengthener(algorithm);
}
private SecureRandomStrengthener(final String algorithm) {
if (algorithm == null || algorithm.length() == 0) {
throw new IllegalArgumentException(
"Please provide a PRNG algorithm string such as SHA1PRNG");
}
this.algorithm = algorithm;
try {
this.digest = MessageDigest.getInstance("SHA1");
} catch (final NoSuchAlgorithmException e) {
throw new IllegalStateException(
"MessageDigest to create seed not available", e);
}
this.seedBuffer = ByteBuffer.allocate(this.digest.getDigestLength());
}
/**
* Add an entropy source, which will be called for each generation and
* re-seeding of the given random number generator.
*
 * @param source
* the source of entropy
*/
public void addEntropySource(final EntropySource source) {
if (source == null) {
throw new IllegalArgumentException(
"EntropySource should not be null");
}
this.entropySources.add(source);
}
/**
* Generates and seeds a random number generator of the configured
 * algorithm. Calls the {@link EntropySource#provideEntropy()} method of all
 * added sources of entropy.
 *
 * @return the random number generator
*/
public SecureRandom generateAndSeedRandomNumberGenerator() {
final SecureRandom secureRandom;
try {
secureRandom = SecureRandom.getInstance(this.algorithm);
} catch (final NoSuchAlgorithmException e) {
throw new IllegalStateException("PRNG is not available", e);
}
reseed(secureRandom);
return secureRandom;
}
/**
* Re-seeds the random number generator. Calls the
 * {@link EntropySource#provideEntropy()} method of all added sources of
 * entropy.
 *
 * @param secureRandom
* the random number generator to re-seed
*/
public void reseed(final SecureRandom secureRandom) {
this.seedBuffer.clear();
secureRandom.nextBytes(this.seedBuffer.array());
for (final EntropySource source : this.entropySources) {
final ByteBuffer entropy = source.provideEntropy();
if (entropy == null) {
continue;
}
final ByteBuffer wipeBuffer = entropy.duplicate();
this.digest.update(entropy);
wipe(wipeBuffer);
}
this.digest.update(TIME_ENTROPY_SOURCE.provideEntropy());
this.digest.update(this.seedBuffer);
this.seedBuffer.clear();
// remove data from seedBuffer so it won't be retrievable
// reuse
try {
this.digest.digest(this.seedBuffer.array(), 0,
this.seedBuffer.capacity());
} catch (final DigestException e) {
throw new IllegalStateException(
"DigestException should not be thrown", e);
}
secureRandom.setSeed(this.seedBuffer.array());
wipe(this.seedBuffer);
}
private void wipe(final ByteBuffer buf) {
while (buf.hasRemaining()) {
buf.put((byte) 0);
}
}
}
And this is the small interface that is EntropySource:
/**
* A simple interface that can be used to retrieve entropy from any source.
*
 * @author owlstead
*/
public interface EntropySource {
/**
* Retrieves the entropy.
* The position of the ByteBuffer must be advanced to the limit by any users calling this method.
* The values of the bytes between the position and limit should be set to zero by any users calling this method.
*
 * @return entropy within the position and limit of the given buffer
*/
ByteBuffer provideEntropy();
}
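To feed sensor data in, as the question asks, an EntropySource can wrap accelerometer readings. A sketch under assumptions (a SensorManager obtained from getSystemService(), readings arriving asynchronously; sensor noise is mixed in on top of, never instead of, the PRNG's own seeding):

import java.nio.ByteBuffer;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Hypothetical accelerometer-backed EntropySource. Sensor noise is NOT
// uniformly random; it only adds entropy on top of the PRNG's own seed,
// which is exactly how SecureRandomStrengthener mixes it in.
public class AccelerometerEntropySource implements EntropySource, SensorEventListener {
    private final ByteBuffer buffer = ByteBuffer.allocate(3 * Float.SIZE / Byte.SIZE);
    private volatile float x, y, z;

    public AccelerometerEntropySource(SensorManager manager) {
        Sensor accel = manager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        manager.registerListener(this, accel, SensorManager.SENSOR_DELAY_NORMAL);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        x = event.values[0];
        y = event.values[1];
        z = event.values[2];
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }

    @Override
    public ByteBuffer provideEntropy() {
        buffer.clear();
        // The low-order bits of the float readings carry the measurement noise.
        buffer.putFloat(x).putFloat(y).putFloat(z);
        buffer.flip();
        return buffer;
    }
}

Register it with strengthener.addEntropySource(new AccelerometerEntropySource(sensorManager)) before calling generateAndSeedRandomNumberGenerator().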
Note that the output of the classes has not been tested for randomness (but this relies mainly on the returned SecureRandom class and should therefore be fine).
Finally, as I don't have the Android 1.6 runtime ready, somebody should test it against this or a lower version for compatibility (!).
I would like to play the native camera shutter sound clip on camera preview capture. I'm referring to the sound clip played when takePicture() is called.
How could I do that? Can someone walk me through the steps?
You can use the MediaActionSound class (available from API 16). For example:
MediaActionSound sound = new MediaActionSound();
sound.play(MediaActionSound.SHUTTER_CLICK);
However, for some baffling reason, the Google developer who made this class decided that blasting the sounds at 100% volume is a good idea. Unfortunately, a lot of the class is private, and it is not possible to enhance it simply by extending it. That's why I created a modified version (converted to Kotlin) of this class which respects a given stream volume (e.g. media, notification, etc.) or allows setting an arbitrary volume. Please note that there is no guarantee this will not break in the future, since the file paths to the system sounds are hardcoded. On the other hand, the worst implication this can have is that a sound will not be played. I will also create unit tests for this, and if it breaks I will update the code here.
Simply call playWithStreamVolume to use a stream volume, or call play with an optional arbitrary volume. Do not forget to update the package.
/*
* Copyright (C) 2012 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.your.package
import android.content.Context
import android.content.Context.AUDIO_SERVICE
import android.media.AudioAttributes
import android.media.AudioManager
import android.media.SoundPool
import android.util.Log
import java.lang.IllegalStateException
/**
* Modified by Miloš Černilovský on March 16 2021: converted original class to Kotlin,
* fixed a minor bug and added support for respecting stream volume or setting an arbitrary
* volume. New methods added: [playWithStreamVolume], volume parameters added to [play].
*
* A class for producing sounds that match those produced by various actions
* taken by the media and camera APIs.
*
* This class is recommended for use with the [android.hardware.camera2] API, since the
* camera2 API does not play any sounds on its own for any capture or video recording actions.
*
* With the older [android.hardware.Camera] API, use this class to play an appropriate
* camera operation sound when implementing a custom still or video recording mechanism (through the
* Camera preview callbacks with
* [Camera.setPreviewCallback][android.hardware.Camera.setPreviewCallback], or through GPU
* processing with [Camera.setPreviewTexture][android.hardware.Camera.setPreviewTexture], for
* example), or when implementing some other camera-like function in your application.
*
* There is no need to play sounds when using
* [Camera.takePicture][android.hardware.Camera.takePicture] or
* [android.media.MediaRecorder] for still images or video, respectively,
* as the Android framework will play the appropriate sounds when needed for
* these calls.
*/
#Suppress("MemberVisibilityCanBePrivate", "unused")
class MediaActionSound {
private val sounds: Array<SoundState> = SOUND_FILES.indices.map {
SoundState(it)
}.toTypedArray()
private val loadCompleteListener = SoundPool.OnLoadCompleteListener { soundPool, sampleId, status ->
for (sound in sounds) {
if (sound.id != sampleId) {
continue
}
var playSoundId = 0
synchronized(sound) {
if (status != 0) {
sound.state = STATE_NOT_LOADED
sound.id = 0
Log.e(TAG, "OnLoadCompleteListener() error: " + status +
" loading sound: " + sound.name)
return@OnLoadCompleteListener
}
when (sound.state) {
STATE_LOADING -> sound.state = STATE_LOADED
STATE_LOADING_PLAY_REQUESTED -> {
playSoundId = sound.id
sound.state = STATE_LOADED
}
else -> Log.e(TAG, "OnLoadCompleteListener() called in wrong state: "
+ sound.state + " for sound: " + sound.name)
}
}
if (playSoundId != 0) {
soundPool.play(playSoundId, sound.volumeLeft, sound.volumeRight, 0, 0, 1.0f)
}
break
}
}
private var _soundPool: SoundPool? = SoundPool.Builder()
.setMaxStreams(NUM_MEDIA_SOUND_STREAMS)
.setAudioAttributes(AudioAttributes.Builder()
.setUsage(AudioAttributes.USAGE_ASSISTANCE_SONIFICATION)
.setFlags(AudioAttributes.FLAG_AUDIBILITY_ENFORCED)
.setContentType(AudioAttributes.CONTENT_TYPE_SONIFICATION)
.build())
.build().also {
it.setOnLoadCompleteListener(loadCompleteListener)
}
private val soundPool: SoundPool
get() {
return _soundPool ?: throw IllegalStateException("SoundPool has been released. This class mustn't be used after release() is called.")
}
private inner class SoundState(val name: Int) {
var id = 0
// 0 is an invalid sample ID.
var state: Int = STATE_NOT_LOADED
var volumeLeft: Float = 1f
var volumeRight: Float = 1f
}
/**
* Construct a new MediaActionSound instance. Only a single instance is
* needed for playing any platform media action sound; you do not need a
* separate instance for each sound type.
*/
#Suppress("ConvertSecondaryConstructorToPrimary", "RemoveEmptySecondaryConstructorBody")
constructor() {
}
private fun loadSound(sound: SoundState): Int {
val soundFileName = SOUND_FILES[sound.name]
for (soundDir in SOUND_DIRS) {
val id = soundPool.load(soundDir + soundFileName, 1)
if (id > 0) {
sound.state = STATE_LOADING
sound.id = id
return id
}
}
return 0
}
/**
* Preload a predefined platform sound to minimize latency when the sound is
* played later by [playWithStreamVolume].
 * @param soundName The type of sound to preload, selected from
 * SHUTTER_CLICK, FOCUS_COMPLETE, START_VIDEO_RECORDING, or
 * STOP_VIDEO_RECORDING.
 * @return True if the sound was successfully loaded.
 * @see playWithStreamVolume
 * @see SHUTTER_CLICK
 * @see FOCUS_COMPLETE
 * @see START_VIDEO_RECORDING
 * @see STOP_VIDEO_RECORDING
*/
fun load(soundName: Int): Boolean {
if (soundName < 0 || soundName >= sounds.size) {
throw RuntimeException("Unknown sound requested: $soundName")
}
val sound = sounds[soundName]
return synchronized(sound) {
when (sound.state) {
STATE_NOT_LOADED -> {
if (loadSound(sound) <= 0) {
Log.e(TAG, "load() error loading sound: $soundName")
false
} else {
true
}
}
else -> {
Log.e(TAG, "load() called in wrong state: $sound for sound: $soundName")
false
}
}
}
}
/**
* Attempts to retrieve [AudioManager] from the given [context] and plays the given sound with the given [streamType] volume.
* If retrieving volume is not successful, [defaultVolume] is used. Finally calls [play].
 * @param streamType One of the [AudioManager] constants beginning with the "STREAM_" prefix, e.g. [AudioManager.STREAM_MUSIC]
*/
fun playWithStreamVolume(soundName: Int, context: Context, streamType: Int = AudioManager.STREAM_MUSIC, defaultVolume: Float = 1f) {
playWithStreamVolume(soundName, context.getSystemService(AUDIO_SERVICE) as AudioManager?, streamType, defaultVolume)
}
/**
* Plays the given sound with the given [streamType] volume. If retrieving volume is not successful,
* [defaultVolume] is used. Finally calls [play].
 * @param streamType One of the [AudioManager] constants beginning with the "STREAM_" prefix, e.g. [AudioManager.STREAM_MUSIC]
*/
fun playWithStreamVolume(soundName: Int, audioManager: AudioManager?, streamType: Int = AudioManager.STREAM_MUSIC, defaultVolume: Float = 1f) {
val volume = audioManager?.let { it.getStreamVolume(streamType) / it.getStreamMaxVolume(streamType).toFloat() } ?: defaultVolume
play(soundName, volume, volume)
}
/**
* Play one of the predefined platform sounds for media actions.
*
* Use this method to play a platform-specific sound for various media
* actions. The sound playback is done asynchronously, with the same
* behavior and content as the sounds played by
* [Camera.takePicture][android.hardware.Camera.takePicture],
* [MediaRecorder.start][android.media.MediaRecorder.start], and
* [MediaRecorder.stop][android.media.MediaRecorder.stop].
*
* With the [camera2][android.hardware.camera2] API, this method can be used to play
* standard camera operation sounds with the appropriate system behavior for such sounds.
*
* With the older [android.hardware.Camera] API, using this method makes it easy to
* match the default device sounds when recording or capturing data through the preview
* callbacks, or when implementing custom camera-like features in your application.
*
* If the sound has not been loaded by [load] before calling play,
* play will load the sound at the cost of some additional latency before
* sound playback begins.
*
 * @param soundName The type of sound to play, selected from
 * [SHUTTER_CLICK], [FOCUS_COMPLETE], [START_VIDEO_RECORDING], or
 * [STOP_VIDEO_RECORDING].
 * @param leftVolume left volume value (range = 0.0 to 1.0)
 * @param rightVolume right volume value (range = 0.0 to 1.0)
 * @see android.hardware.Camera.takePicture
 * @see android.media.MediaRecorder
 * @see SHUTTER_CLICK
 * @see FOCUS_COMPLETE
 * @see START_VIDEO_RECORDING
 * @see STOP_VIDEO_RECORDING
*/
@JvmOverloads // for backward Java compatibility
fun play(soundName: Int, leftVolume: Float = 1f, rightVolume: Float = leftVolume) {
if (soundName < 0 || soundName >= SOUND_FILES.size) {
throw RuntimeException("Unknown sound requested: $soundName")
}
val sound = sounds[soundName]
synchronized(sound) {
when (sound.state) {
STATE_NOT_LOADED -> {
if (loadSound(sound) <= 0) {
Log.e(TAG, "play() error loading sound: $soundName")
} else {
setRequestPlayStatus(sound, leftVolume, rightVolume)
}
}
STATE_LOADING -> setRequestPlayStatus(sound, leftVolume, rightVolume)
STATE_LOADED -> soundPool.play(sound.id, leftVolume, rightVolume, 0, 0, 1.0f)
else -> Log.e(TAG, "play() called in wrong state: " + sound.state + " for sound: " + soundName)
}
}
}
private fun setRequestPlayStatus(sound: SoundState, leftVolume: Float, rightVolume: Float) {
with(sound) {
state = STATE_LOADING_PLAY_REQUESTED
volumeLeft = leftVolume
volumeRight = rightVolume
}
}
/**
* Free up all audio resources used by this MediaActionSound instance. Do
* not call any other methods on a MediaActionSound instance after calling
* release().
*/
fun release() {
_soundPool?.let {
for (sound in sounds) {
synchronized(sound) {
sound.state = STATE_NOT_LOADED
sound.id = 0
}
}
it.release()
_soundPool = null
}
}
companion object {
private const val NUM_MEDIA_SOUND_STREAMS = 1
private val SOUND_DIRS = arrayOf(
"/product/media/audio/ui/",
"/system/media/audio/ui/")
private val SOUND_FILES = arrayOf(
"camera_click.ogg",
"camera_focus.ogg",
"VideoRecord.ogg",
"VideoStop.ogg"
)
private const val TAG = "MediaActionSound"
/**
* The sound used by
* [Camera.takePicture][android.hardware.Camera.takePicture] to
* indicate still image capture.
 * @see playWithStreamVolume
*/
const val SHUTTER_CLICK = 0
/**
* A sound to indicate that focusing has completed. Because deciding
* when this occurs is application-dependent, this sound is not used by
* any methods in the media or camera APIs.
 * @see playWithStreamVolume
*/
const val FOCUS_COMPLETE = 1
/**
* The sound used by
* [MediaRecorder.start()][android.media.MediaRecorder.start] to
* indicate the start of video recording.
 * @see playWithStreamVolume
*/
const val START_VIDEO_RECORDING = 2
/**
* The sound used by
* [MediaRecorder.stop()][android.media.MediaRecorder.stop] to
* indicate the end of video recording.
 * @see playWithStreamVolume
*/
const val STOP_VIDEO_RECORDING = 3
/**
* States for SoundState.
* STATE_NOT_LOADED : sample not loaded
* STATE_LOADING : sample being loaded: waiting for load completion callback
* STATE_LOADING_PLAY_REQUESTED : sample being loaded and playback request received
* STATE_LOADED : sample loaded, ready for playback
*/
private const val STATE_NOT_LOADED = 0
private const val STATE_LOADING = 1
private const val STATE_LOADING_PLAY_REQUESTED = 2
private const val STATE_LOADED = 3
}
}
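A minimal usage sketch from Java (assuming the class above is on your classpath and `context` is a valid Context):

// Load once (e.g. in onCreate) to avoid latency on the first shot.
MediaActionSound shutter = new MediaActionSound();
shutter.load(MediaActionSound.SHUTTER_CLICK);

// On capture: play at the current media-stream volume.
shutter.playWithStreamVolume(MediaActionSound.SHUTTER_CLICK, context,
        AudioManager.STREAM_MUSIC, 1f);

// When done (e.g. in onDestroy):
shutter.release();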
I've also offered this class to Google so please upvote the issue.
If the system file is there, you can use it like this:
private MediaPlayer _shootMP; // created lazily on first use

public void shootSound()
{
    AudioManager meng = (AudioManager) getContext().getSystemService(Context.AUDIO_SERVICE);
    int volume = meng.getStreamVolume(AudioManager.STREAM_NOTIFICATION);
    if (volume != 0)
    {
        if (_shootMP == null)
            _shootMP = MediaPlayer.create(getContext(),
                    Uri.parse("file:///system/media/audio/ui/camera_click.ogg"));
        if (_shootMP != null)
            _shootMP.start();
    }
}
You may want to use SoundPool
SoundPool soundPool = new SoundPool(1, AudioManager.STREAM_NOTIFICATION, 0);
int shutterSound = soundPool.load(this, R.raw.camera_click, 0);
and then to play the sound
soundPool.play(shutterSound, 1f, 1f, 0, 0, 1);
Check out http://developer.android.com/reference/android/media/SoundPool.html to understand the parameters.
You will need a media file called camera_click.ogg in your project at res/raw. You should be able to use the Android default sound, which can be obtained from the Android Open Source Project at frameworks/base/data/sounds/effects/camera_click.ogg, if your project is licensed under the Apache license. If your project isn't licensed under the Apache license, I have no idea whether you can use it or not; I am not a lawyer.
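On API 21 and above the SoundPool constructor used here is deprecated; a Builder-based equivalent looks like this (sketch):

// API 21+ replacement for the deprecated SoundPool(int, int, int) constructor.
AudioAttributes attrs = new AudioAttributes.Builder()
        .setUsage(AudioAttributes.USAGE_ASSISTANCE_SONIFICATION)
        .setContentType(AudioAttributes.CONTENT_TYPE_SONIFICATION)
        .build();
SoundPool soundPool = new SoundPool.Builder()
        .setMaxStreams(1)
        .setAudioAttributes(attrs)
        .build();
int shutterSound = soundPool.load(this, R.raw.camera_click, 1);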
This resource explains how to play audio files:
https://developer.android.com/guide/topics/media/index.html
You'll probably have to provide your own shutter sound effect.
This snippet plays the sound only when the ringer mode is normal, not silent or vibrate.
private var audioManager: AudioManager? = null
private var mediaPlayer: MediaPlayer? = null
private fun initAudio() {
Log.v(LOG_TAG, "initAudio")
audioManager ?: let {
audioManager = context!!.getSystemService(Context.AUDIO_SERVICE) as AudioManager?
}
mediaPlayer = try {
MediaPlayer().apply {
if (Build.VERSION.SDK_INT >= 21) {
val audioAttributes = AudioAttributes.Builder()
.setUsage(AudioAttributes.USAGE_MEDIA)
.setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
.build()
setAudioAttributes(audioAttributes)
} else {
setAudioStreamType(AudioManager.STREAM_MUSIC)
}
if (Build.VERSION.SDK_INT <= 28) {
setDataSource(context!!, Uri.parse("file:///system/media/audio/ui/camera_click.ogg"))
} else {
setDataSource(context!!, Uri.parse("file:///system/product/media/audio/ui/camera_click.ogg"))
}
prepare()
}
} catch (e: Exception) {
Log.e(LOG_TAG, "initAudio", e)
null
}
}
private fun playClickSound() {
if (audioManager?.ringerMode == AudioManager.RINGER_MODE_NORMAL) {
mediaPlayer?.start()
}
}
// Assumes `sound` is a MediaActionSound instance that has already been created (see above).
AudioManager meng = (AudioManager) getContext().getSystemService(Context.AUDIO_SERVICE);
int volume = meng.getStreamVolume(AudioManager.STREAM_NOTIFICATION);
if (volume != 0)
    sound.play(MediaActionSound.SHUTTER_CLICK);