I need to use a random function, but I also need it to produce the same results on different devices (PC / iOS / Android).
I'm running this sample code to shuffle a vector:
#include <algorithm>
#include <iostream>
#include <iterator>
#include <random>
#include <vector>
int main() {
    std::mt19937 generator(1337);
    std::cout << "Your seed produced: " << generator() << std::endl;

    std::vector<int> v = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
    std::shuffle(v.begin(), v.end(), generator);

    std::copy(v.begin(), v.end(), std::ostream_iterator<int>(std::cout, " "));
    std::cout << "\n";
    return 0;
}
Output from two different PCs (Windows):
Your seed produced: 1125387415
10 6 8 1 7 2 4 3 5 9
Output from iOS:
Your seed produced: 1125387415
9 1 4 6 7 8 5 3 10 2
Why am I getting different results?
Is there another dependency relating to the OS itself?
How is it possible to get this to work cross-platform?
std::mt19937 is rigorously defined by the standard and leaves no room for platform-specific or implementation-defined behaviour; your problem doesn't lie there.
The problem is with std::shuffle, which in no way specifies how it is supposed to use the random number generator, just that it has to use it.
Which unfortunately means that if you want reproducible shuffling behaviour, you might need to implement your own.
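For example, here is a minimal, self-contained sketch of such a hand-rolled shuffle (not the standard library's algorithm; it drives std::mt19937 directly and avoids std::uniform_int_distribution, whose algorithm is implementation-defined, at the cost of a tiny modulo bias):

#include <cstddef>
#include <random>
#include <utility>
#include <vector>

// Fisher-Yates shuffle that consumes the generator in a fixed way, so the
// result is identical on every conforming platform for the same seed.
template <typename T>
void portable_shuffle(std::vector<T>& v, std::mt19937& gen) {
    if (v.empty()) return;
    for (std::size_t i = v.size() - 1; i > 0; --i) {
        std::size_t j = static_cast<std::size_t>(gen() % (i + 1));  // slight modulo bias
        std::swap(v[i], v[j]);
    }
}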
Also note that the third argument changed between the old std::random_shuffle and the newer std::shuffle: std::random_shuffle took a function object returning a value in the range [0, n), so you could wrap the bounded draw yourself (for instance around a std::uniform_int_distribution), whereas std::shuffle takes a uniform random bit generator such as std::mt19937 directly. Which one does your library expect?
On Android, we have a C++ native application that creates a new thread using the POSIX pthread library. We are using pthread_attr_setstacksize to set the stack size to 512K (before we create the new thread), but the stack size always defaults to 1M.
Following is the sample code:
1 #include <iostream>
2 #include <pthread.h>
3 #include <sys/time.h>
4 #include <sys/resource.h>
5
6 void* Function(void *ptr)
7 {
8 pthread_attr_t attr;
9 pthread_attr_init(&attr);
10 size_t get_default_size;
11 pthread_attr_getstacksize(&attr, &get_default_size);
12 std::cout<<pthread_self()<<" Stack size = "<<get_default_size<<std::endl;
13 return NULL;
14 }
15
16 int main ( int argc, char *argv[] )
17 {
18 pthread_attr_t attr;
19 pthread_attr_init(&attr);
20 if ( pthread_attr_setstacksize(&attr, 1024 * 512) == 0)
21 std::cout<<"Setting stack size successful"<<std::endl;
22
23 pthread_t thread_id;
24 /* creating a new thread with thread stack size set */
25 pthread_create(&thread_id, &attr, &Function, NULL);
26 pthread_join(thread_id,0);
27 }
So, when I run the above code I always get the following output:
CT60-L1-C:/data/data/files $ ulimit -s
8192
CT60-L1-C:/data/data/com.foghorn.edge/files $ ./nativeThread
Setting stack size successful
520515536112 Stack size = 1032192
CT60-L1-C:/data/data/files $
Even though the ulimit -s stack size is 8192K, and I am explicitly setting the stack size to 512K (line no. 20) in the source code, the output from pthread_attr_getstacksize (line no. 11) in the thread is always 1M.
So I have 2 questions:
Is pthread_attr_setstacksize the correct way to set the stack size on Android with the POSIX pthread library?
How can we set the stack size on Android? Certainly ulimit -s has no effect on the stack size of newly created threads.
Any help is appreciated
I have tried changing ulimit -s to a different size, and I still always get a thread stack size of 1M. It feels like we can't change the stack size on Android (the way we can on Ubuntu, at least).
Found the answer to the problem: pthread_attr_getstacksize called on a freshly initialized attribute object does not report the stack size of the current thread, only the default attributes.
The way I verified that pthread_attr_setstacksize is working fine was by allocating memory on the stack (using alloca) and checking how much could actually be allocated.
The updated source code is as follows:
#include <iostream>
#include <pthread.h>
#include <alloca.h>
#include <sys/time.h>
#include <sys/resource.h>

void* Function(void *ptr)
{
    // Keep allocating 1 MB of stack per iteration until the thread crashes;
    // the last count printed approximates the usable stack size in MB.
    for ( int count = 1; ; count++ ) {
        size_t size = 1024 * 1024;
        char *allocation = (char *)alloca(size);
        allocation[0] = 1;  // touch the allocation so the page is actually used
        std::cout << count << std::endl;
    }
    return NULL;
}

int main ( int argc, char *argv[] )
{
    pthread_attr_t attr;
    pthread_attr_init(&attr);

    pthread_t thread_id;
    /* create the thread (call pthread_attr_setstacksize on attr to test a specific stack size) */
    pthread_create(&thread_id, &attr, &Function, NULL);
    pthread_join(thread_id, 0);
}
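For completeness, if you want to query the stack size the thread actually got, you can use pthread_getattr_np on the running thread instead of a freshly initialized attribute object. A minimal sketch (pthread_getattr_np is a non-portable extension; Bionic provides it, and glibc does too when _GNU_SOURCE is defined):

#include <iostream>
#include <pthread.h>

void* Function(void *ptr)
{
    pthread_attr_t attr;
    // Fill attr with the attributes of the *running* thread, not the defaults.
    if (pthread_getattr_np(pthread_self(), &attr) == 0) {
        size_t stack_size = 0;
        pthread_attr_getstacksize(&attr, &stack_size);
        std::cout << "Actual stack size = " << stack_size << std::endl;
        pthread_attr_destroy(&attr);
    }
    return NULL;
}

int main ( int argc, char *argv[] )
{
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 1024 * 512);  // request 512K

    pthread_t thread_id;
    pthread_create(&thread_id, &attr, &Function, NULL);
    pthread_join(thread_id, 0);
    pthread_attr_destroy(&attr);
}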
I'm developing a VoIP application that runs at a sampling rate of 48 kHz. Since it uses Opus as its codec, which runs at 48 kHz internally, and most current Android hardware natively runs at 48 kHz, AEC is the only piece of the puzzle I'm missing now. I've already found the WebRTC implementation, but I can't seem to figure out how to make it work. It looks like it corrupts memory randomly and crashes the whole thing sooner or later. When it doesn't crash, the sound is kind of chunky, as if it's quieter for half of the frame. Here's my code that processes a 20 ms frame:
webrtc::SplittingFilter* splittingFilter;
webrtc::IFChannelBuffer* bufferIn;
webrtc::IFChannelBuffer* bufferOut;
webrtc::IFChannelBuffer* bufferOut2;
// ...
splittingFilter=new webrtc::SplittingFilter(1, 3, 960);
bufferIn=new webrtc::IFChannelBuffer(960, 1, 1);
bufferOut=new webrtc::IFChannelBuffer(960, 1, 3);
bufferOut2=new webrtc::IFChannelBuffer(960, 1, 3);
// ...
int16_t* samples=(int16_t*)data;
float* fsamples[3];
float* foutput[3];
int i;
float* fbuf=bufferIn->fbuf()->bands(0)[0];
// convert the data from 16-bit PCM into float
for(i=0;i<960;i++){
    fbuf[i]=samples[i]/(float)32767;
}
// split it into three "bands" that the AEC needs and for some reason can't do itself
splittingFilter->Analysis(bufferIn, bufferOut);
// split the frame into 6 consecutive 160-sample blocks and perform AEC on them
for(i=0;i<6;i++){
    fsamples[0]=&bufferOut->fbuf()->bands(0)[0][160*i];
    fsamples[1]=&bufferOut->fbuf()->bands(0)[1][160*i];
    fsamples[2]=&bufferOut->fbuf()->bands(0)[2][160*i];
    foutput[0]=&bufferOut2->fbuf()->bands(0)[0][160*i];
    foutput[1]=&bufferOut2->fbuf()->bands(0)[1][160*i];
    foutput[2]=&bufferOut2->fbuf()->bands(0)[2][160*i];
    int32_t res=WebRtcAec_Process(aecState, (const float* const*) fsamples, 3, foutput, 160, 20, 0);
}
// put the "bands" back together
splittingFilter->Synthesis(bufferOut2, bufferIn);
// convert the processed data back into 16-bit PCM
for(i=0;i<960;i++){
    samples[i]=(int16_t) (CLAMP(fbuf[i], -1, 1)*32767);
}
If I comment out the actual echo cancellation and just do the float conversion and band splitting back and forth, it doesn't corrupt the memory, doesn't sound weird, and runs indefinitely. (I do pass the far-end/speaker signal into the AEC; I just didn't want to clutter the question by including that code.)
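For reference, the conversion at the edges of the snippet above boils down to something like this self-contained sketch (clamp_f is a stand-in for the CLAMP macro used above, which is defined elsewhere in the project):

#include <cstdint>
#include <cstddef>

// Clamp a float into [lo, hi]; stands in for the CLAMP macro above.
static inline float clamp_f(float x, float lo, float hi) {
    return x < lo ? lo : (x > hi ? hi : x);
}

// 16-bit PCM -> float in [-1, 1], as done before the splitting filter.
void pcm16_to_float(const int16_t* in, float* out, std::size_t n) {
    for (std::size_t i = 0; i < n; i++)
        out[i] = in[i] / (float)32767;
}

// float -> 16-bit PCM, clamped so out-of-range samples don't overflow.
void float_to_pcm16(const float* in, int16_t* out, std::size_t n) {
    for (std::size_t i = 0; i < n; i++)
        out[i] = (int16_t)(clamp_f(in[i], -1.0f, 1.0f) * 32767);
}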
I've also tried Android's built-in AEC. While it does work, it upsamples the captured signal from 16 kHz.
Unfortunately, there is no free AEC package that supports 48 kHz. So either move to 32 kHz or use a commercial AEC package at 48 kHz.
For a school project I am creating an Android app that involves streaming image data. I've finished all the requirements about a month and a half early, and am looking for ways to improve my app. One thing I heard of is using the Android NDK to optimize heavily used pieces of code.
What my app does is simulate a live video coming in over a socket. I am simultaneously reading the pixel data from a UDP packet, and writing it to an int array, which I then use to update the image on the screen.
I'm trying to decide if trying to increase my frame rate (which is about 1 fps now, which is sufficient for my project) is the right path to follow for my remaining time, or if I should instead focus on adding new features.
Anyway, here is the code I am looking at:
public void updateBitmap(byte[] buf, int thisPacketLength, int standardOffset, int thisPacketOffset) {
    int pixelCoord = thisPacketOffset / 3 - 1;
    for (int bufCoord = standardOffset; bufCoord < thisPacketLength; bufCoord += 3) {
        pixelCoord++;
        pixelData[pixelCoord] = 0xFF << 24 | (buf[bufCoord + 2] << 16) & 0xFFFFFF | (buf[bufCoord + 1] << 8) & 0xFFFF | buf[bufCoord] & 0xFF;
    }
}
I call this function about 2000 times per second, so it definitely is the most used piece of code in my app. Any feedback on whether this is worth optimizing?
Why not just give it a try? There are many guides to creating functions using the NDK; you seem to have a good grasp of the reasoning for doing so and understand the implications, so it should be easy to translate this small function.
Compare the two approaches; you will no doubt learn something, which is always good, and it will give you something to write about if you need a report to go with the project.
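If you do give it a try, a rough sketch of what the NDK version of that function might look like is below. The package, class, and native method names here are hypothetical; the packing logic simply mirrors the Java loop above:

#include <jni.h>

// Hypothetical binding for a Java declaration such as:
//   private native void updateBitmapNative(byte[] buf, int[] pixelData,
//       int thisPacketLength, int standardOffset, int thisPacketOffset);
extern "C" JNIEXPORT void JNICALL
Java_com_example_stream_Streamer_updateBitmapNative(JNIEnv* env, jobject /*thiz*/,
        jbyteArray buf, jintArray pixelData,
        jint thisPacketLength, jint standardOffset, jint thisPacketOffset) {
    jbyte* src = env->GetByteArrayElements(buf, nullptr);
    jint*  dst = env->GetIntArrayElements(pixelData, nullptr);

    int pixelCoord = thisPacketOffset / 3 - 1;
    for (int bufCoord = standardOffset; bufCoord < thisPacketLength; bufCoord += 3) {
        ++pixelCoord;
        // Pack B, G, R bytes into an opaque ARGB pixel, masking off sign extension.
        dst[pixelCoord] = static_cast<jint>(0xFF000000u
                | ((src[bufCoord + 2] & 0xFF) << 16)
                | ((src[bufCoord + 1] & 0xFF) << 8)
                |  (src[bufCoord] & 0xFF));
    }

    // JNI_ABORT: the source was only read; 0: copy the results back to the Java array.
    env->ReleaseByteArrayElements(buf, src, JNI_ABORT);
    env->ReleaseIntArrayElements(pixelData, dst, 0);
}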
Following the answer from this StackOverflow question, how do I create the proper integer for mask?
I did some googling, and everything I found uses the CPU_SET macro from sched.h, but it operates on cpu_set_t structures, which are undefined when using the NDK. When I try using CPU_SET, the linker gives me an undefined reference error (even though I link against pthread).
Well, in the end I found a version that was taken directly from sched.h. I'm posting it here in case anyone has the same problem and doesn't want to spend the time searching for it. It is quite useful.
#include <string.h>  /* for memset, used by CPU_ZERO */

#define CPU_SETSIZE 1024
#define __NCPUBITS (8 * sizeof (unsigned long))
typedef struct
{
unsigned long __bits[CPU_SETSIZE / __NCPUBITS];
} cpu_set_t;
#define CPU_SET(cpu, cpusetp) \
((cpusetp)->__bits[(cpu)/__NCPUBITS] |= (1UL << ((cpu) % __NCPUBITS)))
#define CPU_ZERO(cpusetp) \
memset((cpusetp), 0, sizeof(cpu_set_t))
This works well when the parameter type in the original setCurrentThreadAffinityMask (from the post mentioned in the question) is simply replaced with cpu_set_t.
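For example, a usage sketch with the parameter type replaced as described, using the cpu_set_t and macros defined above (this assumes the syscall-based approach from the linked post, with __NR_gettid and __NR_sched_setaffinity coming from <sys/syscall.h>):

#include <unistd.h>
#include <sys/syscall.h>

// setCurrentThreadAffinityMask from the linked answer, with its parameter
// type replaced by cpu_set_t; the original's error handling is omitted here.
int setCurrentThreadAffinityMask(cpu_set_t mask) {
    pid_t tid = (pid_t) syscall(__NR_gettid);
    return (int) syscall(__NR_sched_setaffinity, tid, sizeof(mask), &mask);
}

// Example: pin the calling thread to core 0.
//   cpu_set_t mask;
//   CPU_ZERO(&mask);
//   CPU_SET(0, &mask);
//   setCurrentThreadAffinityMask(mask);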
I would like to draw your attention to the fact that the function from the link in the first post doesn't set the thread CPU affinity; it is suited to setting the process CPU affinity. Of course, if you have one thread in your application it works well, but it is wrong for several threads. Check the sched_setaffinity() description, for example at http://linux.die.net/man/2/sched_setaffinity
Try adding this before you include <sched.h>:
#define _GNU_SOURCE
I am using the official Android port of SDL 1.3, and using it to set up the GLES2 renderer. It works for most devices, but for one user, it is not working. Log output shows the following error:
error of type 0x500 glGetIntegerv
I looked up 0x500, and it refers to GL_INVALID_ENUM. I've tracked the problem down to the following code inside the SDL library (the full source is quite large and I cut out logging and basic error-checking lines, so let me know if I haven't included enough information here):
glGetIntegerv( GL_NUM_SHADER_BINARY_FORMATS, &nFormats );
glGetBooleanv( GL_SHADER_COMPILER, &hasCompiler );
if( hasCompiler )
    ++nFormats;
rdata->shader_formats = (GLenum *) SDL_calloc( nFormats, sizeof( GLenum ) );
rdata->shader_format_count = nFormats;
glGetIntegerv( GL_SHADER_BINARY_FORMATS, (GLint *) rdata->shader_formats );
Immediately after the last line (the glGetIntegerv for GL_SHADER_BINARY_FORMATS), glGetError() returns GL_INVALID_ENUM.
The problem is that the GL_ARB_ES2_compatibility extension is not properly supported on that system.
The GL_INVALID_ENUM error means the driver does not recognize the GL_NUM_SHADER_BINARY_FORMATS and GL_SHADER_BINARY_FORMATS enums, which are part of that extension.
In contrast, GL_SHADER_COMPILER was recognized, which is strange.
You can try the GL_ARB_get_program_binary extension and use these two enums instead:
#define GL_NUM_PROGRAM_BINARY_FORMATS 0x87fe
#define GL_PROGRAM_BINARY_FORMATS 0x87ff
Note that these are different from:
#define GL_SHADER_BINARY_FORMATS 0x8df8
#define GL_NUM_SHADER_BINARY_FORMATS 0x8df9
But they should pretty much do the same.
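For instance, a defensive probe might look like the following sketch (it assumes a current GL context; the program-binary enums are guarded with #ifndef since older GLES2 headers don't define them):

#include <GLES2/gl2.h>

#ifndef GL_NUM_PROGRAM_BINARY_FORMATS
#define GL_NUM_PROGRAM_BINARY_FORMATS 0x87fe
#endif
#ifndef GL_PROGRAM_BINARY_FORMATS
#define GL_PROGRAM_BINARY_FORMATS 0x87ff
#endif

// Query the number of binary formats, falling back to the program-binary
// enum if the driver rejects the shader-binary one with GL_INVALID_ENUM.
GLint queryBinaryFormatCount() {
    GLint nFormats = 0;
    while (glGetError() != GL_NO_ERROR) {}  // clear any stale errors first
    glGetIntegerv(GL_NUM_SHADER_BINARY_FORMATS, &nFormats);
    if (glGetError() == GL_INVALID_ENUM) {
        glGetIntegerv(GL_NUM_PROGRAM_BINARY_FORMATS, &nFormats);
    }
    return nFormats;
}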