I have a Huawei U8950 phone and use it for Android development. The problem is that the guys at Huawei have left their debug logging on, and my logcat output is always flooded with hundreds of messages like this:
E/OpenGLRenderer(10525): HUAWEI_DEBUG: glyph's Height = 21, Width = 13, current total allocated size out of MAX(1024) = 17
E/OpenGLRenderer(10525): HUAWEI_DEBUG: glyph's Height = 21, Width = 15, current total allocated size out of MAX(1024) = 16
E/OpenGLRenderer(10525): HUAWEI_DEBUG: glyph's Height = 21, Width = 15, current total allocated size out of MAX(1024) = 32
E/OpenGLRenderer(10525): HUAWEI_DEBUG: glyph's Height = 16, Width = 13, current total allocated size out of MAX(1024) = 16
...
These messages are so numerous (hundreds or even thousands) that they make Eclipse hang (I suspect because of the parsing and coloring), so I have stopped using it and run adb logcat manually instead. But even there I still have to deal with them and scroll past them.
Is there any way to disable them? (Software solutions that merely filter them are not good, because the overhead of transferring and processing them remains.)
If you install busybox you could do
logcat | busybox grep -v HUAWEI_DEBUG
in adb shell or over ssh. Some builds ship with busybox, but you will probably need to install it. Thankfully there are tons of apps that drag it along - most terminal apps do.
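Alternatively, logcat's built-in filter specs can drop the tag without grep. Note that this silences everything from the OpenGLRenderer tag, including legitimate errors, and like any software filter it only hides the messages after they are written, so the device-side logging overhead remains:
adb logcat OpenGLRenderer:S *:V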
We have a problem with our app. Over time we have noticed that the totalPss value gets very large (depending on the device, 300-700 MB):
int pid = android.os.Process.myPid();
int pids[] = new int[1];
pids[0] = pid;
android.os.Debug.MemoryInfo[] memoryInfoArray = activityManager.getProcessMemoryInfo(pids);
if (memoryInfoArray.length > 0)
{
    android.os.Debug.MemoryInfo thisProcessMemoryInfo = memoryInfoArray[0];
    Log.d("totalPss", thisProcessMemoryInfo.getTotalPss() + "");
}
Here is a graph showing results from a typical run:
But, at the same time, the totalMemory value never gets very large (40-50 MB at most, falling back to about 10 MB after GC).
Log.d("totalMem", Runtime.getRuntime().totalMemory()+"");
Here is a graph showing that value from the same run as above (please ignore the units; it is actually in bytes):
getMemoryClass for this device indicates that we have 192 MB available for the app:
Log.d("getMemoryClass", activityManager.getMemoryClass()+"");
Our memory usage pattern is to make a number of large allocations over time which are frequently released. After a long time of this, a large allocation will eventually fail, which typically crashes the app.
It seems likely that we are suffering from heap fragmentation; does that seem correct? Can we resolve this by using the largeHeap attribute (my intuition is that it would not help)? Are there any tools to help diagnose this more conclusively?
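One way to narrow this down (a sketch, not a definitive diagnosis, reusing the activityManager from your code above) is to log the PSS breakdown rather than only the total, to see whether the growth is in the Dalvik heap, in native allocations, or in the "other" category (graphics and ashmem buffers usually land there):
int[] pids = new int[] { android.os.Process.myPid() };
android.os.Debug.MemoryInfo[] info = activityManager.getProcessMemoryInfo(pids);
if (info.length > 0)
{
    android.os.Debug.MemoryInfo mi = info[0];
    // dalvikPss = Java heap, nativePss = malloc'd native memory,
    // otherPss = everything else (ashmem, graphics buffers, ...); all values in kB
    Log.d("pss", "dalvik=" + mi.dalvikPss + " native=" + mi.nativePss + " other=" + mi.otherPss);
}
The same breakdown is available without code via adb shell dumpsys meminfo <your.package.name>. If totalPss grows while Runtime.totalMemory() stays flat, the growth is happening outside the Dalvik heap, which points away from Java-heap fragmentation.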
I am trying to obtain a bitmap of which cores are online in an Android device. I am trying to create a command-line tool in C++ whose behavior depends on how many cores are on and, in particular, which cores are available.
I have tried using the following to get the number of cores that are on in C++:
cpus = sysconf( _SC_NPROCESSORS_ONLN );
This gives me the number of cores in the system but not which cores are presently ON.
Does anyone know a potential way to do this?
There's no clear-cut answer to this problem.
You can use nproc to see how many cores you have available, but this won't tell you how many cores you have online.
You can use top to view the utilization of each core. You can then parse the information from top to infer which cores are presently on.
I was able to get the core online status using this:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int numCPU = 1; // core to query; note that cpu0 often cannot be hotplugged and may lack this file
    char *status = (char*)calloc(32, sizeof(char));
    char *directory = (char*)calloc(1024, sizeof(char));
    sprintf(directory, "/sys/devices/system/cpu/cpu%d/online", numCPU);
    FILE *online = fopen(directory, "r");
    if (online)
    {
        // The file contains "1" if the core is online and "0" if it is offline.
        fread(status, sizeof(char), 31, online);
        fclose(online);
    }
    printf("Core %d status=%s\n", numCPU, status);
    free(status);
    free(directory);
}
Update
After checking the time resolution, we tried to debug the problem in kernel space.
unsigned long long task_sched_runtime(struct task_struct *p)
{
    unsigned long flags;
    struct rq *rq;
    u64 ns = 0;

    rq = task_rq_lock(p, &flags);
    ns = p->se.sum_exec_runtime + do_task_delta_exec(p, rq);
    task_rq_unlock(rq, &flags);
    //printk("task_sched runtime\n");

    return ns;
}
Our new experiment shows that p->se.sum_exec_runtime is not updated instantly, but if we add a printk() inside the function, the time is updated instantly.
Old
We are developing an Android program.
However, the time measured by the function threadCpuTimeNanos() is not always correct on our platform.
After experimenting, we found that the time returned from clock_gettime is not updated instantly.
Even after several while loop iterations, the time we get still doesn't change.
Here's our sample code:
while(1)
{
    test = 1;
    test = clock_gettime(CLOCK_THREAD_CPUTIME_ID, &now);
    printf(" clock gettime test 1 %lx, %lx, ret = %d\n", now.tv_sec, now.tv_nsec, test);
    pre = now.tv_nsec;
    sleep(1);
}
This code runs okay on an x86 PC, but it does not run correctly on our embedded platform, an ARM Cortex-A9 with kernel 2.6.35.13.
Any ideas?
I changed clock_gettime to use CLOCK_MONOTONIC_RAW, assigned the thread to one CPU, and I get different values. I am also working with a dual-core Cortex-A9:
while(1)
{
    test = 1;
    test = clock_gettime(CLOCK_MONOTONIC_RAW, &now);
    printf(" clock gettime test 1 %lx, %lx, ret = %d\n", now.tv_sec, now.tv_nsec, test);
    pre = now.tv_nsec;
    sleep(1);
}
The resolution of clock_gettime is platform dependent. Use clock_getres() to find the resolution on your platform. Judging by the results of your experiment, the clock resolutions on x86 and on your target platform differ.
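A minimal check (standard POSIX; on older toolchains you may need to link with -lrt):
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec res;
    // Query the resolution of the per-thread CPU-time clock.
    if (clock_getres(CLOCK_THREAD_CPUTIME_ID, &res) == 0)
        printf("resolution: %ld s %ld ns\n", (long)res.tv_sec, res.tv_nsec);
    else
        perror("clock_getres");
    return 0;
}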
In the Android CTS there is a test case that hits the same problem: it reads the timer twice but gets identical values:
testThreadCpuTimeNanos fail junit.framework.AssertionFailedError at
android.os.cts.DebugTest.testThreadCpuTimeNanos
$man clock_gettime
...
Note for SMP systems
The CLOCK_PROCESS_CPUTIME_ID and CLOCK_THREAD_CPUTIME_ID clocks are realized on many platforms using timers from the CPUs (TSC on i386, AR.ITC on Itanium). These registers may differ between CPUs and as a consequence these clocks may return bogus results if a process is migrated to another CPU.
If the CPUs in an SMP system have different clock sources then there is no way to maintain a correlation between the timer registers since each CPU will run at a slightly different frequency. If that is the case then clock_getcpuclockid(0) will return ENOENT to signify this condition. The two clocks will then only be useful if it can be ensured that a process stays on a certain CPU.
The processors in an SMP system do not start all at exactly the same time and therefore the timer registers are typically running at an offset. Some architectures include code that attempts to limit these offsets on bootup. However, the code cannot guarantee to accurately tune the offsets. Glibc contains no provisions to deal with these offsets (unlike the Linux Kernel). Typically these offsets are small and therefore the effects may be negligible in most cases.
The CLOCK_THREAD_CPUTIME_ID clock measures CPU time spent, not real time, and you're spending almost zero CPU time: your loop sleeps for nearly its entire duration. Also, CLOCK_THREAD_CPUTIME_ID (the thread-specific CPU time) is implemented incorrectly on Linux/glibc and likely does not work at all there. CLOCK_PROCESS_CPUTIME_ID, or whatever that one's called, should work better.
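A quick way to see the difference (a sketch: burn CPU instead of sleeping, and the thread CPU-time clock advances):
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    struct timespec t;
    volatile unsigned long sink = 0;
    unsigned long j;
    int i;
    for (i = 0; i < 5; i++) {
        for (j = 0; j < 50000000UL; j++) sink += j; // busy work: accrues CPU time
        clock_gettime(CLOCK_THREAD_CPUTIME_ID, &t);
        printf("cpu time: %ld.%09ld s\n", (long)t.tv_sec, t.tv_nsec);
        sleep(1); // adds wall time but (almost) no CPU time
    }
    return 0;
}
This should print steadily increasing values even though most of the wall-clock time is spent in sleep().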
I use a SurfaceView to play video, testing on a Samsung Galaxy Tab. I set the size:
LinearLayout.LayoutParams videoViewParams = new LinearLayout.LayoutParams(m_mainView.getPictureWidth(), m_mainView.getPictureHeight());
mPreview = (SurfaceView) videoView.findViewById(R.id.surface);
mPreview.setLayoutParams(videoViewParams);
When m_mainView.getPictureWidth() or m_mainView.getPictureHeight() is higher than 1024, I get this message in logcat:
01-12 11:49:15.839: ERROR/SurfaceFlinger(2491): LayerBuffer init temp buff failed with w=1210, h=922, exp max=1024x1024 on 0
and I see only a black screen.
Why?
In my application I use video scaling, and sometimes I need to display video at a size greater than 1024.
I suspect that this restriction exists only on Samsung devices; I checked on emulators and everything is OK! I found only one thread about a similar problem, where people asked the author to test his media player (and he says the application works correctly on all of his devices). One user had the same problem on a Samsung Galaxy S, except that for him exp max = 800x800. So it seems the limit is taken from the maximum screen dimension.
Any ideas?
I still have two ideas:
1) Impose a zoom limit on all devices (set the maximum video size to the length of the longer screen side). But in that case there will sometimes be no zoom at all, or only a very small one.
2) Catch this error log and show the user a dialog saying that the video cannot be played at this zoom level. But how do I catch this log? (See the sketch below.)
What do you think about this?
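For idea 2, a rough sketch of one common approach is to dump the log from inside the app and scan it for the error. This assumes the android.permission.READ_LOGS permission, and note that since Android 4.1 applications can no longer read log entries from other processes, so it only helps on older versions:
boolean surfaceFlingerInitFailed() {
    try {
        // Dump (-d) the current log, showing only SurfaceFlinger errors.
        Process p = Runtime.getRuntime().exec(
                new String[] { "logcat", "-d", "SurfaceFlinger:E", "*:S" });
        java.io.BufferedReader r = new java.io.BufferedReader(
                new java.io.InputStreamReader(p.getInputStream()));
        String line;
        while ((line = r.readLine()) != null) {
            if (line.contains("init temp buff failed")) return true;
        }
    } catch (java.io.IOException e) {
        // Could not run logcat; treat as "no error seen".
    }
    return false;
}
A cleaner alternative might be to query the GPU's maximum texture size up front (glGetIntegerv with GL10.GL_MAX_TEXTURE_SIZE) and refuse zoom levels beyond it, if the 1024 limit really does come from the GPU.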
I want to acquire the Android device's VRAM size.
Is there a method for obtaining it from a program?
Let's do some calculation using Nexus One:
Screen resolution is 480x800. So minimum required video memory size would be:
480 * 800 * 4 bytes = 1536000 bytes
Assuming that the driver may (and normally should) use several buffers, we should also expect values like:
1536000 * 2 = 3072000 bytes
1536000 * 3 = 4608000 bytes
etc...
It would be weird to have values that are not multiple of 1536000 (or W x H x 4 in general).
After some searching on Android internals I found this documentation:
...Android makes two requirements of the driver: a linear address space of mappable memory that it can write to directly...accessing the driver by calling open on /dev/fb0...
So I tried to take the size of the /dev/graphics/fb0 file (on my device there is no /dev/fb0).
But a direct approach doesn't work:
File file = new File("/dev/graphics/fb0");
file.length(); // ==0, doesn't work, no read access
Using the following trick you can get the actual size of fb0:
>adb pull /dev/graphics/fb0
1659 KB/s (4608000 bytes in 2.712s)
Video memory is ~4 MB (Nexus One). Let's check whether this is a multiple of the Nexus One screen buffer size:
4608000 / 1536000 = 3
That looks like the right value, and we can also say that the driver uses three screen buffers.
So, as a conclusion, you can detect the video memory size using adb, but you can't use this approach from your Android application at runtime due to file access restrictions.
You typically do not have dedicated "VRAM" on mobile devices. At least you don't have it with PowerVR architectures (which totally dominate the market with their MBX and SGX cores).
That is, the OpenGL driver allocates normal RAM until you run out of it, and the more you allocate the less you have left for your application.
The Android/OpenGL APIs don't offer explicit methods to read the VRAM size from a given device.
Poor man's solution:
You could try to infer the VRAM size empirically by adding 1 MB textures until you get an out-of-memory error from gl.glGetError().
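A rough sketch of that probe (it assumes a current GL context, e.g. inside a GLSurfaceView.Renderer callback, and the result is only an estimate, since on these architectures the driver shares RAM with everything else):
import javax.microedition.khronos.opengles.GL10;

int estimateVideoMemoryMB(GL10 gl) {
    final int side = 512; // 512 * 512 * 4 bytes = 1 MB per RGBA texture
    java.nio.ByteBuffer pixels = java.nio.ByteBuffer.allocateDirect(side * side * 4);
    int[] tex = new int[1];
    int mb = 0;
    while (true) {
        gl.glGenTextures(1, tex, 0);
        gl.glBindTexture(GL10.GL_TEXTURE_2D, tex[0]);
        gl.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_RGBA, side, side,
                0, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, pixels);
        if (gl.glGetError() != GL10.GL_NO_ERROR) break; // typically GL_OUT_OF_MEMORY
        mb++;
    }
    // The probe textures are leaked here for brevity; a real implementation
    // should record the ids and free them with glDeleteTextures afterwards.
    return mb;
}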
From your "dmesg" output u can read off the VRAM, so for my Tablet:
> [ 0.000000] Machine: TDM3730
> [ 0.000000] Reserving 12582912 bytes SDRAM for VRAM
> <7>[ 3.929962] VRAM: checking region 9f400000 3072
> <4>[ 3.929992] Failed. Allocating 4194304 bytes for fb 0
> <7>[ 3.935333] VRAM: alloc mem type 0 size 4194304 paddr dec2bd4c
> <7>[ 3.935485] VRAM: checking region 9f400000 3072
> <7>[ 3.935485] VRAM: found 9f400000, end a0000000
> <6>[ 3.936584] android_usb gadget: high speed config #1: android
> <4>[ 3.960113] allocating 4194304 bytes for fb 1
or details at:
http://pastebin.com/jQSXQqHh
It's simple: just count how many MB of RAM are missing between the usable amount and the real capacity of the RAM module. For example, my Lenovo A369i has a 512 MB RAM module, but the settings app shows only 471 MB usable, so the 41 MB left over is reserved for the GPU. The conclusion is that my A369i has 41 MB of VRAM.
This method relies on shared graphics memory (see the Wikipedia article of that name).
I suspect that android.os.StatFs is what you're looking for:
http://developer.android.com/reference/android/os/StatFs.html