I have followed the "Swap to zRAM" section at this link: https://source.android.com/devices/low-ram.html.
Quoting from the link:
zRAM swap can increase the amount of memory available in the system by compressing memory pages and putting them in a dynamically allocated swap area of memory.
Again, since this is trading off CPU time for a small increase in memory, you should be careful about measuring the performance impact zRAM swap has on your system.
Android handles swap to zRAM at several levels:
First, the following kernel options must be enabled to use zRAM swap effectively:
CONFIG_SWAP
CONFIG_CGROUP_MEM_RES_CTLR
CONFIG_CGROUP_MEM_RES_CTLR_SWAP
CONFIG_ZRAM
Then, you should add a line that looks like this to your fstab:
/dev/block/zram0 none swap defaults zramsize=<size in bytes>,swapprio=<swap partition priority>
zramsize is mandatory and indicates how much uncompressed memory you want the zram area to hold. Compression ratios in the 30-50% range are usually observed.
swapprio is optional and not needed if you don't have more than one swap area.
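For example, a hypothetical fstab entry for a 50 MB (52428800-byte) zram area with swap priority 10 (both values are placeholders to adapt) might look like:
/dev/block/zram0 none swap defaults zramsize=52428800,swapprio=10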
By default, the Linux kernel swaps in 8 pages of memory at a time. When using ZRAM, the incremental cost of reading 1 page at a time is negligible and may help in case the device is under extreme memory pressure. To read only 1 page at a time, add the following to your init.rc:
`write /proc/sys/vm/page-cluster 0`
In your init.rc, after the `mount_all /fstab.X` line, add:
`swapon_all /fstab.X`
The memory cgroups are automatically configured at boot time if the feature is enabled in the kernel.
If memory cgroups are available, the ActivityManager will mark lower priority threads as being more swappable than other threads. If memory is needed, the Android kernel will start migrating memory pages to zRAM swap, giving a higher priority to those memory pages that have been marked by ActivityManager.
Following these steps, the zram device is created with the node /dev/zram0.
I wanted to test its performance by doing reads and writes (I have allotted a zram size of 50 MB).
For testing I came across this link: https://code.google.com/p/compcache/wiki/zramperf
Read test given in the link:
Read benchmark
A 1G file was copied to the (z)ramdisk, which was then synchronously read into /dev/null using different block sizes.
if [ -z "$1" ]; then
echo "Missing file to read into /dev/null"
exit 1
else
FI="$1"
fi
if [ -z "$2" ]; then
echo "Missing file to dump output"
exit 1
else
FO="$2"
fi
for B in 64 128 256 512 1K 2K 4K 8K 16K 32K 64K 128K 256K 512K 1M; do
echo "$B" | tee -a "$FO"
echo 1 > /proc/sys/vm/drop_caches
sleep 1
dd if="$FI" of=/dev/null bs="$B" iflag=sync 2>&1 | tee -a "$FO"
done
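From the script, it looks like $1 is the file to read (it should live on the zram disk) and $2 is a log file for the results. So, assuming zram0 is formatted and mounted as a ramdisk, an invocation might be (all paths and names below are placeholders, and the test file is scaled down to fit my 50 MB device):
# if zram0 is already active as swap, disable it first
swapoff /dev/block/zram0
# format and mount the zram device (use whichever mkfs applet your build provides)
mkfs.ext2 /dev/block/zram0
mkdir -p /mnt/zram
mount /dev/block/zram0 /mnt/zram
# create a test file on the zram disk, sized to fit the 50 MB device
dd if=/dev/urandom of=/mnt/zram/testfile bs=1M count=40
# read_bench.sh is the read script above, saved to a file
sh read_bench.sh /mnt/zram/testfile /sdcard/read_results.log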
Write test:
Write benchmark
First make sure that the test (512M) file is completely cached. Then synchronously write this file to the (z)ramdisk.
if [ -z "$1" ]; then
echo "Missing file to be written"
exit 1
else
FI="$1"
fi
# make sure input file is cached
dd if="$FI" of=/dev/null
dd if="$FI" of=/dev/null
dd if="$FI" of=/dev/null
dd if="$FI" of=/dev/null
for B in 64 128 256 512 1K 2K 4K 8K 16K 32K 64K 128K 256K 512K 1M; do
echo "$B" | tee -a "$FO"
dd if="$FI" of=`basename "$FI"` bs="$B" oflag=sync 2>&1 | tee -a "$FO"
done
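Similarly, in the write test $1 looks like the source file, and the output file is named after the input (via basename) in the current directory, so it should be run from inside the zram mount; $2 is again the results log. A possible invocation (placeholder paths, with the source file small enough to fit the device):
# run from inside the zram mount so the output file lands on the zram disk
cd /mnt/zram
sh /sdcard/write_bench.sh /sdcard/testfile /sdcard/write_results.log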
I am not comfortable with shell scripting, so could someone please confirm what I should pass as arguments $1 and $2 to the read and write benchmarks?
Please help me run a performance test on this.
Thanks.
Related
I'm writing a simple Bash script that simplifies calling HandBrakeCLI to render videos.
I also implemented a simple queue option: the queue file just stores the command line it has to call to start a render.
So I wrote a while-loop to read one line at a time, eval "$line", and repeat until the file ends.
if [[ ${QUEUE_MODE} = 'RUN' ]]; then
QUEUE_LEN=`cat ${CONFIG_DIR}/queue | wc -l`
QUEUE_POS='1'
printf "Queue lenght:\t ${QUEUE_LEN}\n"
while IFS= read line; do
echo "--Running render ${QUEUE_POS} on ${QUEUE_LEN}..."
echo "++" && echo "$line" && echo "++"
eval "${line}"
tail -n +2 "${CONFIG_DIR}/queue" > "${CONFIG_DIR}/queue.tmp" && mv "${CONFIG_DIR}/queue.tmp" "${CONFIG_DIR}/queue"
echo "--Render ended"
QUEUE_POS=`expr $QUEUE_POS + 1`
done < "${CONFIG_DIR}/queue"
exit 0
fi
The problem is that the loop works fine with any trivial command (an empty line, echo "test"...), but as soon as a proper render is loaded, it is launched and finishes correctly, yet the loop also exits.
I am a newbie, so I tried some minor changes to see what effect I got, but nothing changed the result:
I commented out the tail -n +2 "${CONFIG_DIR}/queue" > "${CONFIG_DIR}/queue.tmp" && mv "${CONFIG_DIR}/queue.tmp" "${CONFIG_DIR}/queue" command, added/removed IFS= in the while-loop, and removed the -r from the read command.
Sorry if the question is trivial, but I'm clearly missing something major about how this works, so I have no idea even how to search for the solution.
Here is a sample of a typical render command from the queue file:
HandBrakeCLI -i "/home/andrea/Videos/done/Rap dottor male e mini me.mp4" -o "/hdd/Render/Output/Rap dottor male e mini me.mkv" -e x265 -q 23 --encoder-preset faster --all-audio -E av_aac -6 dpl2 --all-subtitles -x pmode:pools='16' --verbose=0 2>/dev/null
HandBrakeCLI reads from standard input, which steals the rest of the queue file before read line can see it. My favorite solution to this is to pass the file over something other than standard input, like file descriptor #3:
...
while IFS= read line <&3; do # The <&3 makes it read from FD #3
...
done 3< "${CONFIG_DIR}/queue" # The 3< redirects the file into FD #3
Another way to avoid the problem is to redirect input to the HandBrakeCLI command:
...
eval "${line}" </dev/null
...
There's some more info about this in BashFAQ #89: I'm reading a file line by line and running ssh or ffmpeg, only the first line gets processed!
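Putting the fix together with the loop from the question, a minimal sketch (same variable names, with -r added so backslashes in the commands survive) looks like:
while IFS= read -r line <&3; do
    echo "--Running render ${QUEUE_POS} on ${QUEUE_LEN}..."
    eval "${line}"
    QUEUE_POS=$((QUEUE_POS + 1))
done 3< "${CONFIG_DIR}/queue"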
Also, I'm not sure I trust the way you're using tail to remove lines from the queue file as they're executed; it may not be strictly wrong, it just looks fragile to me. Separately, I'd recommend using lower- or mixed-case variable names, since there are a bunch of all-caps names with special meanings, and re-using one of them by mistake can have weird consequences. Finally, I'd recommend running your script through shellcheck.net, as it'll make some other good recommendations.
[BTW, this question is a duplicate of "Bash script do loop exiting early", but that doesn't have any upvoted or accepted answers.]
I need to build the Android source with full CPU utilisation.
For that, how do I calculate N in "make -jN"?
Your Linux distro should come with the command nproc, or at least nproc should be easily installable. If you don't want to require nproc, this shell command will give you the number of cores in the box (including hyper-threaded ones): ls -d /sys/devices/system/cpu/cpu[0-9]* | wc -l
cores=$(grep -c ^processor /proc/cpuinfo)
make -j${cores}
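Equivalently, if nproc is available:
make -j"$(nproc)"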
I'm writing a shell script for Android and I need to get the last created directory. I would usually use
ls -t | head -1, but ls -t gives me the error "ls: Unknown option '-t'".
Is there another shell command that can order files by timestamp, or another way to do this on Android? The BusyBox build is more limited.
It looks like stat is available in BusyBox, which means that you could do something like this:
stat -c '%Y %n' */ | sort -nr | cut -d' ' -f2-
This passes the names of all directories (paths ending in a slash) to stat, which prints the last modification time (seconds since the UNIX epoch) and the filename. These are sorted in reverse numerical order and then the time field is stripped from each line.
This assumes that your directory names don't contain newlines, otherwise the sorting would be messed up.
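To get just the newest directory, take the first line of that pipeline:
stat -c '%Y %n' */ | sort -nr | cut -d' ' -f2- | head -1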
I'm using a quad-core smartphone.
I want to know how to force three or two cores offline, so that I can measure the performance of different active core counts at different frequency levels while running a specified benchmark.
I can manage the core frequency through the "userspace" governor. However, I can't shut down cores: when I run benchmarks, the idle cores wake up.
I've connected to the phone using "adb shell", and I can get root access.
Could anyone help to solve this problem? Thanks in advance.
Run the following commands to turn off cpu1, cpu2, and cpu3:
adb root
adb shell stop mpdecision
adb shell
echo "0" > /sys/devices/system/cpu/cpu1/online
echo "0" > /sys/devices/system/cpu/cpu2/online
echo "0" > /sys/devices/system/cpu/cpu3/online
I had to disable hotplug first on the Galaxy S7 to prevent the CPUs from returning to the online state:
echo 0 > /sys/devices/system/cpu/cpuhotplug/enabled
On my device, every write to this file causes a CPU state reset, so check the existing value first to avoid trouble:
if [[ 0 != $(cat /sys/devices/system/cpu/cpuhotplug/enabled) ]]; then
echo 0 > /sys/devices/system/cpu/cpuhotplug/enabled
fi
You can force "online" status by changing permissions for the corresponding file:
# Without stopping this service, the following approach will fail
# You can run it after. This will increase battery life. So, I suggest to run it.
stop mpdecision
# Make the file writable
chmod 664 /sys/devices/system/cpu/cpu0/online
# Make the core always offline
echo 0 > /sys/devices/system/cpu/cpu0/online
# Make the file read-only.
# Now "online" status will not be changed by external apps
chmod 444 /sys/devices/system/cpu/cpu0/online
# Run the service again
start mpdecision
You have to do all of that for every CPU core.
I'd suggest creating a shell script with functions like these:
...
set_core_offline () {
local core=$1
chmod 664 /sys/devices/system/cpu/cpu$core/online
echo 0 > /sys/devices/system/cpu/cpu$core/online
chmod 444 /sys/devices/system/cpu/cpu$core/online
}
# Works for 4-core CPUs
set_cores_offline () {
set_core_offline 0
set_core_offline 1
set_core_offline 2
set_core_offline 3
}
...
And, of course, this solution is not perfect. Look again at this part of the snippet:
echo 0 > /sys/devices/system/cpu/cpu0/online
chmod 444 /sys/devices/system/cpu/cpu0/online
These are two separate commands: after the first one executes, an external app might flip the "online" status back to "1", and the second command would then lock in that wrong value. So the cleanest solution is to wrap the two commands in a loop and re-check the status until we get the desired result.
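A minimal sketch of that retry version of set_core_offline (purely illustrative):
set_core_offline () {
    local core=$1
    local path=/sys/devices/system/cpu/cpu$core/online
    # keep trying until the core actually reads back as offline
    while [ "$(cat $path)" != "0" ]; do
        chmod 664 $path
        echo 0 > $path
        chmod 444 $path
    done
}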
I am logging the data coming from top and putting it into a circular set of files. I am not executing top once per data set and re-running it for the next; instead I use a read timeout to decide when to move from one log file to the next. This is done primarily to avoid the startup CPU cost incurred every time top is executed. The shell script is named toplog.sh and looks similar to this:
#!/data/data/com.spartacusrex.spartacuside/files/system/bin/bash
date
echo " Logging started."
# fileCmp: compare a file's size (5th field of ls -lc output) against a bound, e.g. fileCmp FILE -le 300
fileCmp()
{
test `ls -lc "$1" | sed -n 's/\([^ ]* *\)\{4\}\([0-9]*\).*$/\2/;p'` $2 $3
}
# oldest: print the least recently changed file matching the glob
oldest()
{
ls -rc $1 2> /dev/null |head -1
}
file=`oldest /mnt/sdcard/toplog.\*.gz`
echo " Oldest file is $file"
if [ -z "$file" ]; then
x=0
else
file=${file%%.gz}
file=${file##*.}
x=$file
fi
echo " x=$x"
top -d 20 -b | \
while true; do
file=/mnt/sdcard/toplog.$x.gz
while read -t 5 line; do
echo "$line"
done | gzip -c > $file
if fileCmp "$file" -le 300; then
date
echo " Failure to write to file '$file'."
exit
fi
x=$((($x+1)%10))
sleep 14
done
I execute this using nohup so that when the shell dies, this process still runs, like so:
$ nohup ./toplog.sh
But there's a problem. top terminates when I exit the shell session that executed that command, and I'm not exactly sure why. Any ideas?
To clarify, I'm logging on an Android phone. The tools are limited in functionality (i.e. they lack some of these switches), which is why I am using top, as it produces the output I want.
Version of busybox I'm using is:
BusyBox 1.19.2 (2011-12-12 12:59:36 GMT)
Installed when I installed Terminal IDE.
BTW, this phone is not rooted. I'm trying to track down a failure where my phone behaves as if the CPU has spiked and won't settle back down.
Edit:
Well, I found a workaround, but the reason is a bit hazy. I think it has to do with process management, and it smells like a bug in the BusyBox version I'm using that was missed during regression testing.
The workaround is to wrap top in an ostensibly useless loop: while true; do top; done. In testing, top never gets killed (so the loop never actually respawns it), but wrapped this way it is no longer killed when I exit the shell session.
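For reference, with the wrapper in place the first line of the pipeline in toplog.sh becomes (the rest of the script is unchanged):
while true; do top -d 20 -b; done | \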
Any insights on this?
This is going to sound stupid, but change your startup command from
nohup ./toplog.sh
to
nohup ./toplog.sh &
The & makes it run as a background process, further detaching it from the controlling terminal.
Running the bash internal command "disown" on your script's process before logging off may prevent it from being signaled.
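Putting both suggestions together, one way to launch it:
nohup ./toplog.sh &
disown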