Simulate touch, hold, move in Android Debug Bridge

Rather than using a drag or swipe command in the Android Debug Bridge or AndroidViewClient like this:
device.drag((600,800),(600,1200), 1000)
device.shell('input touchscreen swipe 600 800 600 1200 1000')
Is there some way to simulate something like the following?
1. press down on some coordinates (eventType=DOWN)
2. sleep 2 seconds (i.e. keep holding there)
3. move to some other coordinates
4. sleep 2 seconds (i.e. keep holding there)
5. release (eventType=UP)
Basically, you touch, hold there for a few seconds, drag and keep holding there for a few seconds, then release the pad.

If you take a look at AdbClient.longPress() you will see how the long press event is sent for some keys:
if name in KEY_MAP:
    self.shell('sendevent %s 1 %d 1' % (dev, KEY_MAP[name]))
    self.shell('sendevent %s 0 0 0' % dev)
    time.sleep(duration)
    self.shell('sendevent %s 1 %d 0' % (dev, KEY_MAP[name]))
    self.shell('sendevent %s 0 0 0' % dev)
You can do something similar for your case.
To get an idea of what you should write, do the same set of events you mentioned and analyze them using getevent.
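Building on that pattern, here is a rough sketch of the press/hold/move/hold/release sequence. The device node (/dev/input/event2) and the single-touch event codes are assumptions - check `getevent -il` on your device, since multi-touch (type A/B) devices need a different event sequence:

```python
# Hypothetical device node and single-touch event codes (assumptions - verify
# with `getevent -il` on the target device).
DEV = '/dev/input/event2'
EV_SYN, EV_KEY, EV_ABS = 0, 1, 3
ABS_X, ABS_Y = 0, 1
BTN_TOUCH, SYN_REPORT = 330, 0

def sendevent(ev_type, code, value):
    """Format one `sendevent` shell command (sendevent takes decimal values)."""
    return 'sendevent %s %d %d %d' % (DEV, ev_type, code, value)

def touch_hold_drag(down, move_to, hold=2.0):
    """Yield (command, sleep_after) pairs: press, hold, move, hold, release."""
    x1, y1 = down
    x2, y2 = move_to
    yield sendevent(EV_ABS, ABS_X, x1), 0
    yield sendevent(EV_ABS, ABS_Y, y1), 0
    yield sendevent(EV_KEY, BTN_TOUCH, 1), 0      # eventType=DOWN
    yield sendevent(EV_SYN, SYN_REPORT, 0), hold  # keep holding 2 seconds
    yield sendevent(EV_ABS, ABS_X, x2), 0         # move
    yield sendevent(EV_ABS, ABS_Y, y2), 0
    yield sendevent(EV_SYN, SYN_REPORT, 0), hold  # keep holding 2 seconds
    yield sendevent(EV_KEY, BTN_TOUCH, 0), 0      # eventType=UP
    yield sendevent(EV_SYN, SYN_REPORT, 0), 0

cmds = list(touch_hold_drag((600, 800), (600, 1200)))
```

Each (cmd, pause) pair would then be replayed with device.shell(cmd) followed by time.sleep(pause).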


How to determine current AMOLED screen mode?

Many Android devices with AMOLED screens display all images with oversaturated colors by default. E.g. Samsung Galaxy phones have the "Adaptive" screen mode, which forces windows of all apps to be displayed as if they were rendered in the native screen color space, which is wider than Display-P3.
OTOH, not all such devices support EGL_EXT_gl_colorspace_display_p3, regardless of screen mode, so I can't be sure whether the device my app is running on even has a wide-gamut screen, even less determine whether this mode is the default.
So, how can I actually determine whether current screen mode is sRGB or some wide-gamut mode? I'm targeting one specific device model, Samsung Galaxy A320F/DS (AKA "A3 (2017)"), so platform-specific ways are also OK.
There are several layers where colors can be manipulated.
SurfaceFlinger. This component is common to all Android systems. One can pass a custom color matrix to it (see the source code of the handler of this request) via e.g. the following command executed as the root user:
service call SurfaceFlinger 1015 i32 1 \
f 0 f 0 f 1 f 0 \
f 0 f 1 f 0 f 0 \
f 1 f 0 f 0 f 0 \
f 0 f 0 f 0 f 1
The above example command sets a matrix that will, acting on RGBA vectors, swap red and blue channels. To reset the custom matrix to default (identity) you can simply do
service call SurfaceFlinger 1015 i32 0
You might be able to do all this from a Java/JNI app without root privileges, simply asking for some permission, I didn't research this.
mDNIe, which stands for mobile Digital Natural Image engine. It's a Samsung-specific system that acts on a lower level than SurfaceFlinger. Namely, it affects Always On Display, on which SurfaceFlinger's custom color matrix doesn't have any effect.
Current screen mode can be seen in the /sys/class/mdnie/mdnie/mode file, which appears to have the following mapping of values on Galaxy A320F/DS:
0 — AMOLED cinema (apparently aims at Display-P3),
1 — AMOLED photo (apparently aims at Adobe RGB),
2 — Basic (aims at sRGB),
3 — (don't know its purpose, but the value is accepted if written to mode)
4 — Adaptive display (the widest, apparently native screen color space).
5 — (don't know its purpose, but the value is accepted if written to mode)
Moreover, the colors are also affected by the Cool — Warm slider as well as Advanced options RGB per-channel adjustments. Changes to the former are somehow reflected in mdnie_ldu and sensorRGB files in the same directory, while the latter directly corresponds to whiteRGB file.
Also, Blue light filter feature state is reflected in the night_mode file (it also influences mdnie_ldu and sensorRGB files mentioned above).
Of the files described above, only mode is readable to a non-root user on SM-A320F/DS. On SM-G950FD (AKA "S8") nothing is accessible without root.
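As a sketch of how an app with sufficient permissions might read that file, assuming it contains a bare integer; the path and the value-to-name mapping are taken from the observations above and may well differ on other models:

```python
# Path and mode values as observed on the Galaxy A320F/DS above; other
# models may use a different mapping or not expose the file at all.
MDNIE_MODE_PATH = '/sys/class/mdnie/mdnie/mode'
MODE_NAMES = {
    0: 'AMOLED cinema',     # apparently aims at Display-P3
    1: 'AMOLED photo',      # apparently aims at Adobe RGB
    2: 'Basic',             # aims at sRGB
    4: 'Adaptive display',  # widest, apparently native screen color space
}

def current_screen_mode(path=MDNIE_MODE_PATH):
    """Return the current mode name, or None if unreadable/unrecognized."""
    try:
        with open(path) as f:
            value = int(f.read().split()[-1])  # assumes a bare integer
    except (OSError, ValueError, IndexError):
        return None
    return MODE_NAMES.get(value)

# A non-'Basic' result suggests a wide-gamut rendering mode is active.
mode = current_screen_mode()
```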

FFmpeg : How to create dynamic volume changes in an audio file?

I am working on a project in which I need to change the volume of an audio file dynamically.
Let's say I have a 20-second audio file named xyz.mp3.
I need to set the volume in it like this:
Time Range (in seconds) | Volume Percentage (in %)
------------------------+-------------------------------------
 0 - 4                  | 100
 4 - 8                  | change from 100 to 20 (dynamically)
 8 - 12                 | 20
12 - 16                 | change from 20 to 100 (dynamically)
16 - 20                 | 100
Now, I know that to change the volume for a particular time range in the audio, I can use the following command:
ffmpeg -i in.mp3 -af volume=20:enable='between(t,8,12)' out.mp3
but the volume filter does not change the volume gradually: it jumps straight from 100 to 20 instead of fading.
And when I use the afade filter, like:
ffmpeg -i in.mp3 -af afade=t=in:ss=4:d=8,afade=t=out:st=12:d=16 out.mp3
or
ffmpeg -i in.mp3 -af afade=enable='between(t,4,8)':t=in:ss=4:d=4,afade=enable='between(t,12,16)':t=out:st=12:d=4 out.mp3
it looks like afade does not work multiple times, even though I am using ffmpeg version 3.0.1.
Since afade only works a single time, I also tried splitting the audio into 4-second parts, adding the fade effects to each part, and combining them again, but that leaves a gap of a few milliseconds between clips. Does anyone know a better way to do it? Please help me...
Update 1 :
Here is that code I used :
"volume='" +
"between(t,0,8)+(1-0.8*(t-8)/4)*" + // full
"between(t,8.01,11.99)+0.1*" + // change from HIGH -> LOW
"between(t,12,16)+(0.1+0.8*(t-16)/4)*" + // low
"between(t,16.01,19.99)+1*" + // change from LOW -> HIGH -
"between(t,20,24)+(1-0.8*(t-24)/4)*" + // full
"between(t,24.01,27.99)+0.1*"+ // change from HIGH -> LOW -
"between(t,28,32)+(0.1+0.8*(t-32)/4)*" + // low
"between(t,32.01,35.99)+1*" + // change from LOW -> HIGH -
"between(t,36,40)+(1-0.8*(t-40)/4)*" + // full
"between(t,40.01,43.99)+0.1*"+ // change from HIGH -> LOW -
"between(t,44,48)+(0.1+0.8*(t-48)/4)*" + // low
"between(t,48.01,51.99)+" + // change from LOW -> HIGH -
"between(t,52,56)" + // high
"':eval=frame";
With this code, I get a small gap (a few milliseconds) at the points where the volume change starts.
Update 2
OK, I got it. I just needed to change the time values, e.g. 19.99 to 19.9999 and 16.01 to 16.0001, and that solved the problem. Thank you, Gyaan Sir.
Use
volume='between(t,0,4)+(1-0.8*(t-4)/4)*between(t,4.01,7.99)+0.2*between(t,8,12)+(0.2+0.8*(t-12)/4)*between(t,12.01,15.99)+between(t,16,20)':eval=frame
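Rather than writing the piecewise expression by hand, it can be generated. A sketch that builds the same between()-window expression from segments (with the nudged boundaries from Update 2); fade_expr is a made-up helper, not an ffmpeg API:

```python
def fade_expr(segments):
    """Build an ffmpeg `volume` filter expression from a list of
    (start, end, vol_start, vol_end) segments, where volumes are
    fractions of full scale (1.0 == 100%, 0.2 == 20%)."""
    terms = []
    for start, end, v0, v1 in segments:
        if v0 == v1:  # constant-volume segment
            terms.append('%g*between(t,%g,%g)' % (v0, start, end))
        else:         # linear ramp from v0 at `start` to v1 at `end`
            terms.append('(%g+(%g)*(t-%g)/%g)*between(t,%g,%g)'
                         % (v0, v1 - v0, start, end - start, start, end))
    # eval=frame makes ffmpeg re-evaluate the expression on every frame
    return "volume='%s':eval=frame" % '+'.join(terms)

# The 20-second example from the question; interior boundaries are nudged
# (4.0001, 7.9999, ...) so adjacent between() windows never overlap.
expr = fade_expr([
    (0, 4, 1, 1),
    (4.0001, 7.9999, 1, 0.2),
    (8, 12, 0.2, 0.2),
    (12.0001, 15.9999, 0.2, 1),
    (16, 20, 1, 1),
])
```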

Kmsg timestamps are 500ms in the future

I am trying to keep track of when the system wakes and suspends (ideally when monotonic_time starts and stops) so that I can accurately correlate monotonic time-stamps to the realtime clock.
On android the first method that came to mind was to monitor kmsg for a wakeup message and use its timestamp as a fairly accurate mark. As I was unsure of the accuracy of this timestamp, I decided to log the current monotonic time as well.
The following code is running in a standalone executable
while (true)
{
    fgets(mLineBuffer, sizeof(mLineBuffer), mKmsgFile);
    // Find first space
    char * messageContent = strchr(mLineBuffer, ' ');
    // Offset one to get character after space
    messageContent++;
    if (strncmp(messageContent, "Enabling non-boot CPUs ...", 25) == 0)
    {
        clock_gettime(CLOCK_MONOTONIC, &mMono);
        std::cout << mLineBuffer;
        std::cout << std::to_string(mMono.tv_sec) << "." << std::to_string(mMono.tv_nsec) << "\n";
    }
}
I expected the time returned by clock_gettime to be at some point after the kmsg log timestamp, but instead it is anywhere from 600ms before to 200ms after.
<6>[226692.217017] Enabling non-boot CPUs ...
226691.681130889
-0.535886111
<6>[226692.626100] Enabling non-boot CPUs ...
226692.80532881
+0.17922881
<6>[226693.305535] Enabling non-boot CPUs ...
226692.803398747
-0.502136253
During this particular session, CLOCK_MONOTONIC consistently differed from the kmsg timestamp by roughly -500ms, only once flipping over to +179ms over the course of 10 wakeups. During a later session it was consistently off by -200ms.
The same consistent offset is present when monitoring all kmsg entries during normal operation (not suspending or waking). Perhaps returning from suspend occasionally delays my process long enough to produce a timestamp that is ahead of kmsg, resulting in the single +179ms difference.
CLOCK_MONOTONIC_COARSE and CLOCK_MONOTONIC_RAW behave in the same manner.
Is this expected behavior? Does the kernel run on a separate monotonic clock?
Is there any other way to get wakeup/suspend times that correlate to monotonic time?
The ultimate goal is to use this information to help graph the contents of wakeup_sources over time, with a particular focus on activity immediately after waking. Though, if the kmsg timestamps are "incorrect", then the wakeup_sources ones probably are too.
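One thing worth checking from userspace: kernel printk timestamps come from the kernel's local_clock(), a fast scheduler clock that is not guaranteed to stay in lockstep with CLOCK_MONOTONIC, which would explain a roughly constant offset. Also, on Linux CLOCK_MONOTONIC stops during suspend while CLOCK_BOOTTIME keeps counting, so BOOTTIME is usually the better clock for correlating suspend/resume events. A sketch comparing the relevant clocks (the helper name is my own; Linux-only, CLOCK_BOOTTIME needs Python 3.7+):

```python
import time

def clock_snapshot():
    """Sample the clocks relevant to correlating kmsg stamps to wall time."""
    mono = time.clock_gettime(time.CLOCK_MONOTONIC)
    boot = time.clock_gettime(time.CLOCK_BOOTTIME)
    real = time.clock_gettime(time.CLOCK_REALTIME)
    return {
        'monotonic': mono,
        'boottime': boot,
        'suspended': boot - mono,        # total time spent in suspend (roughly)
        'real_minus_mono': real - mono,  # anchor for mapping monotonic -> wall time
    }

snap = clock_snapshot()
```

Logging real_minus_mono periodically would show whether the anchor itself jumps across suspend, independently of what kmsg claims.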

USB keypad - Not getting same scan code in android

I have a USB keypad with 0 to 9,*,#,+,-,CALL,CALLEND keys and I am using it with an Android board.
With the default android Generic.kl file, this keypad provides proper output for each key pressed (checked in a TextBox and this application).
The scan codes are as below, as per the application mentioned above:
CALL - META_SHIFT_ON | META_SHIFT_RIGHT_ON - scanCode:48 keyCode:30
KEYCODE_B
ENDCALL - META_SHIFT_ON | META_SHIFT_RIGHT_ON - scanCode:30 keyCode:29
KEYCODE_A
STAR - META_SHIFT_ON | META_SHIFT_RIGHT_ON - scanCode:9 keyCode:15
KEYCODE_8
POUND - META_SHIFT_ON | META_SHIFT_RIGHT_ON - scanCode:4 keyCode:10
KEYCODE_3
I need to remap it, and my custom .kl content is as follows,
key 2 1
key 3 2
key 4 3
key 5 4
key 6 5
key 7 6
key 8 7
key 9 8
key 10 9
key 11 0
key 12 VOLUME_DOWN
key 78 VOLUME_UP
key 30 ENDCALL
key 48 CALL
I have put it in /system/usr/keylayout/
Now with this change, when I check the scan codes with the same test application, I get scanCode:54 for the ENDCALL button, which was previously 30.
I have the following questions about this behavior:
My understanding of scan codes is that they are hardware specific, so the same scan code should be provided every time, whatever the software/host is - i.e. the scan code for a given key should not change. That is not what is happening here.
I have also tried adding a .kcm file in /system/usr/keychars/, but the behaviour is the same with or without it. Do I need to use a .kcm file?
Yes, AFAIK scan codes are specific to the firmware residing in the hardware, i.e. for keypad/keyboard devices the scan code is provided by the hardware, and scan codes are unique so that keys can be distinguished.
In Android, if you don't want to alter the character map of your device, you don't need a character map file (.kcm). The job of a character map file is to map key codes to human-readable characters. If you don't provide a .kcm file for your device, the Generic.kcm file will be used for character mapping.
You can also refer to links 1 & 2 for more information.
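To illustrate what a .kl file expresses, a minimal sketch that parses layout lines into a scancode-to-keycode mapping. Android's real parser (KeyLayoutMap) also handles comments, flags, axes, and hex scan codes, which this sketch ignores:

```python
def parse_kl(text):
    """Parse `key <scancode> <KEYCODE> [flags...]` lines from a .kl file."""
    mapping = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) >= 3 and parts[0] == 'key':
            mapping[int(parts[1])] = parts[2]
    return mapping

# A few lines from the custom layout above
layout = parse_kl("""
key 12 VOLUME_DOWN
key 78 VOLUME_UP
key 30 ENDCALL
key 48 CALL
""")
```

Note that len(parts) >= 3 skips lines like "key 2 1" only if malformed; the layout above maps scan code 30 to ENDCALL, which is exactly the binding the question is testing.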

Android Touchscreen IDC

I'm struggling with the calibration of a touchscreen on the Android platform.
It is a USB single-touch touchscreen, vendor 0dfc and product 0001, as checked with dmesg:
<6>[ 4118.091541] input: USB Touchscreen 0dfc:0001 as /devices/platform/usb20_host/usb2/2-1/2-1.3/2-1.3:1.0/input/input23
I'm pushing the Vendor_0dfc_Product_0001.idc file to /data/system/devices/idc/ (following the documentation from the Android source - IDC).
I got the touch device with all requirements for single touch events:
root#android:/ # getevent -il /dev/input/event3
add device 1: /dev/input/event3
  bus:      0003
  vendor    0dfc
  product   0001
  version   0202
  name:     "USB Touchscreen 0dfc:0001"
  location: "usb-usb20_host-1.3/input0"
  id:       ""
  version:  1.0.1
  events:
    KEY (0001): BTN_TOUCH
    ABS (0003): ABS_X : value 540, min 0, max 32767, fuzz 0, flat 0, resolution 0
                ABS_Y : value 289, min 0, max 32767, fuzz 0, flat 0, resolution 0
  input props:
    <none>
I also enabled the Pointer Location option from Developer options (Android settings) in order to debug this stage of calibration.
Setup 1
touch.deviceType = touchScreen
With this setup (1), all gestures on the touchscreen take place at the top-left corner - the pointer moves just a few pixels left/right/up/down no matter the gesture (swipe). The whole touchscreen generates events, but all gestures are reversed: when I swipe left the pointer goes right; when I swipe up, the pointer goes down.
Setup 2
touch.deviceType = pointer
touch.gestureMode = pointer
With this setup (2), as expected, a pointer is shown, placed at the position where the last pointer device (mouse) left it. All gestures on the touchscreen (no matter the swipe size) keep behaving like setup 1 - the pointer moves only a few pixels per swipe event, and with reversed axes.
Setup 3
touch.deviceType = pointer
touch.gestureMode = spots
With this setup (3) the result is the same as in setup 2. I only did this to prove that the IDC file is being interpreted correctly.
At this stage, as you can see, I have a working IDC file (setup 1) that requires calibration for this touch device.
I tried a lot of combinations from other IDC files (internet samples) and from the Android source - IDC - and not a single other property took effect: raw.*, output.*, touch.size.*.
Does anyone know how to properly calibrate a touchscreen in Android and could guide me through the process?
Same here, but my calibration app didn't do anything.
After a while, reading /system/etc/init.sh, I found the following:
mkdir -p /data/misc/tscal
touch /data/misc/tscal/pointercal
chown 1000.1000 /data/misc/tscal /data/misc/tscal/*
chmod 775 /data/misc/tscal
chmod 664 /data/misc/tscal/pointercal
Just run those commands manually, reboot, and start the calibration app.
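As an aside on what the calibration ultimately has to compute: mapping the raw ABS range reported by getevent (0-32767 on this device) to screen pixels is a linear scale, optionally inverted per axis, which matches the reversed movement seen in setup 1. An illustration only, with a made-up screen size:

```python
RAW_MAX = 32767  # max ABS_X/ABS_Y value reported by this touchscreen

def raw_to_screen(raw_x, raw_y, screen_w, screen_h,
                  invert_x=False, invert_y=False):
    """Linearly scale raw coordinates to pixels, optionally flipping an axis."""
    x = raw_x * (screen_w - 1) // RAW_MAX
    y = raw_y * (screen_h - 1) // RAW_MAX
    if invert_x:
        x = (screen_w - 1) - x
    if invert_y:
        y = (screen_h - 1) - y
    return x, y
```

Whether this scaling is applied by the kernel driver, by a pointercal file, or by the input pipeline depends on the platform; the sketch only shows the arithmetic involved.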
