By default, ART relocates boot.art and boot.oat from /system/framework/arm/ to /data/dalvik-cache/arm/.
Why does ART prefer to use the cached copies rather than the ones under /system?
Why not just use the boot image under /system?
My guess: in Android L and M, boot.art and boot.oat are compiled against an absolute base address. Sometimes ART cannot map them at that exact base address, so it maps them at a different one, and therefore it has to relocate the addresses of the classes inside boot.art.
See the code: ./runtime/gc/space/image_space.cc RelocateImage
std::string base_offset_arg("--base-offset-delta=");
StringAppendF(&base_offset_arg, "%d", ChooseRelocationOffsetDelta(ART_BASE_ADDRESS_MIN_DELTA, ART_BASE_ADDRESS_MAX_DELTA));
The base-offset-delta is then used to compute the new addresses of the classes in boot.art.
I am developing an application using Android OpenCV.
The app offers two operations:
1. The frame read from the camera is passed to JNI using the native object address from Mat.getNativeObjAddr(), and the new image is returned through JavaCameraView's onCameraFrame() callback.
2. It reads a video clip from storage, processes each frame the same way as #1, and returns the resulting image via the onCameraFrame() callback.
First, operation #1 is implemented as simply as the following and works normally:
@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame)
{
    if (inputFrame != null) {
        Detect(inputFrame.rgba().getNativeObjAddr(), boardImage.getNativeObjAddr());
    }
    return boardImage;
}
However, a problem occurred with the second operation.
As far as I know, files inside Java storage are not directly readable by JNI.
I have already tried FFmpegMediaPlayer and MediaMetadataRetriever from a Google search. However, the getFrameAtTime() function provided by MediaMetadataRetriever took 170 ms on average to grab a bitmap of a specific frame of a 1920x1080 video. What I have to develop must show the video results in real time at 30 fps. In operation #1, the native function Detect() takes about 2 ms to process one frame.
For these reasons, I want to do this:
Java sends a video's path (e.g. /storage/emulated/0/download/video.mp4) to JNI, native functions process the video one frame at a time, and the resulting image is displayed through onCameraFrame().
Is there a proper way to do this? I look forward to your reply. Thank you!
I'm trying to hook the SSL_do_handshake function from libssl.so on a 64-bit ARM Android device. So far I have:
the base address of libssl.so (found in /proc/<pid>/maps)
the offset of SSL_do_handshake (found in libssl.so)
the absolute address of SSL_do_handshake (address_of_libssl + offset_of_SSL_do_handshake)
What I'm doing is saving the instructions at the absolute address of SSL_do_handshake (for recovery) and overwriting them with my own instructions (for the jump):
LDR X9, #8
BR X9
address for jump: 7f75a5b890
In detail, what I'm writing to memory is 49000058 20011fd6 90b8a5757f. If I look at memory after the overwrite, I find my instructions and the address. But if I then call SSL_do_handshake, it crashes before the jump.
I did the same on ARMv7 using Thumb-2 (32-bit encoding) with the same approach but different instructions:
LDR PC, [PC, #0]
address for jump (f36e240d)
In memory: dff800f0 0d246ef3, and it works.
Any idea of what I'm doing wrong?
What file type is being used to embed the images in AndroidEmoji-htc.ttf? direct download: AndroidEmoji-htc.ttf
Images can be extracted from AppleColorEmoji.ttf easily because the PNG headers can be found using a hex editor. This ruby script can extract them. The algorithm is described here.
Sample of file in hex editor:
0706 2627 2626 2726 2627 2626 2727 2727 ..&'&&'&&'&&''''
2726 2627 2626 2726 2627 2636 3736 3637 '&&'&&'&&'&67667
3636 3736 3637 3737 3737 3636 3736 3637 6676677777667667
3636 0131 3636 3736 3635 3527 2626 3132 66.16676655'&&12
2627 2626 2726 2627 2721 2115 1533 3232 &'&&'&&''!!..322
1716 1617 1634 1110 0607 0606 0706 0623 .....4.........#
2315 1533 3335 3523 2226 2726 2627 3434 #..3355#"&'&&'44
3535 3332 3233 1616 1716 1617 1616 1716 553223..........
1617 1616 1716 1617 1616 1716 3237 3236 ............2726
3737 3736 3637 3636 3737 2323 0706 0607 7776676677##....
0606 0706 0627 2226 2726 2627 2626 2726 .....'"&'&&'&&'&
2627 2626 3130 2627 2626 2726 2627 2626 &'&&10&'&&'&&'&&
2727 3535 3736 3637 3636 0131 3232 3332 ''55766766.12232
1617 1616 1716 3017 1616 1516 0607 0606 ......0.........
0706 0607 0606 2323 3534 3437 3636 3736 ......##54476676
3608 bf05 020b 1c2a 0d09 0b0d 66a4 b72c 6......*....f..,
0202 0233 8a8d 9c9c 8d88 3302 0202 022a ...3......3....*
9a8d 631a 3c05 090b 3e1a 1a3e 0b09 0b0d ..c.<...>..>....
65a5 b918 1616 1d67 6572 4028 2a0f 251f e......ger#(*.%.
7365 691b 1814 1a98 8d65 183b 0409 0d2c sei......e.;...,
1a0b 0702 fb8b 1e1d 423e 3f44 2a0d 3a3f ........B>?D*.:?
034f 4f42 4435 3203 1e1b 040d 140d 0704 .OOBD52.........
0303 040d 1b0d 041b 1e03 3235 4442 4f4f ..........25DBOO
033f 3a0d 2a55 5265 1f21 3d40 4044 2a0d .?:.*URe.!=##D*.
3940 024d 4f44 4235 3502 1d1c 050d 1a0d 9#.MODB55.......
0502 0207 040d 140d 051c 1d02 3535 4244 ............55BD
4f4d 0240 390d 2a56 5301 c806 0d0b 2611 OM.#9.*VS.....&.
0b06 0503 120b 122e 1835 0837 33fe d9fe .........5.73...
dc2e 2c02 0b12 0407 0b02 0306 071a 1006 ..,.............
2a23 f8fb 2a30 070f 1d0d 022a 1818 0914 *#..*0.....*....
Update 6/18/2014:
At @naXa's suggestion, opening the file in FontForge version 20120731-ML (the newest version at the time) gave this error:
The following table(s) in the font have been ignored by FontForge
Ignoring 'dcmj'
In GID1 the advance width (2252) is greater than the stated maximum (2048)
Subsequent errors will not be reported.
Bad lookup table: format=6, first=65535 total glyphs in font=894
Somewhat expected, because color emoji in TTFs are to this day encoded in proprietary ways. The fact that I can even see black-and-white emoji images in FontForge is a huge success, because it means the TTF is standard for the most part. TTFs are not supposed to store color information, as far as I know.
The key is probably in accessing the data in the 'dcmj' table, or wherever it points. Researching FontForge, I found that BMP is a common image format for TTFs, so I'm going to try to modify the Ruby script using those assumptions and report back!
Update 6/18/2014:
I found what appear to be BMP headers (source1, source2), starting with 424D, using a hex editor, but the header doesn't seem valid.
Next I would try:
Parsing the TTF to look at the data in each "glyph" to see if I can find more patterns. I imagine the TTF will indicate the start and end of the image data.
Looking into the HTC Android APK to see how they pull and display emoji from the TTF.
I've run out of time on this for now; if anyone has any other suggestions, I'm very interested.
Update 6/20/2014:
Double-clicking on the glyph per @naXa's suggestion and exporting as any of the formats gives me non-color icons of any size, but still does not reveal the color bitmap emojis I was looking for.
I finally went down to the store to look at an HTC phone and saw, to my surprise, that they are using Apple's emoji font, as seen in the messaging app:
I am almost certain these are stored in the HTC font provided above, but this conclusion has left extracting these images far less desirable.
However, it would still be cool to know, as a proof of concept, how to extract the color emojis. :)
EDIT: As Jasper pointed out, HTC does in fact have a custom emoji set, as linked in his answer. The picture above was from a non-updated phone. Still need to figure out how to extract these emojis!
Unfortunately, I don't have an account, so let me start by apologizing for posting this as an answer instead of as a comment.
The picture you posted showing the Apple Color Emojis appears to come from a phone running an older version of Sense/Android, while the file you are referencing almost certainly comes from Sense 5-6 / Android 4.3-4.4. If you look at the grayscale emojis you were able to extract from the file, you'll notice that they don't actually match the picture you provided. They do, however, match this: http://assets.hardwarezone.com/img/2013/10/HTC_One_Max_Emoticons_Keyboard_jpg.jpg
This leads me to conclude that it is entirely possible there are no conventional bitmaps stored in the TTF; rather, there's some proprietary format they use to assign colors to different parts of each emoji.
EDIT: I tried directly copying the file over to my phone to see what would happen (I tried both replacing NotoColorFont.ttf and just copying the file over and referencing it in fallback_fonts.xml; there doesn't seem to be any difference). Screenshot here: http://imgur.com/OGyq6T2
As you can see, they show up without color, yet we already know that both the default Android emoji and Apple Color Emoji show up fine on Android devices, meaning HTC doesn't follow whatever standard Android and iOS use.
Tested on a Galaxy SII (i9100) running CyanogenMod 11 Milestone 8.
The emoji images in the AndroidEmoji-htc.ttf file are probably (since I don't have the font to test) stored in the same format as the standard Android emoji: Google's CBLC+CBDT OpenType tables.
You can disassemble/reassemble the font using ttx from FontTools (PyPI, GitHub) to confirm.
The direct answer to your question "What is the format?" has two options:
Uncompressed Color Bitmaps
The value ‘32’ of the bitDepth field of the bitmapSizeTable struct defined in the CBLC table identifies color bitmaps with 8-bit blue/green/red/alpha channels per pixel, encoded in that order for each pixel (referred to as BGRA from here on). The color channels represent pre-multiplied color and encode colors in the sRGB colorspace. For example, the color “full green with half translucency” is encoded as \x00\x80\x00\x80, and not \x00\xFF\x00\x80.
All imageFormat values defined in the EBDT / EBLC tables are valid for use with the CBDT / CBLC tables.
Compressed Color Bitmaps
Images for each individual glyph are stored as straight PNG data. Only the following chunks are allowed in such PNG data: IHDR, PLTE, tRNS, sRGB, IDAT, and IEND. If other chunks are present, the behavior is undefined. The image data shall be in the sRGB colorspace, regardless of color information that may be present in other chunks in the PNG data. The individual images must have the same size as expected by the table in the bitmap metrics.
As far as I know, emojis are stored in two different places: in the .ttf, to display in text-only fields (for example, quick previews of messages), and in images. Maybe you should dig in that direction?
I haven't looked at the Android emoji, but I managed to extract the iOS emoji by combining a few tools, since nothing on the net seems to do it 100%.
A hex editor is key; it's all I used...
iOS 5.0 used uint8 RGBA data stored as tuples.
iOS 5.1 changed to PNGs, and these are written contiguously.
iOS 6 combined both the iOS 5.0 and 5.1 formats: sets 1 and 2 were uint8 data, and set 3 (iPad, 96x96 px) used the optimized PNG format that Apple adopts, e.g. switching RGBA to BGRA... byte blitting, apparently...
iOS 7 stayed the same, as did iOS 8 through 8.2.
Hope that helps...
I want to capture the audio waveform from the audio buffer. I found that android.media.audiofx.Visualizer can do such a thing, but it only returns partial, low-quality audio content.
I found that android.media.audiofx.Visualizer ends up calling the function Visualizer_command(VISUALIZER_CMD_CAPTURE) in android-4.0/frameworks/base/media/libeffects/visualizer.
I found that the function Visualizer_process reduces the audio content to low quality. I want to rewrite Visualizer_process, and I want to find out who calls Visualizer_process, but I cannot find the caller in the Android source code. Can anyone help me?
Thanks very much!
The AudioFlinger::PlaybackThread::threadLoop calls AudioFlinger::EffectChain::process_l, which calls AudioFlinger::EffectModule::process, which finally calls the actual effect's process function.
As you can see in AudioFlinger::EffectModule::process, there's the call
int ret = (*mEffectInterface)->process(mEffectInterface,
&mConfig.inputCfg.buffer,
&mConfig.outputCfg.buffer);
mEffectInterface is an effect_handle_t, which is an effect_interface_s**. The effect_interface_s struct (defined here) contains a number of function pointers (process, command, ...). These are filled out with pointers to the actual effect's functions when the effect is loaded. The effects provide these pointers through a struct (in EffectVisualizer it's gVisualizerInterface).
Note that the exact location of these functions may differ between different Android releases. So if you're looking at Android 4.0 you might find some of them in AudioFlinger.cpp (or somewhere else).
We (http://www.mosync.com) have compiled our ARM recompiler with the Android NDK; it takes our internal byte code and generates ARM machine code. When executing recompiled code we see an enormous increase in performance, with one small exception: we can't use any Java Bitmap operations.
The native system uses a function which takes care of all the calls to the Java side that the recompiled code makes. On the Java (Dalvik) side we then have bindings to Android features. There are no problems while recompiling the code or when executing the machine code. The exact same source code works on Symbian and Windows Mobile 6.x, so the recompiler seems to generate correct ARM machine code.
Like I said, the problem we have is that we can't use Java Bitmap objects. We have verified that the parameters sent from the Java code are correct, and we have tried following the execution down into Android's own JNI systems. The problem is that we get an UnsupportedOperationException with "size must fit in 32 bits". The problem seems consistent from Android 1.5 to 2.3. We haven't tried the recompiler on any Android 3 devices.
Is this a bug which other people have encountered? I guess other developers have done similar things.
I found the message in dalvik_system_VMRuntime.c:
/*
* public native boolean trackExternalAllocation(long size)
*
* Asks the VM if <size> bytes can be allocated in an external heap.
* This information may be used to limit the amount of memory available
* to Dalvik threads. Returns false if the VM would rather that the caller
* did not allocate that much memory. If the call returns false, the VM
* will not update its internal counts.
*/
static void Dalvik_dalvik_system_VMRuntime_trackExternalAllocation(
const u4* args, JValue* pResult)
{
s8 longSize = GET_ARG_LONG(args, 1);
/* Fit in 32 bits. */
if (longSize < 0) {
dvmThrowException("Ljava/lang/IllegalArgumentException;",
"size must be positive");
RETURN_VOID();
} else if (longSize > INT_MAX) {
dvmThrowException("Ljava/lang/UnsupportedOperationException;",
"size must fit in 32 bits");
RETURN_VOID();
}
RETURN_BOOLEAN(dvmTrackExternalAllocation((size_t)longSize));
}
This method is called, for example, from GraphicsJNI::setJavaPixelRef:
size_t size = size64.get32();
jlong jsize = size; // the VM wants longs for the size
if (reportSizeToVM) {
// SkDebugf("-------------- inform VM we've allocated %d bytes\n", size);
bool r = env->CallBooleanMethod(gVMRuntime_singleton,
gVMRuntime_trackExternalAllocationMethodID,
jsize);
I would say it seems that the code you're calling is trying to allocate too big a size. If you show the actual Java call which fails, and the values of all the arguments you pass to it, it might be easier to find the reason.
I managed to find a workaround: when I wrap all the Bitmap.createBitmap calls inside Activity.runOnUiThread(), it works.