Is it possible to hide files in Android by transferring/moving them to a location or sector (like a root folder or something) that other apps don't have access to (via adb or Termux or something)?
I mention adb and Termux because I've seen them used to perform actions like uninstalling system apps from the device. And if possible, I don't want to root my device.
One humble request: I don't know the ABCs of app building/compiling; the most I can do is execute commands in adb/Termux. So if you paste any code, please also mention what to do with it.
I have tried:
Putting a dot at the start of a file name is a very old method and everyone knows about it. Encryption and decryption take too much time, and I don't have that much important data anyway; I just want to hide it from direct access so that most people can't find it by normal methods.
Thank you very much
Firstly, sorry if any of the terminology below is incorrect. I'm new to the Frida/Android investigation game and mostly just doing this to improve my understanding at the moment.
I have an APK which I am analysing. I can see through the usual decompilation that certain aspects are obfuscated, others appear to be encrypted, and some calls appear to disappear into nothing. After analysing the traffic by hooking the crypto libraries, I can see that it downloads a file starting with PK.
To investigate further, I've written the ZIP to disk and extracted it. Inside is a classes.dex file that I believe contains some of the hidden content I've been searching for to close the gaps I keep finding.
Unfortunately, that is where things currently end. I can see the new functions and classes, but when I try to hook them for manipulation (bypassing emulator detection is a key thing in there), Frida complains because the function/class doesn't exist at the outset. It appears to me that these classes are somehow loaded while the app is running, after startup.
What I am looking at doing next is intercepting the download and replacing it with a dex file I modify as required, which should then allow me to continue with my analysis, but I wondered if there is an easier way. Is it possible to target these classes that are not loaded immediately, and if so, can someone point me in the direction I need to look to investigate how? I am presuming there is a built-in Android function that has to do this import which I could look for, but I'm unsure where to start.
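For reference, my working assumption (which may be wrong) is that the app pulls the downloaded classes.dex in with something like dalvik.system.DexClassLoader. A made-up Java sketch of that pattern, just to show the kind of call I think I should be looking for (the file location and class name are invented):
import java.io.File;
import dalvik.system.DexClassLoader;
// Hypothetical sketch only -- not taken from the APK I'm analysing.
// Loads a downloaded dex at runtime and resolves a class from it.
Class<?> loadDownloadedDex(android.content.Context context) throws Exception {
    File dexFile = new File(context.getFilesDir(), "downloaded_classes.dex"); // assumed location
    DexClassLoader loader = new DexClassLoader(
            dexFile.getAbsolutePath(),            // path to the downloaded dex
            context.getCodeCacheDir().getPath(),  // directory for optimized output
            null,                                 // no extra native library path
            context.getClassLoader());            // parent class loader
    return loader.loadClass("com.example.HiddenChecks"); // invented class name
}
If that is indeed the mechanism, I'm guessing the class loader (or its loadClass call) is what I'd need to hook once it exists, but I'd welcome correction.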
Any help/direction/guidance would be appreciated. To confirm, I'm wondering if the below flow is possible:
Start the application, hooking crypto libraries
Wait for the new classes.dex file to be downloaded
Hook into these classes on loading
Automagically bypass the newly loaded classes
Thanks.
I'd like to get some numerical data from an app, but the data is not stored in files such as a DB. I know there are memory-hacking apps for changing in-game values, although I do not know how they work.
I am looking for similar features, but I don't need to change anything.
The app I am trying to write just reads some data from a specific app and does some background calculation based on it. If this is not possible, I would need to get the information by reading the screen (for example, getting pixel colours), but that seems a very cumbersome way to obtain a lot of data.
Is there a way of achieving this?
Thanks.
EDIT: I'd assume I would need root permission for this?
Yes, you would need root permission. Additionally, your users must have a fully rooted device with e.g. SuperSU or another modern su app that can lift most SELinux restrictions. There may also be conflicts with KNOX and other similar systems, but I am not really knowledgeable about those.
You would need to attach your process as a debugger to the target application and locate the necessary data by scanning its memory. This can be done in multiple ways; the best reference implementation to look at is scanmem.
The code performing the actual deed that requires root rights -- reading/writing the target process's memory -- would reside in a native executable run via su. You'd have to write some code to communicate with that executable (probably via its stdin/stdout or something like that).
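As a rough illustration of that communication from the Java side (a sketch only; /data/local/tmp/memdump and its arguments are placeholder names for your own helper binary, not an existing tool):
import java.io.BufferedReader;
import java.io.DataOutputStream;
import java.io.InputStreamReader;
// Sketch: start a root shell and drive a hypothetical native helper over its stdin/stdout.
String readRemoteValue() throws Exception {
    Process su = Runtime.getRuntime().exec("su");
    DataOutputStream toSu = new DataOutputStream(su.getOutputStream());
    BufferedReader fromSu = new BufferedReader(new InputStreamReader(su.getInputStream()));
    toSu.writeBytes("/data/local/tmp/memdump --pid 1234 --addr 0x12c00000 --len 4\n");
    toSu.flush();
    String value = fromSu.readLine(); // whatever the helper prints back
    toSu.writeBytes("exit\n");
    toSu.flush();
    su.waitFor();
    return value;
}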
You will also have to write additional code to parse the memory layout of the target application yourself.
Alternatively, you may prefer to inject a small module into the memory of the target application and/or have the app itself load a Dex file of your making (especially handy if your target data is stored in Java memory). This approach has the benefit of minimizing interaction with the memory layout of the virtual machine, but you still have to initiate loading of the initial Dex file. Once the Dex file is loaded, you can do the rest in Java code, using the good old reflection API. If you go this route, (decently supported!) code for injecting executable snippets into the memory of a Linux process can be found in the compel library, developed as part of the CRIU project[1].
Two Android processes cannot share memory and communicate with each other directly. So to communicate, objects have to be decomposed into primitives (marshalling) and transferred across process boundaries.
To do this marshalling, one has to write a lot of complicated code, hence Android handles it for us with AIDL (Android Interface Definition Language).
Since no more details can be found in the OP, I would recommend reading/searching with the keyword "AIDL"; that will lead you to concrete solutions.
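For a flavour of the client side, here is a hypothetical Java sketch. IDataService and getLatestValue() are made-up names for an AIDL interface that the data-providing app would have to expose (so this only works if that app cooperates); the Stub/Proxy classes are generated by the build tools from the corresponding .aidl file.
import android.content.ComponentName;
import android.content.Context;
import android.content.Intent;
import android.content.ServiceConnection;
import android.os.IBinder;
import android.os.RemoteException;
// Sketch: bind to a (hypothetical) exported AIDL service and call across processes.
ServiceConnection connection = new ServiceConnection() {
    @Override
    public void onServiceConnected(ComponentName name, IBinder binder) {
        IDataService service = IDataService.Stub.asInterface(binder); // generated from the .aidl
        try {
            double value = service.getLatestValue("score"); // marshalled across the process boundary
        } catch (RemoteException e) {
            // the remote process died or the call failed
        }
    }
    @Override
    public void onServiceDisconnected(ComponentName name) { }
};
Intent intent = new Intent("com.example.data.BIND"); // made-up action exported by the other app
intent.setPackage("com.example.targetapp");          // made-up package name
context.bindService(intent, connection, Context.BIND_AUTO_CREATE);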
I found an Android app named Super Erase that deletes files and folders permanently from an Android device so that the deleted files can't be recovered anymore (here is the application I am talking about). I was wondering how to do that; I know it is made with Android Studio. I tried the regular way to delete a file, file.delete(), but the file can still be recovered. Can I have any help?
For starters, secure file deletion on flash media is a complex problem, with no quick and easy answers. The paper Reliably Erasing Data From Flash-Based Solid State Drives gives a good overview of the problems, the potential solutions, and their limitations. They conclude that
For sanitizing entire disks, ... software techniques work most, but not all, of the time. We found that none of the available software techniques for sanitizing individual files were effective. [emphasis added]
NIST 800-88 also has a good overview of the technology trends contributing to the problem, along with some minimum recommendations (appendix A) for Android devices. However, they tend either to be whole-disk erasure (factory reset) or to rely on cryptographic erasure (CE), rather than being general file-erasure methods.
But all is not lost. Even if you can't sanitize individual files, you could hope to wipe all the unallocated space after deleting files. The article Secure Deletion on Log-structured File Systems (Reardon, et al.) describes a fairly promising way to do that in user-mode software. Android's internal memory uses (always?) a log-structured file system.
This paper's "purging" method does not require kernel-level access, and doesn't seem to require any native code on Android. (Note that the term "purging" is used a little differently in documents like NIST 800-88.) The basic idea is to delete all the sensitive data, then fill in the remaining space on the drive with a junk data file, and finally delete the junk data file.
While that takes more time and effort than just overwriting the deleted files themselves (several times in different patterns), it seems to be very robust even when you have to deal with the possibility of wear leveling and a log-structured FS.
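To make the idea concrete, here is a rough, untested Java sketch of that basic purging step (the file name, buffer size, and single random pattern are arbitrary choices of mine, not taken from the paper):
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.security.SecureRandom;
// Sketch: after deleting the sensitive files, fill the free space on the same
// partition with junk data, sync it to storage, then delete the junk file.
void purgeFreeSpace(File directoryOnSamePartition) throws IOException {
    File junk = new File(directoryOnSamePartition, "purge.bin"); // arbitrary name
    byte[] buffer = new byte[1 << 20];                           // 1 MiB of junk per write
    new SecureRandom().nextBytes(buffer);
    try (FileOutputStream out = new FileOutputStream(junk)) {
        try {
            while (true) {
                out.write(buffer);          // keep writing until the partition is full
            }
        } catch (IOException diskFull) {
            // expected: no space left on device
        }
        out.getFD().sync();                 // force the junk data out to flash
    } finally {
        junk.delete();                      // free the space again
    }
}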
Caveat and Further Measures
The main caveat for me is about the conditions mentioned by Reardon et al. in the above paper:
Purging will work for any log-structured file system provided both the user's disk quota is unlimited and the file system always performs garbage collection to reclaim even a single chunk of memory before declaring that the drive is unwritable. [emphasis mine]
The second condition seems pretty likely to be fulfilled, but I don't know about the first one. Does Android (or some manufacturers' versions of it) enforce quotas on disk space used by apps? I have not found any info about user quotas, but there are quotas for other niches like browser persistent storage. Does Android reserve some space for system use, or for each app's caching, for example, that can't be used for other things? If so, it should help (albeit with no guarantees) if we begin purging immediately after the sensitive files are deleted, so there is little time for other filesystem activity to stake a claim to the recently freed space.
Maybe we could mitigate these risks by cyclical purging:
Determine the remaining space available (call it S) on the relevant partition, e.g. using File.getUsableSpace()
Write a series of files to the partition; each one is, say, 20% of the initial S (subject to file size limits).
When we run out of space, delete the first couple of files that we created, then write another file or two as space allows.
Repeat that last step a few times, until you've reached a threshold you're satisfied with. Maybe up to the point where you've written 2*S worth of filler files; tweak that number to balance speed against thoroughness. How much you actually need to do this would be an area for more research.
Delete the remaining filler files.
The idea with cyclical purging is that if we run out of quota to overwrite all free space, deleting the filler files just written will free up more quota; and then the way log-structured filesystems allocate new blocks should allow us to continue overwriting the remaining blocks of free space in sequence, rather than rewriting the same space again.
I'm implementing this method in a test app, and will post it when it's working.
What about FAT-formatted microSD cards?
Would the same methods work on external storage or microSD cards? FAT is block-structured, so would the purge method apply to FAT-formatted SD cards?
On most contemporary flash memory devices, such as CompactFlash and Secure Digital cards, [wear leveling] techniques are implemented in hardware by a built-in microcontroller. On such devices, wear leveling is transparent and most conventional file systems can be used on them as-is. (https://en.wikipedia.org/wiki/Wear_leveling)
...which suggests to me that even on a FAT-formatted SD card, wear leveling means that the traditional Gutmann methods would not work (see his "Even Further Epilogue") and that a method like "purging" would be necessary.
Whether purging is sufficient depends on your security parameters. Wear leveling seems to imply that a block could potentially be "retired" at any time, in which case there is no way to erase it without bypassing the microcontroller's wear leveling. AFAIK this can't be done in software, even if you had kernel privileges; you'd have to design special hardware.
However, "retiring" a bad block should be a fairly rare event relative to the life of the media, so for many scenarios, a purging method would be secure enough.
Erasing the traces
Note that Gutmann's method has an important strength, namely, to erase possible traces of old data on the storage media that could remain even after a block is overwritten with new data. These traces could theoretically be read by a determined attacker with lots of resources. A truly thorough approach to secure deletion would augment a method like Gutmann's with purging, rather than replacing it.
However, on log-structured and wear-leveled filesystems, the much bigger problem is trying to ensure that the sensitive blocks get overwritten at all.
Do existing apps use these methods?
I don't have any inside information about apps in the app store, but looking at reviews for apps like iShredder would suggest that at best, they use methods like Reardon's "purging." For example, they can take several hours to do a single-pass wipe of 32GB of free space.
Also note limitations: The reviews on some of the secure deletion apps say that in some cases, the "deleted" files were still accessible after running the "secure delete" operation. Of course we take these reviews with a grain of salt -- there is a possibility of user error. Nevertheless, I wouldn't assume these apps are effective, without good testing.
iShredder 4 Enterprise helpfully names some of the algorithms they use, in their app description:
Depending on the edition, the iShredder™ package comes with deletion algorithms such as DoD 5220.22-M E, US Air Force (AFSSI-5020), US Army AR380-19, DoD 5220.22-M ECE, BSI/VS-ITR TL-03423 Standard, BSI-VS-2011, NATO Standard, Gutmann, HMG InfoSec No.5, DoD 5220.22 SSD and others.
This impressive-sounding list gives us some pointers for further research. It's not clear how these methods are used -- singly or in combination -- and in particular whether any of them are represented as being effective on their own. We know that Gutmann's method would not be. Similarly, DoD 5220.22-M, AFSSI-5020, AR380-19, and Infosec No. 5 specify Gutmann-like procedures for overwriting sectors on hard drives, which would not be effective for flash-based media. In fact, "The U.S. Department of Defense no longer references DoD 5220.22-M as a method for secure HDD erasure", let alone for flash-based media, so this reference is misleading to the uninformed. (The DoD is said to reference NIST 800-88 instead.) "DoD 5220.22 SSD" sounds promising, but I can't find any informative references for it. I haven't chased down the other algorithms listed, but the results so far are not encouraging.
When you delete a file with standard methods like file.delete() or Runtime.exec("rm -f my_file"), the only job the kernel does is remove information about the file from auxiliary filesystem structures. The storage sectors that contain the actual data remain untouched, and because of this, recovery is possible.
This gives an idea of how we can try to remove a file entirely: we should somehow erase all of its sectors. The easiest approach is to rewrite the whole file content with random data a few times. After each pass we must flush the file buffers to ensure that the new content is written to storage. All existing methods of secure file removal revolve around this principle; for example, this one. Note that there is no universal method that works well across all storage types and filesystems, so you should experiment and try to implement some of the existing approaches or design your own. E.g., you can start with the following:
Overwrite and flush the file 10 times with random data (use FileOutputStream methods). Note: don't use zeros or other low-entropy data, because some filesystems may optimize such sparse files and leave some sectors with the original content. You can use /dev/urandom as the source of random data (it is a virtual file and it is endless); it gives better results and works faster than the well-known Random class.
Rename and move the file 10 times, choosing new file names randomly.
Then truncate the file with FileChannel.truncate().
And finally remove the file with File.delete().
Of course, you can write all the logic in native code; it may even be somewhat easier than in Java. The described algorithm is just an example; try something along those lines.
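As a rough illustration, here is an untested Java sketch of those steps. It uses RandomAccessFile in "rws" mode (rather than FileOutputStream) so each pass is flushed to storage; a real implementation should check the /dev/urandom read counts and add proper error handling.
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.RandomAccessFile;
// Sketch of the steps above: overwrite with /dev/urandom data 10 times (flushed each
// pass), rename the file a few times, truncate it, then delete it.
void wipeFile(File file) throws IOException {
    long length = file.length();
    byte[] buffer = new byte[64 * 1024];
    try (FileInputStream urandom = new FileInputStream("/dev/urandom")) {
        for (int pass = 0; pass < 10; pass++) {
            try (RandomAccessFile raf = new RandomAccessFile(file, "rws")) { // "rws" flushes data + metadata
                long written = 0;
                while (written < length) {
                    int chunk = (int) Math.min(buffer.length, length - written);
                    urandom.read(buffer, 0, chunk); // random bytes from the kernel
                    raf.write(buffer, 0, chunk);
                    written += chunk;
                }
            }
        }
    }
    for (int i = 0; i < 10; i++) { // rename/move the file a few times
        File renamed = new File(file.getParentFile(), "wipe_" + System.nanoTime());
        if (file.renameTo(renamed)) {
            file = renamed;
        }
    }
    try (RandomAccessFile raf = new RandomAccessFile(file, "rw")) {
        raf.getChannel().truncate(0); // drop the file to zero length
    }
    file.delete(); // finally remove it
}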
The standard filesystem API won't give you a simple function call for that.
You will have to use the underlying native API for file I/O. Although I have never used it, there's a library for that:
https://github.com/johanneslumpe/react-native-fs
There are two answers to this question.
First, to answer the direct question of how some of these apps might do secure single-file deletion: what you do is actually open the file and replace the contents with zeros many times. The method sounds stupid, but I have worked with filesystem-level encryption on Android in the past and found that the above holds true for many secure file delete solutions out there. For seemingly compliant security, you can repeat writing zeros 7 times (or whatever the NIST standards specify for your hardware type).
// Overwrite the file's contents in place with zero bytes of the same length
// (uses java.nio.file.Files and StandardOpenOption).
byte[] zeros = new byte[(int) Files.size(path)];
Files.write(path, zeros, StandardOpenOption.WRITE, StandardOpenOption.SYNC);
The right answer to this question is, however, different. On modern SSDs and operating systems, it is insecure to rely on deleting single files, so these apps don't really offer a compelling product. Modern operating systems store fragments of the file in different places, and it is possible that even after you have zeroed out the most recent version of the file block by block and overwritten all metadata, a fragment from an older version of the file is left over in another part of the drive.
The only secure way to delete sensitive content from a disk is to zero out the entire disk multiple times before discarding the disk.
LarsH's answer about wiping all unallocated space after deleting files is compelling, but perhaps impractical. If you simply want to securely delete files so no one can scan the disk to recover them, then a better solution is full-disk encryption. This was in fact the entire appeal of full-disk encryption; it is why Apple stopped supporting secure file delete in macOS and iOS and switched to full-disk encryption as the default on all iPhones. Android phones have full-disk encryption as well now.
EDIT:
If you are looking for a true solution for a customer, your best bet is to use single-file encryption. Once you destroy the key, which only your app would know, there is no way to decrypt the file even if someone finds it on the disk.
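As an illustrative sketch of that approach using the Android Keystore (the key alias and parameters are my own choices, and a real design would also need to store the GCM IV alongside the ciphertext and handle exceptions properly):
import java.security.KeyStore;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import android.security.keystore.KeyGenParameterSpec;
import android.security.keystore.KeyProperties;
// Sketch: encrypt file contents with a key that lives only in the Android Keystore.
// Deleting the key later renders the ciphertext left on disk useless ("crypto-erase").
static final String KEY_ALIAS = "file_key"; // illustrative alias
SecretKey createKey() throws Exception {
    KeyGenerator kg = KeyGenerator.getInstance(
            KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore");
    kg.init(new KeyGenParameterSpec.Builder(KEY_ALIAS,
            KeyProperties.PURPOSE_ENCRYPT | KeyProperties.PURPOSE_DECRYPT)
            .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
            .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
            .build());
    return kg.generateKey();
}
byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
    Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
    cipher.init(Cipher.ENCRYPT_MODE, key);
    // Persist cipher.getIV() next to the ciphertext; it is needed for decryption.
    return cipher.doFinal(plaintext);
}
void destroyKey() throws Exception {
    KeyStore ks = KeyStore.getInstance("AndroidKeyStore");
    ks.load(null);
    ks.deleteEntry(KEY_ALIAS); // after this, the encrypted file can no longer be decrypted
}
The point is not that the ciphertext itself is erased, but that without the key it is unreadable, which is the same property full-disk encryption relies on.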
There exists no real solution for deleting files securely on SSDs. You can only give a false sense of security to non-technical people who still remember the old HDD days.
I am working on an Android project that needs to create and write files rapidly. I am using the NDK for this purpose and found that the fopen() call takes an unpredictable amount of time, from a minimum of ~30 ms up to several seconds, when running on the main thread. After opening the file, I then need to compute some results, store them in the opened file, and then close it.
I am trying to move this to another thread, but I am not sure whether that helps at all, or how to handle the scheduling issue if it does. I am also thinking about opening many of those file descriptors at the beginning of the application and maintaining a pool of them throughout the application. Can anyone point me in the right direction?
It sounds like you are trying to go very low level.
Have you considered using the open(), write(), and close() system calls? The C library fopen() calls do some nice things for you, such as buffering, but the system calls are likely to be faster. You will have to profile, but I think you will see lower latency.
#include <fcntl.h>    /* open() */
#include <unistd.h>   /* write(), close() */
int fd = open("myfilepath", O_WRONLY | O_CREAT, 0644); /* O_CREAT requires a mode argument */
write(fd, myData, myDataSize);
close(fd);
You will find more info here.
http://www.tutorialspoint.com/unix_system_calls/open.htm