Using ExoPlayer Input stream with own encryption logic - android

I am trying to play an encrypted video (mp4) using my own decryption logic. Decrypting the whole file and then playing it takes too much time, because the decrypted file is too large to create and play. What I found is that ExoPlayer can play the video while decrypting it through an InputStream, but applying that is too difficult at my level. I have been stuck on this for two days, including one all-nighter, and still have no results, so I am asking for help here.
What I am looking for is a reference that could be helpful. I have to read and decrypt the file in 4096-byte blocks, and I do not know where that code should go.
The flow I have in mind to complete this feature is as follows.
1. Complete the ExoPlayer UI.
2. Encrypt the downloaded file using my encryption logic. (buffer size is 4096)
3. Read the file through an InputStream, decrypting it and playing it at the same time (streaming).
I can manage steps 1 and 2 somehow, but step 3 is very difficult for me. Do you have any specific code or explanation? If you know of anything, please help me out. Thank you.
try {
    ios = new FileInputStream(params[0]);
    fos = context.openFileOutput(params[1] + ".mp4", MODE_PRIVATE);
    ScatteringByteChannel sbc = ios.getChannel();
    GatheringByteChannel gbc = fos.getChannel();
    File file = new File(params[0]);
    fileLength = file.length();
    startTime = System.currentTimeMillis();
    int read = 0;
    readb = 0;
    ByteBuffer bb = ByteBuffer.allocate(4096);
    while ((read = sbc.read(bb)) != -1) {
        bb.flip();
        // Decrypt only the bytes actually read; the final block may be shorter than 4096.
        gbc.write(ByteBuffer.wrap(enDecryptVideo.combineByteArray(Arrays.copyOf(bb.array(), read))));
        bb.clear();
        readb += read;
        if (readb % (4096 * 1024 * 3) == 0) {
            publishProgress((int) (readb * 100 / fileLength));
        } else if (readb == fileLength) {
            publishProgress(101);
        }
    }
} catch (Exception e) {
    Log.e(TAG, "doInBackground: decryption failed", e);
} finally {
    // Close the streams even when an exception is thrown.
    try {
        if (ios != null) ios.close();
        if (fos != null) fos.close();
    } catch (IOException ignored) {
    }
    Log.d(TAG, "doInBackground: " + (System.currentTimeMillis() - startTime));
}
This is the code I used when I decrypted the file to disk and played the result. Now I have to play back at the same time as decrypting, without creating a file. I am very eager: I have only been working for a month since I started this job and have been given a task beyond my level, but I really want to hit this target... Please teach me.
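For reference, here is the direction I am trying to take after reading the ExoPlayer docs: a custom DataSource that wraps a normal one and decrypts each buffer as it passes through. This is only a sketch under my own assumptions (decryptBlock is a placeholder for my logic, and it only works if the cipher can decrypt from arbitrary offsets and read lengths, which I understand is the hard part). Is this the right direction?
import android.net.Uri;
import java.io.IOException;
import com.google.android.exoplayer2.upstream.DataSource;
import com.google.android.exoplayer2.upstream.DataSpec;
import com.google.android.exoplayer2.upstream.TransferListener;

// Sketch: decrypt every buffer read from the wrapped source before ExoPlayer sees it.
public class DecryptingDataSource implements DataSource {
    private final DataSource upstream; // e.g. a FileDataSource for the downloaded file

    public DecryptingDataSource(DataSource upstream) {
        this.upstream = upstream;
    }

    @Override
    public void addTransferListener(TransferListener transferListener) {
        upstream.addTransferListener(transferListener);
    }

    @Override
    public long open(DataSpec dataSpec) throws IOException {
        return upstream.open(dataSpec);
    }

    @Override
    public int read(byte[] buffer, int offset, int readLength) throws IOException {
        int read = upstream.read(buffer, offset, readLength);
        if (read > 0) {
            decryptBlock(buffer, offset, read);
        }
        return read;
    }

    @Override
    public Uri getUri() {
        return upstream.getUri();
    }

    @Override
    public void close() throws IOException {
        upstream.close();
    }

    private void decryptBlock(byte[] buffer, int offset, int length) {
        // TODO: my own 4096-byte block decryption logic goes here.
    }
}
If this is right, I believe I would then hand a DataSource.Factory that creates this class to the media source factory instead of using the default one.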

You can actually leverage the platform's built-in encryption functionality for streamed video, either using a commercial DRM or using 'clear key' encryption.
If these meet your needs, this should be much easier to work with, as you won't have to implement the encryption and decryption yourself.
This answer provides an example for creating both an HLS / AES stream and a DASH clearkey stream:
https://stackoverflow.com/a/45103073/334402
This does not provide the same security as DRM, as the keys themselves are not encrypted, but it may be sufficient for your needs.
These streams can then be played with the standard iOS, Android or HTML5 players.
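For example, once you have an HLS / AES or clearkey stream, playback on Android is just a matter of pointing ExoPlayer at the manifest. A minimal sketch, assuming a recent ExoPlayer 2 release with the HLS module included, an Activity context, and a PlayerView in the layout (the manifest URL is a placeholder):
// Build a player and attach it to the UI.
SimpleExoPlayer player = new SimpleExoPlayer.Builder(context).build();
playerView.setPlayer(player);
// ExoPlayer picks the right media source from the .m3u8 extension and fetches
// the AES key referenced by the playlist itself.
player.setMediaItem(MediaItem.fromUri("https://example.com/video/master.m3u8"));
player.prepare();
player.play();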

How do I export Point Cloud Data (Project Tango)?

Just got a Project Tango Development Kit tablet and have worked through some of the demos and examples.
Some older blog posts use the log files from a "Tango Mapper" application that should be preloaded on the device.
Interactive Visualization of Google Project Tango Data with ParaView
Ologic Announces integration between ROS and Project Tango
Google Tango and ROS integration at Bosch
Mapping Hints and Tips
Unfortunately, the "Tango Mapper" application did not come preloaded on my device and I can't seem to find it on the Play Store.
Is there some other method to simply export or retrieve the PointCloud data for downstream rendering?
[Model number: yellowstone, Tango Core Version: 1.1:2014.11.14-bernoulli-release]
Not sure if you ever got to solve this, but I was able to find the APK, along with a method to export, using the updated Tango tablet version. I successfully exported the point cloud data using the method described in this blog.
http://www.kitware.com/blog/home/post/838
Edit
Procedure: download the APK, or use the source code found in the GitHub project folder.
Once that is done, boot up the app as you normally would. There will be two sliders: record and auto. If you enable record, it will wait until you hit the snapshot button to record the point cloud data you are currently viewing.
If you enable auto, it will continuously record the point cloud data and create files as it tracks your movement. Keep in mind that the larger the file, the longer it takes to save as a zip.
Once done, slide record off and it will prompt you to save and send.
I find it easiest to save to Google Drive, as the other methods sometimes fail to send.
Then download the free ParaView app found at http://www.paraview.org/download/ and load up your point cloud data.
It should be two files: one with your pose data and the other with the point cloud. (You can load each dataset individually using the collapse arrow you see before importing.)
That is it: you will be able to see your data and even play back an animation of your recording session, thanks to the pose data collected.
(I only wrote this out because you were looking for an easier way to export data; this is probably the easiest. You could also take said data and begin to reconstruct the room based on the pose data collected.)
All credit for the source code and tutorial goes to the Kitware blog.
If links are broken, DM me and I will send the file to you.
The APK is found here:
APK DOWNLOAD
They also list their source code at the bottom of the blog. It is based on the Tango Explorer app found in the app store.
Tango Mapper is an internal tool, and it is currently not public to developers. I think the best way to log the point cloud data is to use the C or Java example code provided, and make some small modifications to log the data to a file.
c example: https://github.com/googlesamples/tango-examples-c
java example: https://github.com/googlesamples/tango-examples-java
Sparse mapping: https://www.youtube.com/watch?v=x5C_HNnW_3Q
More indoor mapping: https://www.youtube.com/watch?v=3BNOsxMZD14
It appears that more than a few of the contributors to the Tango project were hired or bought out by Google. As an example, most of the links to code and/or articles by Hidof are MIA; only a Facebook page with few clues remains. The Internet Archive's Wayback Machine has a few snapshots of their website for the curious.
Go take a look at the Java Point Cloud sample on GitHub. The function you want to look at is onXyzIjAvailable in PointCloudActivity. Extracting a few relevant lines....
public void onXyzIjAvailable(final TangoXyzIjData xyzIj) {
    ....
    // Each point is 3 floats (x, y, z) of 4 bytes each.
    byte[] buffer = new byte[xyzIj.xyzCount * 3 * 4];
    FileInputStream fileStream = new FileInputStream(
            xyzIj.xyzParcelFileDescriptor.getFileDescriptor());
    try {
        // Read the point cloud bytes out of the parcel file descriptor.
        fileStream.read(buffer,
                xyzIj.xyzParcelFileDescriptorOffset, buffer.length);
        fileStream.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
At this point buffer contains the point cloud data. I would strongly recommend you ship this off the device via a binary service call, as I think making the poor thing try to convert it to JSON or XML would make things slower than you would like.
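If you do want to interpret the points on the device first, the buffer is just packed (x, y, z) floats. A minimal sketch, assuming the buffer from the snippet above and that the floats are in the device's native byte order (classes from java.nio):
FloatBuffer points = ByteBuffer.wrap(buffer)
        .order(ByteOrder.nativeOrder())
        .asFloatBuffer();
for (int i = 0; i < xyzIj.xyzCount; i++) {
    float x = points.get(3 * i);
    float y = points.get(3 * i + 1);
    float z = points.get(3 * i + 2);
    // ... serialize or ship the point off the device here ...
}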
Thank you Mark for your advice. I am a novice programmer and it is my first time working with Java...
I am interested in exporting the Tango-acquired point cloud data to a file, and I would like to ask for your feedback on my approach (I created a Save button; on click, the data is saved to a file on external storage). Please find below the code for the part that should save the xyzIj data:
@Override
public void onClick(View v) {
    switch (v.getId()) {
        ...
        case R.id.save_button:
            // mXyzIj is assumed to hold the latest data delivered to onXyzIjAvailable.
            savePointCloud(mXyzIj, "PointCloud");
            break;
        default:
            Log.w(TAG, "Unrecognized button click.");
    }
}
private void savePointCloud(final TangoXyzIjData xyzIj, String dirName) {
    File directory = getAlbumStorageDir(dirName);
    if (directory == null) {
        return;
    }
    // Each point is 3 floats (x, y, z) of 4 bytes each.
    byte[] buffer = new byte[xyzIj.xyzCount * 3 * 4];
    try {
        FileInputStream fileStream = new FileInputStream(
                xyzIj.xyzParcelFileDescriptor.getFileDescriptor());
        fileStream.read(buffer, xyzIj.xyzParcelFileDescriptorOffset, buffer.length);
        fileStream.close();
        // Write the whole buffer once, then close the output stream.
        FileOutputStream out = new FileOutputStream(new File(directory, "text.txt"));
        out.write(buffer);
        out.close();
        System.out.println("Printed to file");
    } catch (IOException e) {
        e.printStackTrace();
    }
}
public File getAlbumStorageDir(String dirName) {
    if (!isExternalStorageWritable()) {
        return null;
    }
    // Get the directory inside the user's public downloads directory.
    File file = new File(Environment.getExternalStoragePublicDirectory(
            Environment.DIRECTORY_DOWNLOADS), dirName);
    // mkdirs() returns false when the directory already exists, so check exists() too.
    if (!file.mkdirs() && !file.exists()) {
        Log.e(TAG, "Directory not created");
        return null;
    }
    return file;
}
public boolean isExternalStorageWritable() {
    // MEDIA_MOUNTED means mounted read/write; MEDIA_MOUNTED_READ_ONLY is not writable.
    String state = Environment.getExternalStorageState();
    if (Environment.MEDIA_MOUNTED.equals(state)) {
        return true;
    }
    Log.e(TAG, "External storage is not mounted READ/WRITE.");
    return false;
}

Android MediaPlayer setDataSource iOS alternative

I have an Android implementation which sets the media player's data source to a byte range of the video file, between 10000 bytes and 40000000 bytes:
FileInputStream fis = new FileInputStream(Environment.getExternalStorageDirectory() + "path to video file");
mMediaPlayer.setDataSource(fis.getFD(),1000,40000000);
Is there any alternative to do the same in iOS?
I don't think you can set the range based on bytes in iOS. But you can specify the video's playback range based on time. An example is given in this link.
You have to calculate it manually by combining the two:
// 1 Get the AVAsset file size
NSDictionary *fileAttributes = [[NSFileManager defaultManager] attributesOfItemAtPath:URL error:&attributesError];
NSNumber *fileSizeNumber = [fileAttributes objectForKey:NSFileSize];
long long fileSize = [fileSizeNumber longLongValue];
// 2 Get the estimated data rate of the asset's track
With the file size and the estimated data rate you can convert a byte offset into a time (time = offset / bytes per second), build the corresponding CMTime values, and pass them accordingly.
Hope this helps.

Decrypt aes-file in iOS encrypted using a tool from AESCrypt.com

I'm working on an app that, among other things, plays sound files. My problem is that the sound files are encrypted with a command line tool from aescrypt.com, given just the file and a password. I have used the Java code on aescrypt.com to successfully decrypt the files in the Android app, but I can't for the life of me get it to work on iOS.
I have tried decrypting all the bytes of the file, and also just the bytes that do not include the file header. I get a result buffer back, but it won't play, and the estimated length of the sound is about one fourth of the actual length.
NSRange range = NSMakeRange(0, self.length);
unsigned char* encrypteddata = malloc(range.length);
[self getBytes:encrypteddata range:range];
size_t outSize;
unsigned char* result = malloc(range.length + 16);
// Single-shot AES decrypt (kCCDecrypt); note that no IV, options, or KDF are applied here.
CCCryptorStatus status = CCCrypt(kCCDecrypt, kCCAlgorithmAES128, 0x00, decryptkey, sizeof(decryptkey), nil, encrypteddata, self.length, result, self.length + 16, &outSize);
NSData *returnData = nil;
if (status == kCCSuccess) {
    returnData = [NSData dataWithBytesNoCopy:result length:outSize];
}
The decryptkey is just the bytes of the password used to encrypt the file.
I have been working on a solution for at least a week now and have not made any progress. There are so many things that can go wrong, and so many possible (and impossible) combinations.
Update:
What I need is a tool that is simple enough for our customer to encrypt the sound files on their end, and simple for the apps on both Android and iOS to decrypt on the other end. It does not need to be very secure; it only needs to prevent the average Android user from just opening and playing the file from disk. If the aescrypt.com tools aren't optimal for this, I gladly welcome other suggestions.
Why do you stick to the AESCrypt application? It writes a custom header to the encrypted file.
They distribute the source code, which will give you enough information about how to decrypt this (and you will probably be able to re-use their sources). Check AESCryptWorkerThreads.cpp in the AES crypto source code.
The AESCrypt format carries significant configuration information. Most of the format is detailed on their site (aes_file_format.html). That page doesn't explain their custom KDF, unfortunately, so you'll have to use or reverse-engineer their encrypt_stream function in aescrypt.c:
// Hash the IV and password 8192 times
memset(digest, 0, 32);
memcpy(digest, IV, 16);
for (i = 0; i < 8192; i++)
{
    sha256_starts(&sha_ctx);
    sha256_update(&sha_ctx, digest, 32);
    sha256_update(&sha_ctx, (unsigned char*)passwd, (unsigned long)passlen);
    sha256_finish(&sha_ctx, digest);
}
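Since you already have the Java side working, it may help to see the same loop in Java terms before porting it to iOS. A minimal sketch, with deriveKey being a name of my own choosing; passwordBytes must be encoded exactly the way the AESCrypt tool encodes the password, so check their sources for the character encoding:
import java.security.MessageDigest;

public static byte[] deriveKey(byte[] iv, byte[] passwordBytes) throws Exception {
    MessageDigest sha = MessageDigest.getInstance("SHA-256");
    // The digest starts out as the 16-byte IV, zero-padded to 32 bytes.
    byte[] digest = new byte[32];
    System.arraycopy(iv, 0, digest, 0, 16);
    // Hash the digest plus the password 8192 times, exactly as the C code does.
    for (int i = 0; i < 8192; i++) {
        sha.reset();
        sha.update(digest);
        sha.update(passwordBytes);
        digest = sha.digest();
    }
    return digest;
}
Mapping this loop onto CC_SHA256_Init / CC_SHA256_Update / CC_SHA256_Final on iOS should then be mechanical.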
They don't use CommonCryptor, so if you want hardware-optimized code, you'll have to reimplement this format yourself on top of CommonCryptor.
Note that your decryption code has no IV, no KDF, and no HMAC, so anything actually encrypted that way would be highly insecure. AESCrypt does provide a proper IV and HMAC, and its KDF, though non-standard, is likely secure, so it is a reasonable choice.

Streaming video from MediaRecorder through LocalSocket

I'm trying to send h264/AAC video from Android's MediaRecorder through a LocalSocket. The goal is to send the video to a Wowza server through RTMP or RTSP, but that is giving me a lot of trouble, so for now I'm just trying to write the data to a file from the LocalServerSocket.
Here is some code. Sorry it's not really clean, but I spent hours testing many things and my project is a mess right now.
In the Camera activity, the output file setup:
LocalSocket outSocket = new LocalSocket();
try {
outSocket.connect(new LocalSocketAddress(LOCAL_SOCKET));
} catch (Exception e) {
Log.i(LOG_TAG, "Error connecting socket: "+e);
}
mMediaRecorder.setOutputFile(outSocket.getFileDescriptor());
The LocalServerSocket implementation:
try {
    mLocalServerSocket = new LocalServerSocket(mName);
} catch (Exception e) {
    Log.e(LOG_TAG, "Error creating server socket: " + e);
    return;
}
while (true) {
    File out = null;
    FileOutputStream fop = null;
    try {
        mLocalClientSocket = mLocalServerSocket.accept();
        InputStream in = mLocalClientSocket.getInputStream();
        out = new File(mContext.getExternalFilesDir(null), "testfile.mp4");
        fop = new FileOutputStream(out);
        int len = 0;
        byte[] buffer = new byte[1024];
        // Copy everything MediaRecorder writes to the socket into the file.
        while ((len = in.read(buffer)) >= 0) {
            Log.i(LOG_TAG, "Writing " + len + " bytes");
            fop.write(buffer, 0, len);
        }
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        try {
            if (fop != null) fop.close();
            mLocalClientSocket.close();
        } catch (Exception e2) {}
    }
}
The problem is that the resulting file is not readable by any media player. Do you think this is because of an encoding issue? If I understand correctly, this code should simply copy the raw binary data into a file?!
Thanks in advance, cheers.
Ok, I've found why the files couldn't play. MP4 and 3GPP files contain a header with the bytes:
ftyp3gp4 3gp43gp6 wide mdat
in hex:
0000001866747970336770340000030033677034336770360000000877696465000392D86D6461740000
The 4 bytes just before the 'mdat' tag hold the size of the 'mdat' box, which in effect records the position of the 'moov' box written after it at the end of the file. This value is normally filled in when the recording is over, but since MediaRecorder can't seek on a socket, it can't set these bytes to the correct value in our case.
My problem now is to find a way to make such a file streamable, since that requires it to be playable before the recording is over.
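To illustrate the layout, here is a minimal sketch of reading that size field, where mdatTagOffset is a hypothetical variable holding the file position of the 'mdat' tag (an MP4 box is a 4-byte big-endian size followed by the 4-byte type):
RandomAccessFile f = new RandomAccessFile("testfile.mp4", "r");
f.seek(mdatTagOffset - 4); // the size field sits just before the tag
int mdatSize = f.readInt(); // RandomAccessFile reads big-endian, matching the MP4 format
f.close();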
You could try using mp4box to restructure your file. The moov box holds the indexes for each audio and video sample; if it is at the end of the file, that makes streaming difficult.
This might help:
http://boliston.wordpress.com/tag/moov-box/
Or this:
mp4box -inter 0.5 some_file.mp4
(I don't currently have a chance to try it.)
If you need this to work within your app, I am not aware of any effort to port mp4box to Android.
I tried to do the same today, but MP4 is not very easy to stream (as said before, some parts are written at the end of the file). I won't say it's impossible, but it seems at least quite hard.
So a workaround for newer Android APIs (4.3+) could be this one:
Set the camera preview to a SurfaceTexture: camera.setPreviewTexture
Record this texture using OpenGL and MediaCodec + MediaMuxer
The drawback of this solution is that the preview size of a camera might be smaller than the video size. This means that, depending on your device, you can't record at the highest resolution. Hint: some cameras claim they don't support higher preview sizes but actually do, so you can try to configure the camera to set the preview size to the video size. If you do so, catch the RuntimeException from camera.setParameters, and if that fails, fall back to the supported preview sizes, as in the sketch below.
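A minimal sketch of that hint, with videoWidth/videoHeight as placeholders for the recording resolution:
Camera.Parameters params = camera.getParameters();
params.setPreviewSize(videoWidth, videoHeight);
try {
    camera.setParameters(params);
} catch (RuntimeException e) {
    // The camera genuinely doesn't support it; fall back to a supported size.
    params = camera.getParameters();
    Camera.Size fallback = params.getSupportedPreviewSizes().get(0);
    params.setPreviewSize(fallback.width, fallback.height);
    camera.setParameters(params);
}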
Some links on how to record from a SurfaceTexture:
Bigflake: great examples for MediaCodec stuff.
The VideoRecorder class from Lablet.
May also be useful: spydroid-ipcamera streams the data from the MediaRecorder socket as RTP streams, but I have found no way to feed that to MediaCodec. (I already got stuck reading the correct NAL unit sizes the way they do...)

How to merge two mp3 files into one (combine/join)

Can anyone tell me how to combine/merge two media files into one?
I found a topic about AudioInputStream, but it is not supported on Android, and all the code was for desktop Java.
And on StackOverflow I found this link here,
but there I can't find a solution; those links are only about streaming audio. Can anyone help me?
P.S. And why can't I start a bounty? :(
import java.io.*;

public class TwoFiles
{
    public static void main(String args[]) throws IOException
    {
        FileInputStream fistream1 = new FileInputStream("C:\\Temp\\1.mp3"); // first source file
        FileInputStream fistream2 = new FileInputStream("C:\\Temp\\2.mp3"); // second source file
        SequenceInputStream sistream = new SequenceInputStream(fistream1, fistream2);
        FileOutputStream fostream = new FileOutputStream("C:\\Temp\\final.mp3"); // destination file
        int temp;
        while ((temp = sistream.read()) != -1)
        {
            // System.out.print((char) temp); // to print at DOS prompt
            fostream.write(temp); // to write to file
        }
        fostream.close();
        sistream.close();
        fistream1.close();
        fistream2.close();
    }
}
Consider two cases for .mp3 files:
Files with the same sampling frequency and number of channels
In this case, we can just append the second file to the end of the first file. This can be achieved using the file classes available on Android; see the sketch after this list.
Files with different sampling frequencies or numbers of channels
In this case, one of the clips has to be re-encoded so that both files have the same sampling frequency and number of channels. To do this, we would need to decode the MP3, get the PCM samples, process them to change the sampling frequency, and then re-encode to MP3. From what I know, Android does not have transcode or re-encode APIs. One option is to use an external library like LAME or FFmpeg via JNI for the re-encode.
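For the first case, a minimal sketch of the append approach (paths are placeholders, and this is only valid when both files share the same sampling frequency and channel count):
import java.io.*;

public class AppendMp3 {
    public static void main(String[] args) throws IOException {
        FileInputStream in = new FileInputStream("/sdcard/2.mp3");
        // true = open the first file in append mode.
        FileOutputStream out = new FileOutputStream("/sdcard/1.mp3", true);
        byte[] buffer = new byte[4096];
        int len;
        while ((len = in.read(buffer)) != -1) {
            out.write(buffer, 0, len);
        }
        out.close();
        in.close();
    }
}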
