Customizing ExoPlayer quality dialog in my app - Android

I have a video player in my Android app built with the ExoPlayer library. The player plays .m3u8 (HLS) videos that I obtain from the backend, and each stream can be available in different qualities, for example 1024x576, 768x432, etc. I want to show the user a dialog for changing the video stream quality. For this I use the following code from the ExoPlayer samples on GitHub:
MappingTrackSelector.MappedTrackInfo mappedTrackInfo = trackSelector.getCurrentMappedTrackInfo();
if (mappedTrackInfo != null) {
    CharSequence title = "Title";
    int rendererIndex = 0; // renderer for video
    int rendererType = mappedTrackInfo.getRendererType(rendererIndex);
    boolean allowAdaptiveSelections =
            rendererType == C.TRACK_TYPE_VIDEO
                    || (rendererType == C.TRACK_TYPE_AUDIO
                            && mappedTrackInfo.getTypeSupport(C.TRACK_TYPE_VIDEO)
                                    == MappingTrackSelector.MappedTrackInfo.RENDERER_SUPPORT_NO_TRACKS);
    Pair<AlertDialog, TrackSelectionView> dialogPair =
            TrackSelectionView.getDialog(this, title, trackSelector, rendererIndex);
    dialogPair.second.setShowDisableOption(true);
    dialogPair.second.setAllowAdaptiveSelections(allowAdaptiveSelections);
    dialogPair.first.show();
}
and it works fine. But I need to customize this dialog, for example removing the "None" option and making ALL elements single-choice only. How can I do that?

This might be late, but here is how to do it.
The main class that does all this is TrackSelectionView, which simply extends LinearLayout. To get your desired features you need to create your own class (any name will do) and copy the entire code of TrackSelectionView into it. Why? Because we need to change some of that class's logic, and it is a read-only class.
To get the first feature (no "None" option) you can simply write dialogPair.second.setShowDisableOption(false); instead of passing true.
Writing our own class with the copied code is for the second feature.
TrackSelectionView uses a 2-D array to store the CheckedTextViews. The first two toggle buttons (Auto and None) use separate CheckedTextViews, but the CheckedTextViews for all the other resolutions are stored in that 2-D array.
I won't post the entire codebase here, as it would get messy; I have created a GitHub Gist you can use as a reference:
https://gist.github.com/abhiint16/b473e9b1111bd8bda4833c288ae6a1b4
Don't forget to use your class reference instead of TrackSelectionView.
You will use the above file as shown in this Gist:
https://gist.github.com/abhiint16/165449a1a7d1a55a8f69d23718c603c2
The Gist makes the selection single-select, and in addition it does something extra you may want in your ExoPlayer.
The actual video formats arrive in a list, e.g. "512 x 288, 0.57 Mbps"; I am simply mapping predefined labels (Low, Medium, High, etc.) onto the list indices. You can do it your own way.
So when you click one of the resolutions, it updates the TextView in your ExoPlayer UI for the selected resolution ("L" for "Low").
For that you just need to implement an interface named GetReso in your class; it hands you the initial of the selected label, which you can then set on a TextView.
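For reference, the callback might look something like this (a minimal sketch; the exact signature in the Gist may differ):
public interface GetReso {
    // Called with the initial of the selected label, e.g. "L" for "Low"
    void onResolutionSelected(String initial);
}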
Enjoy coding.....

For anyone seeing this after 2021: it took me quite a while to achieve a similar scenario in the ExoPlayer demo application, so I've decided to share how I solved it.
Use the code below inside the TrackSelectionDialog file.
TrackNameProvider trackNameProvider = new DefaultTrackNameProvider(getResources());
TrackSelectionView trackSelectionView = rootView.findViewById(R.id.exo_track_selection_view);
trackSelectionView.setTrackNameProvider(f -> {
    if (f.height == Format.NO_VALUE) {
        return trackNameProvider.getTrackName(f);
    }
    String bitrate = f.bitrate == Format.NO_VALUE
            ? ""
            : getResources().getString(R.string.exo_track_bitrate, f.bitrate / 1000000f);
    return Math.round(f.frameRate) + " FPS, " + bitrate + ", " + f.height + " P";
});
This will show video tracks as "25 FPS, 2.11 Mbps, 720 P". You can modify it any way you want.
Note that it keeps the default formatting for audio and text tracks.

Related

Get the bit depth or the color space of an mp4 file in Android

I'm currently working on a video player on Android. The video player should support both 8-bit and 10-bit content. Because the flow in the app differs, I need to know before playing the video whether the content is 10-bit BT.2020. I've tried MediaMetadataRetriever, but there is no information about the bit depth, color space, color primaries, transfer characteristics, etc. I also got the same result using this project: https://github.com/wseemann/FFmpegMediaMetadataRetriever.
Is there a way to get more information about the color space or bit depth on Android? Something similar to the MediaInfo tool: https://mediaarea.net/en/MediaInfo
After some time I found out that I can use MediaExtractor and then get the information I needed from the MediaFormat object created with extractor.getTrackFormat(trackIndex). For HDR10 I check the color standard and the transfer function:
if (mediaFormat.containsKey(MediaFormat.KEY_COLOR_TRANSFER) &&
    mediaFormat.containsKey(MediaFormat.KEY_COLOR_STANDARD)
) {
    if (mediaFormat.getInteger(MediaFormat.KEY_COLOR_TRANSFER) == MediaFormat.COLOR_TRANSFER_ST2084
        && mediaFormat.getInteger(MediaFormat.KEY_COLOR_STANDARD) == MediaFormat.COLOR_STANDARD_BT2020
    ) {
        return true
    }
}
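For context, here is a minimal sketch (Java; the file path and method name are placeholders for illustration) of how such a MediaFormat can be obtained with MediaExtractor:
import android.media.MediaExtractor;
import android.media.MediaFormat;
import java.io.IOException;

// Walk the tracks of a file and hand each video MediaFormat to the
// HDR10 check shown above.
static boolean isHdr10(String path) throws IOException {
    MediaExtractor extractor = new MediaExtractor();
    try {
        extractor.setDataSource(path);
        for (int i = 0; i < extractor.getTrackCount(); i++) {
            MediaFormat mediaFormat = extractor.getTrackFormat(i);
            String mime = mediaFormat.getString(MediaFormat.KEY_MIME);
            if (mime != null && mime.startsWith("video/")) {
                // apply the KEY_COLOR_TRANSFER / KEY_COLOR_STANDARD check here
            }
        }
    } finally {
        extractor.release();
    }
    return false;
}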

Android Flutter Analyze Audio Waveform

I want to create a music app that has a waveform view resembling the one in SoundCloud.
I thought of creating a class like this for each bar:
class Bar {
  const Bar(this.alreadyPlayed, this.index, this.height);

  final bool alreadyPlayed;
  final int index;
  final double height;
}
where alreadyPlayed is a bool that tells whether the bar should be colored or greyed out, index is the number of the bar, and height is, well, the height of the bar. The first two variables shouldn't be difficult to obtain; my problem is obtaining the height of the bar, i.e. the intensity of the music at that time. That alone would be enough, but it would be even better if someone knows how to calculate the intensity of a specific frequency, for example 225 Hz.
But anyway, if it helps, I am adding what I'm trying to achieve in pseudocode:
// Obtain the mp3 file.
//
// Define a number of bars decided from the song length
// or from a default, for example, 80.
//
// In a loop that goes from 0 to the number of bars, create
// a Bar object with alreadyPlayed defaulting to false, index
// as the loop index, and height as 0.
//
// Obtain the intensity of the sound in a way like this:
// sound[time_in_milliseconds = song_length_in_milliseconds / num_of_bars],
// and then set the height of the bar to the intensity just found.
Is what I'm asking possible?
Looks like you're looking into generating waveform graphs from audio. Have you tried anything so far?
There's no short answer here though. You can start exploring with flutter_ffmpeg to generate waveform data from audio. It's up to you on what format you'll use for your waveform data. Once you got your data, you can generate waveform graphs in Flutter using CustomPaint. You can check the sample on this blog post. The waveform data used in the sample is in JSON.
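To make the bucketing idea from the pseudocode concrete, here is a minimal sketch (Java for brevity; the same logic ports directly to Dart) that assumes the mp3 has already been decoded to 16-bit PCM samples, e.g. via flutter_ffmpeg:
// Split the PCM samples into numBars buckets and use each bucket's RMS
// (root mean square) as the bar height, normalized into [0, 1].
static double[] barHeights(short[] pcm, int numBars) {
    double[] heights = new double[numBars];
    int samplesPerBar = Math.max(1, pcm.length / numBars);
    for (int bar = 0; bar < numBars; bar++) {
        double sumSquares = 0;
        int start = bar * samplesPerBar;
        int end = Math.min(pcm.length, start + samplesPerBar);
        for (int i = start; i < end; i++) {
            double s = pcm[i] / 32768.0; // normalize 16-bit sample to [-1, 1)
            sumSquares += s * s;
        }
        heights[bar] = Math.sqrt(sumSquares / Math.max(1, end - start)); // RMS
    }
    return heights;
}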
I was looking for a way to listen to the microphone and do some audio analysis with Flutter, and I found something that may help you:
It's a two-article series that explains, step by step, how to draw waveforms with Flutter.
Generating Waveform Data - Audio Representation:
https://matt.aimonetti.net/posts/2019-06-generating-waveform-data-audio-representation/
Drawing Waveforms in Flutter: https://matt.aimonetti.net/posts/2019-07-drawing-waveforms-in-flutter/
I hope this helps.

How to perform 'SKIP'/'SEEK' action in libpng with apng patch

I am trying to port libpng/apng to the Android platform, using libpng with the apng patch to read animated PNG files.
My problem is that I can't find any 'skip' method declared in png.h. I want to jump directly to a specific frame, but I cannot get a correct result unless I read from the beginning and call png_read_frame_head() and png_read_image() for every frame before it.
Is there any way to jump to a specific frame by index without reading all the frame info/data before it?
The following code is from the apng sample at http://littlesvr.ca/apng/tutorial/x57.html. You can see that it reads the apng file in a loop, and it seems you have to call png_read_frame_head() and png_read_image() to keep the internal state of png_ptr_read and info_ptr_read up to date. So if there is a way to simply set these two structs to the correct state for reading a specific frame, my question is solved.
for (count = 0; count < png_get_num_frames(png_ptr_read, info_ptr_read); count++)
{
    sprintf(filename, "extracted-%02d.png", count);
    newImage = fopen(filename, "wb");
    if (newImage == NULL)
        fatalError("couldn't create png for writing");

    writeSetup(newImage, &png_ptr_write, &info_ptr_write);
    if (setjmp(png_ptr_write->jmpbuf))
        fatalError("something didn't work, jump 2");

    png_read_frame_head(png_ptr_read, info_ptr_read);
    if (png_get_valid(png_ptr_read, info_ptr_read, PNG_INFO_fcTL))
    {
        png_get_next_frame_fcTL(png_ptr_read, info_ptr_read,
                                &next_frame_width, &next_frame_height,
                                &next_frame_x_offset, &next_frame_y_offset,
                                &next_frame_delay_num, &next_frame_delay_den,
                                &next_frame_dispose_op, &next_frame_blend_op);
    }
    else
    {
        /* the first frame doesn't have an fcTL so it's expected to be hidden,
         * but we'll extract it anyway */
        next_frame_width = png_get_image_width(png_ptr_read, info_ptr_read);
        next_frame_height = png_get_image_height(png_ptr_read, info_ptr_read);
    }

    writeSetup2(png_ptr_read, info_ptr_read, png_ptr_write, info_ptr_write,
                next_frame_width, next_frame_height);
    png_write_info(png_ptr_write, info_ptr_write);
    png_read_image(png_ptr_read, rowPointers);
    png_write_image(png_ptr_write, rowPointers);
    png_write_end(png_ptr_write, NULL);
    png_destroy_write_struct(&png_ptr_write, &info_ptr_write);
    fclose(newImage);
    printf("extracted frame %d into %s\n", count, filename);
}
You can't. libpng was designed to treat PNG data as a stream, so it decodes chunks sequentially, one by one. I'm also not sure why you need to skip APNG frames: just like in video formats, a frame may be stored as "what changed since the previous frame" rather than as a full frame, so you may need the previous frame(s) anyway.
These code examples might be useful:
https://sourceforge.net/projects/apng/files/libpng/examples/

How to merge images and impose them on each other

Suppose I'm uploading two or more pictures into some FrameLayout. Here I'm uploading three pictures of the same person in three different positions. What image-processing libraries in Android, Java, or native code are available to do something as shown in the picture?
I would like to impose multiple pictures on each other.
Something like this:
One idea is to do some layering across all those pictures, find the mismatching areas, and merge them.
How can one merge multiple pictures with each other? By checking the dissimilarity and merging them?
Are there any third-party APIs or some Photoshop-like service that could help me with this kind of image processing?
In this case you are not just trying to combine the images; you want to combine a scene containing the same object in different positions.
Therefore it is not a simple combination, nor an alpha composite in which the color of a given pixel in the output image is the sum of that pixel's values across the images divided by the number of images.
In this case, you might:
Determine the scene background by analysing the pixels that do not change across the images.
Begin with the output image being just the background.
For each image, remove the background to get the desired object and combine it with the output image.
There is a Marvin plug-in to perform this task, called MergePhotos. The program below uses that plug-in to combine a set of parkour photos.
import java.util.ArrayList;
import java.util.List;

import marvin.image.MarvinImage;
import marvin.io.MarvinImageIO;
import marvin.plugin.MarvinImagePlugin;
import marvin.util.MarvinPluginLoader;

public class MergePhotosApp {

    public MergePhotosApp() {
        // 1. Load images 01.jpg, 02.jpg, ..., 05.jpg into a list
        List<MarvinImage> images = new ArrayList<MarvinImage>();
        for (int i = 1; i <= 5; i++) {
            images.add(MarvinImageIO.loadImage("./res/0" + i + ".jpg"));
        }

        // 2. Load the plug-in
        MarvinImagePlugin merge =
                MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.combine.mergePhotos");
        merge.setAttribute("threshold", 38);

        // 3. Process the image list and save the output
        MarvinImage output = images.get(0).clone();
        merge.process(images, output);
        MarvinImageIO.saveImage(output, "./res/merge_output.jpg");
    }

    public static void main(String[] args) {
        new MergePhotosApp();
    }
}
The input images and the output image are shown below.
I don't know if this qualifies under your definition of "natives", but there is the following .NET library that could help: http://dynamicimage.apphb.com/
If the library itself can give you what you want, then depending on your architecture you could set up a small ASP.NET site to do the image manipulation on the server.
Check the accepted answer here.
The link above merges two images using the OpenCV SDK.
If you don't want to use OpenCV and just want to try it yourself, you will have to play a little with a FrameLayout and three ImageViews. Give the user options to select a specific part of each of the three images to show, so that the selected part of the selected image is what gets displayed. That way you will get a result like the one you described, as sketched below.
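The stacking part of that idea might look roughly like this (view IDs and alpha values are assumptions for illustration; the region-selection logic is up to you):
ImageView first = (ImageView) findViewById(R.id.image_first);
ImageView second = (ImageView) findViewById(R.id.image_second);
ImageView third = (ImageView) findViewById(R.id.image_third);
// Blend the stacked layers so the ones below show through
second.setAlpha(0.66f);
third.setAlpha(0.33f);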
Hope you got my point; if not, let me know.
Enjoy coding... :)
You can overlay the images using OpenCV; you can check OpenCV and here or here:
#include <opencv2/opencv.hpp>

// Read the main background image
cv::Mat image = cv::imread("Background.png");
// Read the character image to be placed
cv::Mat character = cv::imread("character.png");
// Define where you want to place the character: newImage is a ROI (view)
// into the background, so drawing into it also modifies `image`.
// The 10,10 are the initial coordinates in pixels.
cv::Mat newImage = image(cv::Rect(10, 10, character.cols, character.rows));
// Add the character to the background; the 1s are the alpha weights
cv::addWeighted(newImage, 1, character, 1, 0, newImage);
// Show the result (the ROI write above updated `image` in place)
cv::namedWindow("with character");
cv::imshow("with character", image);
// Write the composited region (only the ROI) to disk
cv::imwrite("output.png", newImage);
Or you can create it as a watermark effect.
Or you can try it in Java, merging two images; try using this class:
import java.awt.AlphaComposite;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class MergeImages {

    public static void main(String[] args) {
        File inner = new File("Inner.png");
        File outter = new File("Outter.png");
        try {
            BufferedImage biInner = ImageIO.read(inner);
            BufferedImage biOutter = ImageIO.read(outter);
            System.out.println(biInner);
            System.out.println(biOutter);

            Graphics2D g = biOutter.createGraphics();
            // Draw the inner image centered on the outer one at 80% opacity
            g.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, 0.8f));
            int x = (biOutter.getWidth() - biInner.getWidth()) / 2;
            int y = (biOutter.getHeight() - biInner.getHeight()) / 2;
            System.out.println(x + "x" + y);
            g.drawImage(biInner, x, y, null);
            g.dispose();

            ImageIO.write(biOutter, "PNG", new File("Outter.png"));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

How the "FBReader" do the pagination of html files in epub

I'm trying to make an EPUB reader.
I want to do the pagination the way FBReader does it.
I have the source code of FBReader, but I don't know where it implements pagination.
I have my own implementation of the other features; all I need from FBReader is the pagination.
Has anyone done something similar?
Thanks for taking the time to read this question.
PS: by pagination I mean splitting an HTML file into pages, depending on the screen size and the font size (language matters too); when the font size changes, the page count changes as well. EPUB content is in HTML format.
It is fascinating code. I would love to see a translation of the original student project (but I presume the original document is in Russian). As a port of a C++ project, it has an interesting coding style in places.
The app keeps track of where you are in the book using paragraph cursors (ZLTextParagraphCursor). This is comparable to database cursors and record pagination. The class responsible for serving up the current page and calculating the number of pages is ZLTextView.
As EPUBs are reflowable documents rather than page-oriented ones, there is no concrete definition of a page; it depends on where in the document you happen to be looking (paragraph, word, character) and on the display settings.
As McLaren says, FBReader doesn't implement pagination itself: it uses ZLibrary, which is available from the same website as FBReader.
The original code uses this to calculate the current page number:
size_t ZLTextView::pageNumber() const {
    if (textArea().isEmpty()) {
        return 0;
    }
    std::vector<size_t>::const_iterator i = nextBreakIterator();
    const size_t startIndex = (i != myTextBreaks.begin()) ? *(i - 1) : 0;
    const size_t endIndex = (i != myTextBreaks.end()) ? *i :
        textArea().model()->paragraphsNumber();
    return (myTextSize[endIndex] - myTextSize[startIndex]) / 2048 + 1;
}
The Java version uses this function to compute the page number:
private synchronized int computeTextPageNumber(int textSize) {
    if (myModel == null || myModel.getParagraphsNumber() == 0) {
        return 1;
    }
    final float factor = 1.0f / computeCharsPerPage();
    final float pages = textSize * factor;
    return Math.max((int) (pages + 1.0f - 0.5f * factor), 1);
}
This is located in org.geometerplus.zlibrary.text.view.TextView
It's so simplistic, though, that you might as well implement your own.
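If you do roll your own, a rough sketch of the same idea might look like this (all names here are assumptions, not FBReader's API):
// Estimate characters per page from the view and font metrics, then
// derive a page count from the total character count of the text.
static int pageCount(int totalChars, int viewWidth, int viewHeight,
                     float avgCharWidth, float lineHeight) {
    int charsPerLine = Math.max(1, (int) (viewWidth / avgCharWidth));
    int linesPerPage = Math.max(1, (int) (viewHeight / lineHeight));
    int charsPerPage = charsPerLine * linesPerPage;
    return Math.max(1, (totalChars + charsPerPage - 1) / charsPerPage);
}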
As I understood it, it uses three bitmaps: previous, current, and next. The text is rendered, stored, and read across these three bitmaps. For the scroll indicator you see at the top, it calculates per-paragraph data about how long each paragraph is. You can start reverse engineering at the bitmapManager class in the android.view package. That should explain everything about how they do their paging.
