I am trying to port libpng with the APNG patch to the Android platform, using it to read animated PNG files.
My problem is that I can't find any 'skip' method declared in png.h. What I want is to jump directly to a specific frame, but I cannot get a correct result unless I read from the beginning and call png_read_frame_head() and png_read_image() for every preceding frame.
Is there any way to jump to a specific frame by index, without reading all the frame info/data before it?
The following code is from the APNG sample at http://littlesvr.ca/apng/tutorial/x57.html. You can see it reads the APNG file in a loop, and it seems you have to call png_read_frame_head() and png_read_image() to keep the internal state of png_ptr_read and info_ptr_read up to date. So if there is a way to simply set these two structs to the correct state for reading a specific frame, my question is solved.
for (count = 0; count < png_get_num_frames(png_ptr_read, info_ptr_read); count++)
{
    sprintf(filename, "extracted-%02d.png", count);
    newImage = fopen(filename, "wb");
    if (newImage == NULL)
        fatalError("couldn't create png for writing");

    writeSetup(newImage, &png_ptr_write, &info_ptr_write);

    if (setjmp(png_ptr_write->jmpbuf))
        fatalError("something didn't work, jump 2");

    png_read_frame_head(png_ptr_read, info_ptr_read);

    if (png_get_valid(png_ptr_read, info_ptr_read, PNG_INFO_fcTL))
    {
        png_get_next_frame_fcTL(png_ptr_read, info_ptr_read,
                                &next_frame_width, &next_frame_height,
                                &next_frame_x_offset, &next_frame_y_offset,
                                &next_frame_delay_num, &next_frame_delay_den,
                                &next_frame_dispose_op, &next_frame_blend_op);
    }
    else
    {
        /* the first frame doesn't have an fcTL so it's expected to be hidden,
         * but we'll extract it anyway */
        next_frame_width = png_get_image_width(png_ptr_read, info_ptr_read);
        next_frame_height = png_get_image_height(png_ptr_read, info_ptr_read);
    }

    writeSetup2(png_ptr_read, info_ptr_read, png_ptr_write, info_ptr_write,
                next_frame_width, next_frame_height);
    png_write_info(png_ptr_write, info_ptr_write);
    png_read_image(png_ptr_read, rowPointers);
    png_write_image(png_ptr_write, rowPointers);
    png_write_end(png_ptr_write, NULL);
    png_destroy_write_struct(&png_ptr_write, &info_ptr_write);
    fclose(newImage);

    printf("extracted frame %d into %s\n", count, filename);
}
You can't. libpng was designed to treat PNG data as a stream, so it decodes chunks sequentially, one by one. I'm also not sure why you need to skip APNG frames: just as in video formats, a frame may be stored as "what changed since the previous frame" rather than as a full frame, so you may need the previous frame(s) anyway.
These code examples might be useful:
https://sourceforge.net/projects/apng/files/libpng/examples/
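In practice, then, "seeking" to a frame means decoding every frame before it and simply not keeping the output. A minimal sketch of that idea, reusing the read structs and rowPointers from the tutorial code above (target_index is a hypothetical variable, assumed to be less than png_get_num_frames(); composing frames onto the canvas according to their fcTL dispose/blend ops is still up to you):

png_uint_32 frame;
for (frame = 0; frame <= target_index; frame++)
{
    png_read_frame_head(png_ptr_read, info_ptr_read);
    /* every frame must be decoded, because it may be stored as a
     * delta over the previous one */
    png_read_image(png_ptr_read, rowPointers);
}
/* rowPointers now holds the raw pixel data of frame target_index */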
Related
I'm trying to make a video-compression algorithm, and one way I want to reduce size is by dropping the frame rate. The code below shows how I achieve this: basically, I keep advancing to the next sample until the time difference is greater than the inverse of the desired frame rate.
if (!firstFrame) {
    while (!eof && extractor.getSampleTime() - prevSampleTime < 1000000 / frameRate) {
        eof = !extractor.advance();
    }
}
firstFrame = false;
prevSampleTime = extractor.getSampleTime();
However, the obvious problem with this is that dropping a frame means the next diff frame is computed against the wrong frame, resulting in a distorted video. Is there any way to extract the full image Bitmap at a particular frame? Basically I want to achieve something like this:
Video frames are extracted iteratively and unwanted frames are dropped
Remaining frames are converted into their full Bitmap
All Bitmaps are strung together to form the raw video
Raw video is compressed with AVC compression.
I don't mind how long this process takes as it will be running in the background and the output will not be displayed immediately.
Since most video compression algorithms today use inter-frame prediction, some frames (usually most of them) can't be decoded on their own, i.e. without feeding in previous frames, as you noted in the question.
That means you must decode all frames, and only then drop some of them before encoding.
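A minimal sketch of that decode-everything-then-drop loop with MediaCodec (decoder, info, TIMEOUT_US, and targetFrameRate are assumed to be set up elsewhere, and the decoder's output Surface is assumed to feed your encoder):

// Every sample is fed to the decoder, so inter-frame prediction stays
// intact; the frame-rate decision is made on *decoded* output instead.
long frameIntervalUs = 1_000_000L / targetFrameRate; // e.g. 15 fps -> 66666 us
long lastKeptUs = Long.MIN_VALUE;

int outIndex = decoder.dequeueOutputBuffer(info, TIMEOUT_US);
if (outIndex >= 0) {
    boolean keep = info.presentationTimeUs - lastKeptUs >= frameIntervalUs;
    if (keep) {
        lastKeptUs = info.presentationTimeUs;
    }
    // render == true forwards the decoded frame to the encoder's Surface;
    // dropped frames are still fully decoded, just never rendered.
    decoder.releaseOutputBuffer(outIndex, keep);
}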
I have a video player in my Android app built with the ExoPlayer library. The player plays .m3u8 videos (which I obtain from my backend), and each stream can come in different qualities, for example 1024x576, 768x432, etc. I want to show the user a dialog for changing the video stream quality. For this I use the following code from the ExoPlayer samples on GitHub:
MappingTrackSelector.MappedTrackInfo mappedTrackInfo = trackSelector.getCurrentMappedTrackInfo();
if (mappedTrackInfo != null) {
    CharSequence title = "Title";
    int rendererIndex = 0; // renderer for video
    int rendererType = mappedTrackInfo.getRendererType(rendererIndex);
    boolean allowAdaptiveSelections =
            rendererType == C.TRACK_TYPE_VIDEO
                    || (rendererType == C.TRACK_TYPE_AUDIO
                            && mappedTrackInfo.getTypeSupport(C.TRACK_TYPE_VIDEO)
                                    == MappingTrackSelector.MappedTrackInfo.RENDERER_SUPPORT_NO_TRACKS);
    Pair<AlertDialog, TrackSelectionView> dialogPair =
            TrackSelectionView.getDialog(this, title, trackSelector, rendererIndex);
    dialogPair.second.setShowDisableOption(true);
    dialogPair.second.setAllowAdaptiveSelections(allowAdaptiveSelections);
    dialogPair.first.show();
}
and it works okay. But I need to customize this dialog, for example removing the "None" option and making ALL elements single-choice only. How can I do this?
This might be late, but here is how to do it.
The main class doing all the work here is TrackSelectionView, which simply extends LinearLayout. To get the features you want, you need to make your own class (the name doesn't matter) and copy the entire code of TrackSelectionView into it. Why? Because we need to change some of that class's logic, and it's a read-only class.
Actually, for the first feature (no "None" option) you can simply write dialogPair.second.setShowDisableOption(false); instead of passing true.
Writing our own class and copying the code over is for the second feature.
TrackSelectionView stores its CheckedTextViews in a 2-D array. The first two toggle buttons (Auto and None) get their own CheckedTextViews, but the views for all the other resolutions are kept in that 2-D array.
I won't post the entire codebase here as it would make things messy; I have created a GitHub gist you can use as a reference:
https://gist.github.com/abhiint16/b473e9b1111bd8bda4833c288ae6a1b4
Don't forget to use your class's name in place of TrackSelectionView.
You use the file above as shown in this gist:
https://gist.github.com/abhiint16/165449a1a7d1a55a8f69d23718c603c2
The gist makes the selection single-select, and on top of that it does one more thing you may want in your ExoPlayer.
The actual video formats arrive in a list as entries like "512 x 288, 0.57 Mbps"; I'm simply mapping predefined labels (Low, Medium, High, etc.) to the indices of that list. You can do it your own way.
So when you click one of the resolutions, it updates a TextView in your ExoPlayer with the initial of the selected resolution ("L" for "Low").
For that you just need to implement an interface named GetReso in your class; it hands you the selected initial, which you can then set on a TextView.
Enjoy coding.....
For anyone seeing this after 2021: it took me quite a while to achieve a similar scenario in the ExoPlayer demo application, so I've decided to share how I solved it.
Use the code below inside the TrackSelectionDialog file.
TrackNameProvider trackNameProvider = new DefaultTrackNameProvider(getResources());
TrackSelectionView trackSelectionView = rootView.findViewById(R.id.exo_track_selection_view);
trackSelectionView.setTrackNameProvider(f ->
        f.height != Format.NO_VALUE
                ? (Math.round(f.frameRate) + " FPS, "
                        + (f.bitrate == Format.NO_VALUE
                                ? ""
                                : getResources().getString(R.string.exo_track_bitrate, f.bitrate / 1000000f))
                        + ", " + f.height + " P")
                : trackNameProvider.getTrackName(f));
This will show video tracks as "25 FPS, 2.11 Mbps, 720P". You can modify it in any way you want.
Note that it keeps the default formatting for audio and text tracks.
Based on this thread: is there a way to process an image from the camera in QML without saving it?
Starting from the example in the docs, the capture() function saves the image to the Pictures location.
What I would like to achieve is to process the camera image every second using onImageCaptured, but I don't want to save it to the drive.
I've tried to implement a cleanup operation using the onImageSaved signal, but it affects onImageCaptured too.
As explained in this answer, you can bridge C++ and QML via the mediaObject. That can be done via objectName (as in the linked answer) or by using a dedicated Q_PROPERTY (more on that later). In either case you should end up with code like this:
QObject *source; // QML camera pointer obtained as described above
QMediaObject *cameraRef = qvariant_cast<QMediaObject*>(source->property("mediaObject"));
Once you have a hook to the camera, use it as the source of a QVideoProbe object, i.e.
QVideoProbe *probe = new QVideoProbe;
probe->setSource(cameraRef);
Connect the videoFrameProbed signal to an appropriate slot, i.e.
connect(probe, SIGNAL(videoFrameProbed(QVideoFrame)), this, SLOT(processFrame(QVideoFrame)));
and that's it: you can now process your frames inside the processFrame function. An implementation of such a function looks like this:
void YourClass::processFrame(QVideoFrame frame)
{
    QVideoFrame cFrame(frame);
    cFrame.map(QAbstractVideoBuffer::ReadOnly);

    int w {cFrame.width()};
    int h {cFrame.height()};

    QImage::Format f;
    if ((f = QVideoFrame::imageFormatFromPixelFormat(cFrame.pixelFormat())) == QImage::Format_Invalid)
    {
        QImage image(cFrame.size(), QImage::Format_ARGB32);
        // NV21 to ARGB32 conversion needed here!
        //
        // DECODING HAPPENS HERE on "image"
    }
    else
    {
        QImage image(cFrame.bits(), w, h, f);
        //
        // DECODING HAPPENS HERE on "image"
    }

    cFrame.unmap();
}
Two important implementation details here:
Android devices use a YUV format, which is currently not supported by QImage and must be converted by hand (see the conversion sketch after this list). I've made the strong assumption here that all invalid formats are YUV; that would be better handled via #ifdef conditionals on the current OS.
The decoding can be quite costly, so you can skip frames (simply add a counter to this method) or offload the work to a dedicated thread. How much that matters depends on the pace at which frames come in. Reducing their size, e.g. processing only a portion of the QImage, can also greatly improve performance.
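For reference, here is a hand-rolled NV21-to-ARGB32 sketch that could fill the invalid-format branch above, called as nv21ToArgb32(cFrame.bits(), w, h). It assumes the frame really is NV21 (a full-size Y plane followed by an interleaved V/U plane at quarter resolution), which is exactly the assumption flagged in the first point:

static QImage nv21ToArgb32(const uchar *data, int w, int h)
{
    QImage image(w, h, QImage::Format_ARGB32);
    const uchar *yPlane = data;
    const uchar *vuPlane = data + w * h; // V/U pairs, one per 2x2 pixel block

    for (int y = 0; y < h; ++y) {
        QRgb *line = reinterpret_cast<QRgb *>(image.scanLine(y));
        for (int x = 0; x < w; ++x) {
            int Y = yPlane[y * w + x];
            int vuIndex = (y / 2) * w + (x & ~1);
            int V = vuPlane[vuIndex] - 128;
            int U = vuPlane[vuIndex + 1] - 128;
            // standard YUV -> RGB coefficients
            int r = qBound(0, int(Y + 1.402 * V), 255);
            int g = qBound(0, int(Y - 0.344 * U - 0.714 * V), 255);
            int b = qBound(0, int(Y + 1.772 * U), 255);
            line[x] = qRgb(r, g, b);
        }
    }
    return image;
}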
For that matter, I would avoid the objectName approach for fetching the mediaObject altogether, and instead register a new type so that the Q_PROPERTY approach can be used. I'm thinking of something along these lines:
class FrameAnalyzer : public QObject
{
    Q_OBJECT
    Q_PROPERTY(QObject* source READ source WRITE setSource)

    QObject *m_source; // kept for the sake of the READ function
    QVideoProbe probe;
    // ...

public:
    QObject *source() const { return m_source; }
    bool setSource(QObject *source);

public slots:
    void processFrame(QVideoFrame frame);
};
where setSource is simply:
bool FrameAnalyzer::setSource(QObject *source)
{
    m_source = source;
    return probe.setSource(qvariant_cast<QMediaObject*>(source->property("mediaObject")));
}
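One detail the sketch leaves implicit is connecting the probe to the slot; presumably once, in the constructor, along these lines (assumed wiring, not shown in the original answer):

FrameAnalyzer::FrameAnalyzer(QObject *parent)
    : QObject(parent), m_source(nullptr)
{
    // forward every probed frame to our processing slot
    connect(&probe, SIGNAL(videoFrameProbed(QVideoFrame)),
            this, SLOT(processFrame(QVideoFrame)));
}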
Once registered as usual, i.e.
qmlRegisterType<FrameAnalyzer>("FrameAnalyzer", 1, 0, "FrameAnalyzer");
you can directly set the source property in QML as follows:
// other imports
import FrameAnalyzer 1.0

Item {
    Camera {
        id: camera

        // camera stuff here

        Component.onCompleted: analyzer.source = camera
    }

    FrameAnalyzer {
        id: analyzer
    }
}
A great advantage of this approach is readability and the tighter coupling between the Camera code and the processing code, at the expense of a (slightly) higher implementation effort.
I'm developing an app in which I need to create a video with a frame around it. Basically, I get video through the standard camera and then need to add a frame around it. In the picture, my video needs to go where the blue area is.
I have already read tons of information about video processing and post-processing, OpenCV, ffmpeg, etc. Does anyone know how I can achieve this?
After many hours I found only one solution: use ffmpeg. You can build it and use it through the Android JNI. In my case I used the ffmpeg executable: in onCreate I install it from the raw resources and then invoke its commands. (There are many solutions on the internet and on StackOverflow covering ffmpeg commands.)
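For the frame-around-the-video case specifically, an overlay filter along these lines is the usual approach (file names and offsets are placeholders; the frame image is the background and the camera video is composited into the blue area):

ffmpeg -i frame.png -i camera.mp4 \
       -filter_complex "[0:v][1:v]overlay=x=100:y=100" \
       -c:a copy out.mp4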
This is very simple.
Try to understand what I have written.
import processing.video.*; // needed for Movie

PImage frame, temp;

void setup()
{
    /* Display your frame here */
    frame = get(); // this will capture the screen
}

void movieEvent(Movie m)
{
    m.read();
    frame.copy(m, 0, 0, m.width, m.height,
               Xbluestart, Ybluestart,
               Xblueend - Xbluestart, Yblueend - Ybluestart);
}

void draw()
{
    image(frame, 0, 0); // note: image(), not Image()
}
I think this should solve your problem.
P.S. Instead of writing Xbluestart, Xblueend, and so on, put the actual coordinates of the blue rectangle in there (note that copy() expects the destination width and height as its last two arguments, hence the subtractions above).
I am trying to grab consecutive frames on Android using OpenCV's VideoCapture class. I actually want to implement optical flow on Android, for which I need 2 frames. I implemented optical flow in C first, where I grabbed the frames using cvQueryFrame, and everything worked fine. But on Android, when I call
if (capture.grab())
{
    if (capture.retrieve(mRgba))
        Log.i(TAG, "first frame retrieved");
}
if (capture.grab())
{
    if (capture.retrieve(mRgba2))
        Log.i(TAG, "2nd frame retrieved");
}
and then subtract the matrices using Imgproc.subtract(mRgba, mRgba2, output) and display the output, it gives me a black image, indicating that mRgba and mRgba2 contain the same data. Can anyone help with how to grab two different images? According to the OpenCV documentation, mRgba and mRgba2 should be different.
This question is an exact duplicate of
read successive frames OpenCV using cvQueryframe
You have to copy the image to another memory block, because the capture always returns the same pointer.
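In the Java bindings that means deep-copying the Mat right after each retrieve. A sketch based on the snippet from the question (output and the subtraction are assumed from there):

Mat firstFrame = new Mat();
Mat secondFrame = new Mat();

if (capture.grab() && capture.retrieve(mRgba))
    mRgba.copyTo(firstFrame);  // deep copy, detached from the capture's buffer

if (capture.grab() && capture.retrieve(mRgba))
    mRgba.copyTo(secondFrame); // mRgba is reused here, but firstFrame is safe

Core.subtract(firstFrame, secondFrame, output); // now a real frame difference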