I am trying to do 3DR texturing, but the resulting texture always contains only the vertex colors.
On every frame I store the image as a PNG:
RGBImage frame(t3dr_image, 4);
std::ostringstream ss;
ss << dataset_.c_str();
ss << "/";
ss << poses_.size();
ss << ".png";
frame.Write(ss.str().c_str());
poses_.push_back(t3dr_image_pose);
timestamps_.push_back(t3dr_image.timestamp);
In the Save method I try to run the texturing:
1) Extract the full mesh from the context
Tango3DR_Mesh* mesh = 0;
Tango3DR_Status ret;
ret = Tango3DR_extractFullMesh(t3dr_context_, &mesh);
if (ret != TANGO_3DR_SUCCESS)
std::exit(EXIT_SUCCESS);
2) Create a texturing context using the extracted mesh
Tango3DR_ConfigH textureConfig;
textureConfig = Tango3DR_Config_create(TANGO_3DR_CONFIG_TEXTURING);
ret = Tango3DR_Config_setDouble(textureConfig, "min_resolution", 0.01);
if (ret != TANGO_3DR_SUCCESS)
std::exit(EXIT_SUCCESS);
Tango3DR_TexturingContext context;
context = Tango3DR_createTexturingContext(textureConfig, dataset.c_str(), mesh);
if (context == nullptr)
std::exit(EXIT_SUCCESS);
Tango3DR_Config_destroy(textureConfig);
3) Call Tango3DR_updateTexture with the data I stored before (this is the part that does not work)
for (unsigned int i = 0; i < poses_.size(); i++) {
std::ostringstream ss;
ss << dataset_.c_str();
ss << "/";
ss << i;
ss << ".png";
RGBImage frame(ss.str());
Tango3DR_ImageBuffer image;
image.width = frame.GetWidth();
image.height = frame.GetHeight();
image.stride = frame.GetWidth() * 3;
image.timestamp = timestamps_[i];
//the data is definitely in this format
image.format = TANGO_3DR_HAL_PIXEL_FORMAT_RGB_888;
image.data = frame.GetData();
ret = Tango3DR_updateTexture(context, &image, &poses_[i]);
if (ret != TANGO_3DR_SUCCESS)
std::exit(EXIT_SUCCESS);
}
4) Texture the mesh
ret = Tango3DR_Mesh_destroy(mesh);
if (ret != TANGO_3DR_SUCCESS)
std::exit(EXIT_SUCCESS);
mesh = 0;
ret = Tango3DR_getTexturedMesh(context, &mesh);
if (ret != TANGO_3DR_SUCCESS)
std::exit(EXIT_SUCCESS);
5) Save it as OBJ (the resulting texture contains only data from the vertex colors - why?)
ret = Tango3DR_Mesh_saveToObj(mesh, filename.c_str());
if (ret != TANGO_3DR_SUCCESS)
std::exit(EXIT_SUCCESS);
ret = Tango3DR_destroyTexturingContext(context);
if (ret != TANGO_3DR_SUCCESS)
std::exit(EXIT_SUCCESS);
ret = Tango3DR_Mesh_destroy(mesh);
if (ret != TANGO_3DR_SUCCESS)
std::exit(EXIT_SUCCESS);
All methods returned TANGO_3DR_SUCCESS.
Full code here: https://github.com/lvonasek/tango
Thanks for reaching out and providing the detailed code breakdown.
The error is on our end - the library currently doesn't support RGB texture inputs; it assumes YUV for all input images. I've opened a ticket to track this bug, and we'll fix it in the next release by allowing RGB input and providing better return values for invalid image formats.
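Until that release lands, one possible stopgap is to convert each stored RGB frame to NV21 before handing it to Tango3DR_updateTexture. A minimal sketch, assuming the 3DR header exposes TANGO_3DR_HAL_PIXEL_FORMAT_YCrCb_420_SP (NV21) as an accepted input format:
// Hedged sketch: convert an RGB888 frame to NV21 so the texturing pipeline
// receives the YUV layout it currently expects (BT.601 integer coefficients).
#include <algorithm>
#include <cstdint>
#include <vector>
static std::vector<uint8_t> RgbToNv21(const uint8_t* rgb, int width, int height) {
    std::vector<uint8_t> yuv(width * height * 3 / 2);
    uint8_t* y_plane = yuv.data();
    uint8_t* vu_plane = yuv.data() + width * height;  // interleaved V/U, 2x2 subsampled
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const uint8_t* p = rgb + (y * width + x) * 3;
            const int r = p[0], g = p[1], b = p[2];
            const int luma = (66 * r + 129 * g + 25 * b + 128) / 256 + 16;
            y_plane[y * width + x] = (uint8_t) std::min(std::max(luma, 0), 255);
            if ((y & 1) == 0 && (x & 1) == 0) {       // one V/U pair per 2x2 block
                const int v = (112 * r - 94 * g - 18 * b + 128) / 256 + 128;
                const int u = (-38 * r - 74 * g + 112 * b + 128) / 256 + 128;
                uint8_t* vu = vu_plane + (y / 2) * width + x;
                vu[0] = (uint8_t) std::min(std::max(v, 0), 255);
                vu[1] = (uint8_t) std::min(std::max(u, 0), 255);
            }
        }
    }
    return yuv;
}
Inside the update loop this would replace the RGB_888 setup, roughly: build the buffer with RgbToNv21(frame.GetData(), frame.GetWidth(), frame.GetHeight()), set image.format to TANGO_3DR_HAL_PIXEL_FORMAT_YCrCb_420_SP, image.stride to frame.GetWidth(), and image.data to the converted buffer.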
Edit: Found another bug on our end. The API states that image_pose should be the pose of the image, but our implementation actually expects the pose of the device. I've opened a bug, and this will be fixed in the next release (release-H).
You can try working around this for now by passing in the device pose without multiplying in the device-to-camera extrinsic calibration, although of course that's just a temporary band-aid.
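A minimal sketch of that band-aid, assuming your app currently composes the image pose from the device pose and the camera extrinsics (ComposePose and device_T_color_camera below are hypothetical stand-ins for however t3dr_image_pose is built today):
// Hedged sketch of the temporary workaround described above.
Tango3DR_Status UpdateTextureWorkaround(Tango3DR_TexturingContext context,
                                        Tango3DR_ImageBuffer* image,
                                        const Tango3DR_Pose& device_pose) {
    // Correct per the API docs (and per release-H once the fix ships):
    //   Tango3DR_Pose image_pose = ComposePose(device_pose, device_T_color_camera);
    // Release-G band-aid: pass the device pose through unmodified.
    Tango3DR_Pose image_pose = device_pose;
    return Tango3DR_updateTexture(context, image, &image_pose);
}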
I am trying to access the depth map generated by the TOF camera through the Android camera2 NDK API, but the app always crashes when I copy the depth map.
Android os : 10
Android api : 28
arch : armv8a
Qt version : Felgo 3.6.0
Develop on : windows 10 64 bits
ndk : r18b
phone : mate30
Error messages : F libc : Fatal signal 11 (SIGSEGV), code 2 (SEGV_ACCERR), fault addr 0x741c259980 in tid 31512 (ImageReader-480), pid 31412 (P.Androidndkcam)
Find the back camera that supports DEPTH16
std::tuple<std::string, bool> get_camera_depth_id(ACameraManager *cam_manager, int camera_facing)
{
auto camera_ids = get_camera_id_list(cam_manager);
if(camera_ids){
qInfo()<<__func__<<": found camera count "<<camera_ids->numCameras;
for(int i = 0; i < camera_ids->numCameras; ++i){
const char *id = camera_ids->cameraIds[i];
camera_status_t ret = ACAMERA_OK;
auto chars = get_camera_characteristics(cam_manager, id, &ret);
if(ret != ACAMERA_OK){
qInfo()<<__func__<<": cannot obtain characteristics of camera id = "<<id;
continue;
}
auto const entry = get_camera_capabilities(chars.get(), &ret);
if(ret != ACAMERA_OK){
qInfo()<<__func__<<": cannot obtain capabilities of camera id = "<<id;
continue;
}
ACameraMetadata_const_entry lens_info;
ACameraMetadata_getConstEntry(chars.get(), ACAMERA_LENS_FACING, &lens_info);
auto const facing = static_cast<acamera_metadata_enum_android_lens_facing_t>(lens_info.data.u8[0]); //ACAMERA_LENS_FACING has a single entry, so index 0 rather than the camera loop index
bool is_right_face = facing == camera_facing;
bool support_bc = false, support_depth = false;
for(uint32_t i = 0; i < entry.count; i++) {
if(entry.data.u8[i] == ACAMERA_REQUEST_AVAILABLE_CAPABILITIES_BACKWARD_COMPATIBLE){
support_bc = true;
}
if(entry.data.u8[i] == ACAMERA_REQUEST_AVAILABLE_CAPABILITIES_DEPTH_OUTPUT){
support_depth = true;
}
}
qInfo()<<__func__<<" support bc = "<<support_bc<<", support depth = "<<support_depth
<<", is right face = "<<is_right_face;
if(is_right_face && support_depth){
qInfo()<<__func__<<": obtain depth camera id = "<<id;
return {id, support_bc};
}
}
}else{
qInfo()<<__func__<<": cannot get depth cam";
}
return {};
}
open camera
void initCam()
{
qDebug()<<__func__<<": init camera manager";
cameraManager = ACameraManager_create();
qDebug()<<__func__<<": get back facing camera id";
auto [id, support_bc] = get_camera_depth_id(cameraManager, ACAMERA_LENS_FACING_BACK);
//auto const id = get_camera_id(cameraManager, ACAMERA_LENS_FACING_BACK);
qInfo()<<__func__<<": back camera id = "<<id.c_str();
if(!id.empty()){
auto const cam_status =
ACameraManager_openCamera(cameraManager, id.c_str(), &cameraDeviceCallbacks, &cameraDevice);
qInfo()<<__func__<<" cam status = "<<cam_status;
qDebug()<<__func__<<": open camera";
android_cam_info cam_info(*cameraManager, id.c_str());
qInfo()<<__func__<<" print depth stream configuration info";
//print the format, width, height, is input information
cam_info.stream_config(ACAMERA_DEPTH_AVAILABLE_DEPTH_STREAM_CONFIGURATIONS).print();
//obtain minimum width and height for the depth map
std::tie(width_, height_) =
cam_info.stream_config(ACAMERA_DEPTH_AVAILABLE_DEPTH_STREAM_CONFIGURATIONS).get_minimum_dimension();
imageReader = createJpegReader();
if(imageReader){
imageWindow = createSurface(imageReader);
ANativeWindow_acquire(imageWindow);
ACameraDevice_createCaptureRequest(cameraDevice, TEMPLATE_PREVIEW, &request);
ACameraOutputTarget_create(imageWindow, &imageTarget);
ACaptureRequest_addTarget(request, imageTarget);
ACaptureSessionOutput_create(imageWindow, &imageOutput);
ACaptureSessionOutputContainer_create(&outputs);
ACaptureSessionOutputContainer_add(outputs, imageOutput);
ACameraDevice_createCaptureSession(cameraDevice, outputs, &sessionStateCallbacks, &textureSession);
// Start capturing continuously
ACameraCaptureSession_setRepeatingRequest(textureSession, &captureCallbacks, 1, &request, nullptr);
}
}
}
The way I create the imageReader
AImageReader* createJpegReader()
{
AImageReader* reader = nullptr;
media_status_t status = AImageReader_new(width_, height_, AIMAGE_FORMAT_DEPTH16, 1, &reader);
if(status != AMEDIA_OK){
qInfo()<<__func__<<": cannot create AImageReader, error code is = "<<status;
return nullptr;
}
AImageReader_ImageListener listener;
listener.context = this;
listener.onImageAvailable = imageCallback;
AImageReader_setImageListener(reader, &listener);
return reader;
}
Create surface
ANativeWindow* createSurface(AImageReader* reader)
{
ANativeWindow *nativeWindow;
AImageReader_getWindow(reader, &nativeWindow);
return nativeWindow;
}
The callback of the AImageReader
static void process_depth_16(void* context, AImage *image)
{
uint16_t *data = nullptr;
int len = 0;
auto const status = AImage_getPlaneData(image, 0, reinterpret_cast<uint8_t**>(&data), &len);
if(status != AMEDIA_OK){
qInfo()<<__func__<<": AImage_getPlaneData fail, error code = "<<status;
return;
}
qInfo()<<__func__<<": image len = "<<len;
auto *impl = static_cast<pimpl*>(context);
convert_depth_16_to_cvmat(data, impl->width_, impl->height_);
}
static void imageCallback(void* context, AImageReader* reader)
{
qDebug()<<__func__;
int status = -1;
auto image = get_next_image(reader, &status);
if(status != AMEDIA_OK){
qInfo()<<__func__<<": cannot acquire next image, error code = "<<status;
return;
}
int32_t format = -1;
AImage_getFormat(image.get(), &format);
if(format == AIMAGE_FORMAT_DEPTH16){
process_depth_16(context, image.get());
}else{
qInfo()<<__func__<<": do not support format = "<<format;
}
}
Copy the depth map (this function causes the app to crash)
cv::Mat convert_depth_16_to_cvmat(uint16_t *data, int32_t width, int32_t height)
{
auto *depth_ptr = data;
cv::Mat output(height, width, CV_16U);
for(int32_t row = 0; row != height; ++row){
depth_ptr += width * row;
auto *output_ptr = output.ptr<ushort>(row);
qInfo()<<__func__<<": row = "<<row; //crash when row equal to 27,sometimes over 50
for(int32_t col = 0; col != width; ++col){
output_ptr[col] = depth_ptr[col];
}
}
return output;
}
The minimum width and height I obtain from ACAMERA_DEPTH_AVAILABLE_DEPTH_STREAM_CONFIGURATIONS are 480x360. Is anything wrong with the code?
The other utility functions are on pastebin.
Edit: The length of the data is 345600 bytes, which means there should be 172800 (480x360) pixels of type uint16_t.
I found out the answer: the Mate 30 does not support the depth map, but the native API does not tell us that. Maybe the depth-map-related functions of the native API are not that mature yet.
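For reference, and independent of device support, a copy routine that re-derives each row pointer from the plane base (the loop above adds width * row to depth_ptr on every iteration, so the offset grows quadratically) and honours the row stride reported by the reader could look like this sketch; AImage_getPlaneRowStride is part of the NDK media API:
// Hedged sketch: copy a DEPTH16 plane into a cv::Mat, indexing each row from
// the plane base and honouring the reported row stride (which may be larger
// than width * sizeof(uint16_t) on some devices).
#include <cstdint>
#include <cstring>
#include <media/NdkImage.h>
#include <opencv2/core.hpp>
cv::Mat copy_depth16(AImage *image, int32_t width, int32_t height)
{
    uint8_t *data = nullptr;
    int len = 0;
    if (AImage_getPlaneData(image, 0, &data, &len) != AMEDIA_OK)
        return {};
    int32_t row_stride = 0;                       // stride in bytes
    AImage_getPlaneRowStride(image, 0, &row_stride);
    cv::Mat output(height, width, CV_16U);
    for (int32_t row = 0; row < height; ++row) {
        const auto *src = reinterpret_cast<const uint16_t*>(data + row * row_stride);
        std::memcpy(output.ptr<uint16_t>(row), src, width * sizeof(uint16_t));
    }
    return output;
}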
I need to pass the FFmpeg 'raw' frame data back to my Java code in order to display it on the screen.
I have a native method that deals with FFmpeg and afterwards calls a Java method that takes byte[] (so far) as an argument.
The byte array that is passed can be read from Java, but BitmapFactory.decodeByteArray(bitmap, 0, bitmap.length) returns null. I have printed out the array and I get the expected ~200k elements, but it cannot be decoded. So far I take the data from AVFrame->data, cast it to unsigned char *, and then cast that to jbyteArray. After all the casting, I pass the jbyteArray as the argument to my Java method. Is there something I'm missing here? Why won't BitmapFactory decode the array into an image for display?
EDIT 1.0
Currently I am trying to obtain my image via
public void setImage(ByteBuffer bmp) {
bmp.rewind();
Bitmap bitmap = Bitmap.createBitmap(1920, 1080, Bitmap.Config.ARGB_8888);
bitmap.copyPixelsFromBuffer(bmp);
runOnUiThread(() -> {
ImageView imgViewer = findViewById(R.id.mSurfaceView);
imgViewer.setImageBitmap(bitmap);
});
}
But I keep getting an exception
JNI DETECTED ERROR IN APPLICATION: JNI NewDirectByteBuffer called with pending exception java.lang.RuntimeException: Buffer not large enough for pixels
at void android.graphics.Bitmap.copyPixelsFromBuffer(java.nio.Buffer) (Bitmap.java:657)
at void com.example.asmcpp.MainActivity.setSurfaceImage(java.nio.ByteBuffer)
Edit 1.1
So here is the full code that executes every time a frame comes in. Note that the ByteBuffer is created and passed from within this method:
void VideoClientInterface::onEncodedFrame(video::encoded_frame_t &encodedFrame) {
AVFrame *filt_frame = av_frame_alloc();
auto frame = std::shared_ptr<video::encoded_frame_t>(new video::encoded_frame_t,
[](video::encoded_frame_t *p) { if (p) delete p; });
if (frame) {
frame->size = encodedFrame.size;
frame->ssrc = encodedFrame.ssrc;
frame->width = encodedFrame.width;
frame->height = encodedFrame.height;
frame->dataType = encodedFrame.dataType;
frame->timestamp = encodedFrame.timestamp;
frame->frameIndex = encodedFrame.frameIndex;
frame->isKeyFrame = encodedFrame.isKeyFrame;
frame->isDroppable = encodedFrame.isDroppable;
frame->data = new char[frame->size];
if (frame->data) {
memcpy(frame->data, encodedFrame.data, frame->size);
AVPacket packet;
av_init_packet(&packet);
packet.dts = AV_NOPTS_VALUE;
packet.pts = encodedFrame.timestamp;
packet.data = (uint8_t *) encodedFrame.data;
packet.size = encodedFrame.size;
int ret = avcodec_send_packet(m_avCodecContext, &packet);
if (ret == 0) {
ret = avcodec_receive_frame(m_avCodecContext, m_avFrame);
if (ret == 0) {
m_transform = sws_getCachedContext(
m_transform, // previous context ptr
m_avFrame->width, m_avFrame->height, AV_PIX_FMT_YUV420P, // src
m_avFrame->width, m_avFrame->height, AV_PIX_FMT_RGB24, // dst
SWS_BILINEAR, nullptr, nullptr, nullptr // options
);
auto decodedFrame = std::make_shared<video::decoded_frame_t>();
decodedFrame->width = m_avFrame->width;
decodedFrame->height = m_avFrame->height;
decodedFrame->size = m_avFrame->width * m_avFrame->height * 3;
decodedFrame->timeStamp = m_avFrame->pts;
decodedFrame->data = new unsigned char[decodedFrame->size];
if (decodedFrame->data) {
uint8_t *dstSlice[] = {decodedFrame->data,
0,
0};// outFrame.bits(), outFrame.bits(), outFrame.bits()
const int dstStride[] = {decodedFrame->width * 3, 0, 0};
sws_scale(m_transform, m_avFrame->data, m_avFrame->linesize,
0, m_avFrame->height, dstSlice, dstStride);
auto m_rawData = decodedFrame->data;
auto len = strlen(reinterpret_cast<char *>(m_rawData));
if (frameCounter == 10) {
jobject newArray = GetJniEnv()->NewDirectByteBuffer(m_rawData, len);
GetJniEnv()->CallVoidMethod(m_obj, setSurfaceImage, newArray);
frameCounter = 0;
}
frameCounter++;
}
} else {
av_packet_unref(&packet);
}
} else {
av_packet_unref(&packet);
}
}
}
}
I am not entirely sure I am even doing that part correctly. If you see any errors in this, feel free to point them out.
You cannot cast native byte arrays to jbyteArray and expect it to work. A byte[] is an actual Java object with a length field, a reference count, and so on.
Use NewDirectByteBuffer instead to wrap your native buffer in a Java ByteBuffer, which the Java side can then read directly (or copy into a byte[] if it needs one).
Note that this JNI operation is relatively expensive, so if you expect to do this on a per-frame basis, you might want to pre-allocate some bytebuffers and tell FFmpeg to write directly into those buffers.
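A minimal sketch of the native side under those constraints (the object reference, method ID, and pixel buffer are illustrative; the buffer must be sized to match what the Java side expects, e.g. width * height * 4 bytes for an ARGB_8888 bitmap filled via copyPixelsFromBuffer):
// Hedged sketch: wrap a native pixel buffer in a direct ByteBuffer and hand it
// to the Java setImage(ByteBuffer) callback. The native buffer must stay alive
// until the Java side has consumed it.
#include <jni.h>
#include <cstdint>
#include <vector>
void pushFrameToJava(JNIEnv *env, jobject callbackObj, jmethodID setImageMethod,
                     std::vector<uint8_t> &rgbaPixels /* width * height * 4 bytes */)
{
    jobject byteBuffer = env->NewDirectByteBuffer(rgbaPixels.data(),
                                                  static_cast<jlong>(rgbaPixels.size()));
    if (byteBuffer == nullptr)
        return;                                   // allocation failed, Java exception pending
    env->CallVoidMethod(callbackObj, setImageMethod, byteBuffer);
    env->DeleteLocalRef(byteBuffer);              // drop our local ref; Java keeps its own
}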
I am porting my Android AOSP-based distribution from Android K to Android N. It includes a modified version of the Media Player that decodes DVD subtitles.
The architecture of the Media Player evolved a lot between those 2 versions. In particular, it is now split into 3 processes (see https://source.android.com/devices/media/framework-hardening).
I am thus trying to use shared memory so that the MediaCodecService can send decoded bitmap subtitles to the MediaServer. I modified the structure that the MediaCodecService already creates and added a subtitle_fd attribute, a file descriptor for the decoded bitmap subtitle. When the MediaServer's NuPlayer receives a message for rendering, the code tries to map that file descriptor.
Unfortunately, the result of the call to ::mmap is always MAP_FAILED.
Do you have an idea of what I missed?
Code of the MediaCodecService part
AVSubtitleRect *rect = sub->rects[0];
size_t len = sizeof(*rect);
int fd = ashmem_create_region("subtitle rect", len);
ashmem_set_prot_region(fd, PROT_READ | PROT_WRITE);
void* ptr = ::mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
if (ptr == MAP_FAILED) {
ALOGI("%s[%d] dvb ptr == MAP_FAILED", __FUNCTION__, __LINE__);
} else {
ALOGI("Success creating FD with value %d", fd);
}
memcpy(ptr, rect, len);
sub->subtitle_fd = fd;
sub->subtitle_size = len;
Code of the MediaServer part
int fd = mSubtitle->subtitle_fd;
size_t len = mSubtitle->subtitle_size;
ALOGI("Trying to map shared memory with FD = %d", fd);
void* ptr = ::mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
if (ptr == MAP_FAILED) {
ALOGI("Subtitle mmap ptr==MAP_FAILED %s", strerror(errno));
} else {
ALOGI("Subtitle get ptr %p", ptr);
}
AVSubtitleRect *rect = (AVSubtitleRect *)ptr;
Thank you so much!
I have a Tire Pressure Management System (TPMS) adapter that plugs into USB (http://store.mp3car.com/USB_TPMS_Version_2_20_4_Sensor_Kit_p/com-090.htm). I have it working with the original Windows software, as well as with Linux C code that reads the tire pressures and temperatures. I'm now trying to use this adapter on Android and am having some difficulty. I can detect the device fine, but my reads all return -1 bytes read, whatever I try. Here's the C code I'm trying to convert:
int TpmsPlugin::readUsbSensor(int sid, unsigned char *buf)
{
int r, transferred;
buf[0] = 0x20 + sid;
r = libusb_interrupt_transfer(mDeviceHandle, ENDPOINT_OUT, buf, 1, &transferred, INTR_TIMEOUT);
if (r < 0) {
DebugOut() << "TPMS: USB write interrupt failed, code " << r << endl;
}
r = libusb_interrupt_transfer(mDeviceHandle, ENDPOINT_IN, buf, 4, &transferred, INTR_TIMEOUT);
if (r < 0) {
DebugOut() << "TPMS: USB read interrupt failed, code " << r << endl;
}
return r;
}
The value of sid is 1, 2, 3 or 4 depending on the wheel. The values are then extracted with:
lfPressure = ((float)buf[0]-40) * PRESSURE_SCALE * KPA_MULTIPLIER;
lfTemperature = (float)buf[1]-40;
You can see full implementation of this driver here as well: https://github.com/otcshare/automotive-message-broker/blob/master/plugins/tpms/tpmsplugin.cpp
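Tying the read and the conversion together inside the plugin looks roughly like this (a hedged sketch; PRESSURE_SCALE and KPA_MULTIPLIER are the plugin's own constants and readUsbSensor is the member function above):
// Hedged sketch: poll each wheel and convert the raw bytes per the formulas above.
for (int sid = 1; sid <= 4; ++sid) {
    unsigned char buf[4] = {0};
    if (readUsbSensor(sid, buf) >= 0) {
        float pressure    = ((float)buf[0] - 40) * PRESSURE_SCALE * KPA_MULTIPLIER; // kPa
        float temperature = (float)buf[1] - 40;                                     // degrees C
        // ... publish pressure / temperature for wheel `sid`
    }
}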
My Android version is able to find the USB device, get permission to use it, connect to it, and get the UsbEndpoints (it lists two), but whether I try bulkTransfer() or controlTransfer(), I have failed. In particular, I've tried a lot of different controlTransfer values based on all the docs I could find. Here is some code that I've tried:
UsbInterface intf = TpmsSectionFragment.device.getInterface(0);
UsbEndpoint endpoint_in = null, endpoint_out = null;
for (int i = 0; i < intf.getEndpointCount(); i++) {
UsbEndpoint ep = intf.getEndpoint(i);
if (ep.getDirection() == UsbConstants.USB_DIR_IN)
endpoint_in = ep;
else if (ep.getDirection() == UsbConstants.USB_DIR_OUT)
endpoint_out = ep;
}
UsbDeviceConnection connection = gUsbManager.openDevice(TpmsSectionFragment.device);
connection.claimInterface(intf, false);
int timeout = 1000;
int length = 4;
while (true) {
for (int sensorId = 1; sensorId <= 4 && mReadThreadActive; sensorId++) {
byte[] tpmsRaw = new byte[length];
tpmsRaw[0] = (byte) (0x20 + sensorId);
int out_len = connection.bulkTransfer(endpoint_out, tpmsRaw, 1, timeout);
int in_len = connection.bulkTransfer(endpoint_in, tpmsRaw, 4, timeout);
//int out_len = connection.controlTransfer(0x42, 0x0, 0x100, 0, tpmsRaw, tpmsRaw.length, timeout);
//int in_len = connection.controlTransfer(0x41, 0x0, 0x100, 0, tpmsRaw, tpmsRaw.length, timeout);
Any thoughts on what I could be doing wrong are greatly appreciated. I'm happy to try a few different things to debug further if you have any suggestions.
Thanks!
Here's the solution based on the help from Chris. I converted the calls to queue / requestWait:
ByteBuffer buf = ByteBuffer.allocate(4);
buf.put(0, (byte) (0x20 + sensorId));
UsbRequest send = new UsbRequest();
send.initialize(connection, endpoint_out);
Boolean sent = send.queue(buf, 1);
UsbRequest r1 = connection.requestWait();
send.initialize(connection, endpoint_in);
send.queue(buf, 4);
UsbRequest r2 = connection.requestWait();
The other thing I needed to tweak was this call, setting the second parameter to true:
connection.claimInterface(intf, true);
That's it. Done. Thanks for the help!
Is there a way to create a video from a series of images on Android? Maybe a way to extend MediaRecorder so it can take images as input.
I want to actually create the video and store it (as an MPEG-4 file, for instance).
Thanks for any suggestions.
I'm also trying to do the same thing. I have been advised to use Libav.
http://libav.org/
However I need to build it with the NDK and I currently have some issues doing it.
I'm looking for some doc about it. I'll keep you posted.
I've created a post about it: Libav build for Android
You can use AnimationDrawable in an ImageView.
Add frames using the AnimationDrawable.addFrame(Drawable frame, int duration) method, and start the animation using AnimationDrawable.start().
Not sure if that's ideal, but it would work.
I use Android + NDK
AVFrame* OpenImage(const char* imageFileName)
{
AVFormatContext *pFormatCtx = avformat_alloc_context();
std::cout<<"1"<<imageFileName<<std::endl;
if( avformat_open_input(&pFormatCtx, imageFileName, NULL, NULL) < 0)
{
printf("Can't open image file '%s'\n", imageFileName);
return NULL;
}
std::cout<<"2"<<std::endl;
av_dump_format(pFormatCtx, 0, imageFileName, false);
AVCodecContext *pCodecCtx;
std::cout<<"3"<<std::endl;
pCodecCtx = pFormatCtx->streams[0]->codec;
pCodecCtx->width = W_VIDEO;
pCodecCtx->height = H_VIDEO;
//pCodecCtx->pix_fmt = AV_PIX_FMT_YUV420P;
// Find the decoder for the video stream
AVCodec *pCodec = avcodec_find_decoder(pCodecCtx->codec_id);
if (!pCodec)
{
printf("Codec not found\n");
return NULL;
}
// Open codec
//if(avcodec_open2(pCodecCtx, pCodec)<0)
if(avcodec_open2(pCodecCtx, pCodec,NULL)<0)//check this NULL, it should be of AVDictionary **options
{
printf("Could not open codec\n");
return NULL;
}
std::cout<<"4"<<std::endl;
//
AVFrame *pFrame;
pFrame = av_frame_alloc();
if (!pFrame)
{
printf("Can't allocate memory for AVFrame\n");
return NULL;
}
printf("here");
int frameFinished;
int numBytes;
// Determine required buffer size and allocate buffer
numBytes = avpicture_get_size( pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height);
uint8_t *buffer = (uint8_t *) av_malloc(numBytes * sizeof(uint8_t));
avpicture_fill((AVPicture *) pFrame, buffer, pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height);
// Read frame
AVPacket packet;
int framesNumber = 0;
while (av_read_frame(pFormatCtx, &packet) >= 0)
{
if(packet.stream_index != 0)
continue;
int ret = avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
if (ret > 0)
{
printf("Frame is decoded, size %d\n", ret);
pFrame->quality = 4;
return pFrame;
}
else
printf("Error [%d] while decoding frame: %s\n", ret, strerror(AVERROR(ret)));
}
return NULL; // no frame was decoded from the image file
}
int combine_images_to_video(const char * infile_dir, const char * infile_prefix, const char* infile_surname, int total_frames,const char *outfile)
{
if (total_frames <= 0){
std::cout << "Usage: cv2ff <dir_name> <prefix> <image surname> <total frames> <outfile>" << std::endl;
std::cout << "Please check that the 4th argument is integer value of total frames"<<std::endl;
return 1;
}
printf("max %d frames\n",total_frames);
char *imageFileName;
char numberChar[NUMNUMBER];
// initialize FFmpeg library
av_register_all();
// av_log_set_level(AV_LOG_DEBUG);
int ret;
const int dst_width = W_VIDEO;
const int dst_height = H_VIDEO;
const AVRational dst_fps = {30, 1};//{fps,1}
// open output format context
AVFormatContext* outctx = nullptr;
ret = avformat_alloc_output_context2(&outctx, nullptr, nullptr, outfile);
//outctx->video_codec->
if (ret < 0) {
std::cerr << "fail to avformat_alloc_output_context2(" << outfile << "): ret=" << ret;
return 2;
}
// open output IO context
ret = avio_open2(&outctx->pb, outfile, AVIO_FLAG_WRITE, nullptr, nullptr);
if (ret < 0) {
std::cerr << "fail to avio_open2: ret=" << ret;
return 2;
}
// create new video stream
AVCodec* vcodec = avcodec_find_encoder(outctx->oformat->video_codec);
AVStream* vstrm = avformat_new_stream(outctx, vcodec);
if (!vstrm) {
std::cerr << "fail to avformat_new_stream";
return 2;
}
avcodec_get_context_defaults3(vstrm->codec, vcodec);
vstrm->codec->width = dst_width;
vstrm->codec->height = dst_height;
vstrm->codec->pix_fmt = vcodec->pix_fmts[0];
vstrm->codec->time_base = vstrm->time_base = av_inv_q(dst_fps);
vstrm->r_frame_rate = vstrm->avg_frame_rate = dst_fps;
if (outctx->oformat->flags & AVFMT_GLOBALHEADER)
vstrm->codec->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
// open video encoder
ret = avcodec_open2(vstrm->codec, vcodec, nullptr);
if (ret < 0) {
std::cerr << "fail to avcodec_open2: ret=" << ret;
return 2;
}
std::cout
<< "outfile: " << outfile << "\n"
<< "format: " << outctx->oformat->name << "\n"
<< "vcodec: " << vcodec->name << "\n"
<< "size: " << dst_width << 'x' << dst_height << "\n"
<< "fps: " << av_q2d(dst_fps) << "\n"
<< "pixfmt: " << av_get_pix_fmt_name(vstrm->codec->pix_fmt) << "\n"
<< std::flush;
// initialize sample scaler
SwsContext* swsctx = sws_getCachedContext(
nullptr, dst_width, dst_height, AV_PIX_FMT_BGR24,
dst_width, dst_height, vstrm->codec->pix_fmt, SWS_BICUBIC, nullptr, nullptr, nullptr);
if (!swsctx) {
std::cerr << "fail to sws_getCachedContext";
return 2;
}
// allocate frame buffer for encoding
AVFrame* frame = av_frame_alloc();
std::vector<uint8_t> framebuf(avpicture_get_size(vstrm->codec->pix_fmt, dst_width, dst_height));
avpicture_fill(reinterpret_cast<AVPicture*>(frame), framebuf.data(), vstrm->codec->pix_fmt, dst_width, dst_height);
frame->width = dst_width;
frame->height = dst_height;
frame->format = static_cast<int>(vstrm->codec->pix_fmt);
// encoding loop
avformat_write_header(outctx, nullptr);
int64_t frame_pts = 0;
unsigned nb_frames = 0;
bool end_of_stream = false;
int got_pkt = 0;
int i =0;
imageFileName = (char *)malloc(strlen(infile_dir)+strlen(infile_prefix)+NUMNUMBER+strlen(infile_surname)+1);
do{
if(!end_of_stream){
strcpy(imageFileName,infile_dir);
//strcat(imageFileName,"/");
strcat(imageFileName,infile_prefix);
sprintf(numberChar,"%03d",i+1);
strcat(imageFileName,numberChar);
//strcat(imageFileName,".");
strcat(imageFileName,infile_surname);
__android_log_print(1, "RecordingImage", "%s", imageFileName);
std::cout<<imageFileName<<std::endl;
AVFrame* frame_from_file = OpenImage(imageFileName);
if(!frame_from_file){
std::cout<<"error OpenImage"<<std::endl;
return 5;
}
//swsctx was created with AV_PIX_FMT_BGR24 as the source format, so the decoded frame must match that layout
sws_scale(swsctx, frame_from_file->data, frame_from_file->linesize, 0, frame_from_file->height, frame->data, frame->linesize);
frame->pts = frame_pts++;
av_frame_free(&frame_from_file);
}
// encode video frame
AVPacket pkt;
pkt.data = nullptr;
pkt.size = 0;
av_init_packet(&pkt);
ret = avcodec_encode_video2(vstrm->codec, &pkt, end_of_stream ? nullptr : frame, &got_pkt);
if (ret < 0) {
std::cerr << "fail to avcodec_encode_video2: ret=" << ret << "\n";
return 2;
}
// rescale packet timestamp
pkt.duration = 1;
av_packet_rescale_ts(&pkt, vstrm->codec->time_base, vstrm->time_base);
// write packet
av_write_frame(outctx, &pkt);
std::cout << nb_frames << '\r' << std::flush; // dump progress
++nb_frames;
av_free_packet(&pkt);
i++;
if(i==total_frames-1)
end_of_stream = true;
} while (i<total_frames);
av_write_trailer(outctx);
std::cout << nb_frames << " frames encoded" << std::endl;
av_frame_free(&frame);
avcodec_close(vstrm->codec);
avio_close(outctx->pb);
avformat_free_context(outctx);
free(imageFileName);
return 0;
}
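For completeness, a hedged usage sketch of the function above (the paths, prefix, and frame count are illustrative; note that infile_dir needs a trailing slash because the "/" concatenation is commented out in the loop):
// Hedged usage sketch: expects /sdcard/frames/img001.jpg ... img120.jpg to exist.
#include <android/log.h>
void encode_example()
{
    int rc = combine_images_to_video("/sdcard/frames/", "img", ".jpg", 120,
                                     "/sdcard/out.mp4");
    if (rc != 0) {
        __android_log_print(ANDROID_LOG_ERROR, "RecordingImage",
                            "encoding failed with code %d", rc);
    }
}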
We can create a video from images using FFmpeg.
Check out my post for using FFmpeg in Android.
Use the command below to create a video from images placed in the same folder:
String command[]={"-y", "-r","1/5" ,"-i",src.getAbsolutePath(),
"-c:v","libx264","-vf", "fps=25","-pix_fmt","yuv420p", dest.getAbsolutePath()};
Here, src.getAbsolutePath() is the absolute path pattern of your input images.
For example, if all your images are stored in an Images folder inside the Pictures directory with names extract_picture001.jpg, extract_picture002.jpg, extract_picture003.jpg, ... then:
String filePrefix = "extract_picture";
String fileExtn = ".jpg";
File picDir = Environment.getExternalStoragePublicDirectory(
Environment.DIRECTORY_PICTURES);
File dir = new File(picDir, "Images");
File src = new File(dir, filePrefix + "%03d" + fileExtn);
To create a video from images placed in different folders, you have to create a text file, add the image paths to it, and then specify the path of that text file as the input option.
Example:
Text File
file '/storage/emulated/0/DCIM/Camera/P_20170807_143916.jpg'
duration 2
file '/storage/emulated/0/DCIM/Pic/P_20170305_142948.jpg'
duration 5
file '/storage/emulated/0/DCIM/Camera/P_20170305_142939.jpg'
duration 6
file '/storage/emulated/0/DCIM/Pic/P_20170305_142818.jpg'
duration 2
Command
String command[] = {"-y", "-f", "concat", "-safe", "0", "-i", textFile.getAbsolutePath(), "-vsync", "vfr", "-pix_fmt", "yuv420p", dest.getAbsolutePath()};
where textFile.getAbsolutePath() is the absolute path of your text file
Check out this ffmpeg doc for more info