I am trying to process the live camera feed from Flutter, so I need to send the byte data to native Android for processing.
I am concatenating the 3 planes as suggested by FlutterFire; I need the data to create an InputImage for Google ML Kit.
Concatenate Plane method
static Uint8List _concatenatePlanes(List<Plane> planes) {
  final WriteBuffer allBytes = WriteBuffer();
  for (Plane plane in planes) {
    allBytes.putUint8List(plane.bytes);
  }
  return allBytes.done().buffer.asUint8List();
}
InputImage created from the image data in native Android:
public void fromByteBuffer(Map<String, Object> imageData, final MethodChannel.Result result) {
    byte[] bytes = (byte[]) imageData.get("bytes");
    int rotationCompensation = ((int) imageData.get("rotation")) % 360;
    // Create an input image
    InputImage inputImage = InputImage.fromByteArray(bytes,
            (int) imageData.get("width"),
            (int) imageData.get("height"),
            rotationCompensation,
            InputImage.IMAGE_FORMAT_NV21);
}
I am not getting any results after processing a frame from the camera stream, but if the same frame is captured, stored, and then processed by creating the InputImage from its file path, the image is processed properly.
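For comparison, the file-path variant that works is just the standard fromFilePath call; a minimal sketch (the class name, context, and Uri here are placeholders, not my actual code):

import android.content.Context;
import android.net.Uri;
import com.google.mlkit.vision.common.InputImage;
import java.io.IOException;

public class FilePathImageFactory {
    // Build the InputImage from a stored frame; this variant produces detection results for me.
    public static InputImage fromSavedFrame(Context context, Uri savedFrameUri) throws IOException {
        return InputImage.fromFilePath(context, savedFrameUri);
    }
}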
Is there anything wrong in the way I am creating the InputImage?
Any help is appreciated.
Related
I am trying to set up an RTSP server in Xamarin Android.
I have actually already set up the RTSP server, but it expects raw SPS/PPS data (byte data) to stream.
So my question is: how do I collect byte data from the camera/SurfaceView?
I tried the old approach with onPreviewFrame/PreviewCallback, which works, but those APIs are deprecated. The code below is what I did in the onPreviewFrame callback path.
In short: how do I collect raw byte data from the camera on recent Android versions? (A rough camera2/ImageReader sketch follows the callback code below.)
private void Video_source_ReceivedYUVFrame(uint timestamp_ms, int width, int height, byte[] yuv_data)
{
    byte[] raw_video_nal = h264_encoder.offerEncoder(yuv_data);
    byte[] spsPpsInfo = h264_encoder.GetRawSPSPPS();

    bool isKeyframe = true;
    List<byte[]> nal_array = new List<byte[]>();
    Boolean add_sps_pps_to_keyframe = true;

    if (add_sps_pps_to_keyframe && isKeyframe)
    {
        nal_array.Add(raw_sps);
        nal_array.Add(raw_pps);
    }
    nal_array.Add(raw_video_nal);

    rtspServer.FeedInRawSPSandPPS(raw_sps);
    rtspServer.FeedInRawNAL(timestamp_ms, nal_array);
}
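For the camera side on recent Android versions, what I think the replacement for onPreviewFrame looks like is roughly the Java sketch below: an ImageReader delivering YUV_420_888 frames whose surface is added as a target of a camera2 capture request. The class name and the callback shape are my assumptions, and I am not sure yet how to wire this into the H264 encoder / RTSP feed above.

import android.graphics.ImageFormat;
import android.media.Image;
import android.media.ImageReader;
import android.os.Handler;
import java.nio.ByteBuffer;

public class PreviewByteCollector {

    public interface FrameListener {
        void onFrameBytes(byte[] y, byte[] u, byte[] v, int width, int height);
    }

    // Creates an ImageReader; its surface must be added as a target of the camera2
    // CaptureRequest, e.g. requestBuilder.addTarget(reader.getSurface()).
    public static ImageReader createReader(int width, int height, Handler handler, FrameListener listener) {
        ImageReader reader = ImageReader.newInstance(width, height, ImageFormat.YUV_420_888, 2);
        reader.setOnImageAvailableListener(r -> {
            Image image = r.acquireLatestImage();
            if (image == null) {
                return;
            }
            // Copy each plane out as raw bytes (row/pixel strides still need handling
            // before repacking into NV21 for an encoder).
            byte[] y = planeToBytes(image.getPlanes()[0].getBuffer());
            byte[] u = planeToBytes(image.getPlanes()[1].getBuffer());
            byte[] v = planeToBytes(image.getPlanes()[2].getBuffer());
            listener.onFrameBytes(y, u, v, image.getWidth(), image.getHeight());
            image.close();
        }, handler);
        return reader;
    }

    private static byte[] planeToBytes(ByteBuffer buffer) {
        byte[] out = new byte[buffer.remaining()];
        buffer.get(out);
        return out;
    }
}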
I am trying to use a MediaCodec decoder (via the NDK API) to fetch video frames (for further processing) from a .mp4 file. Here is the sample code that sets up the decoder to render to a surface (owned by an ImageReader):
// Omitting most error handling for clarity
AMediaExtractor* ex = AMediaExtractor_new();
media_status_t err = AMediaExtractor_setDataSourceFd(ex, fd /*opened previously*/, outStart, outLen);
close(fd);

int numtracks = AMediaExtractor_getTrackCount(ex);
AMediaCodec* decoder = NULL;
for (int i = 0; i < numtracks; i++) {
    AMediaFormat* format = AMediaExtractor_getTrackFormat(ex, i);
    const char* s = AMediaFormat_toString(format);
    LOGV("track %d format: %s", i, s);
    const char* mime;
    if (!AMediaFormat_getString(format, AMEDIAFORMAT_KEY_MIME, &mime)) {
        LOGV("no mime type");
        return JNI_FALSE;
    } else if (!strncmp(mime, "video/", 6)) {
        AMediaExtractor_selectTrack(ex, i);
        decoder = AMediaCodec_createDecoderByType(mime);

        AImageReader* imageReader;
        ANativeWindow* surface;
        // This setting doesn't work
        media_status_t status = AImageReader_new(480, 360, AIMAGE_FORMAT_RGBA_8888, 1, &imageReader);
        // This setting works
        //media_status_t status = AImageReader_new(480, 360, AIMAGE_FORMAT_YUV_420_888, 1, &imageReader);
        status = AImageReader_getWindow(imageReader, &surface);

        // Configure the decoder to render to a surface
        AMediaCodec_configure(decoder, format, surface, NULL, 0);
        AMediaCodec_start(decoder);
    }
    AMediaFormat_delete(format);
}
Elsewhere, here is how I am setting up the callback for the ImageReader:
AImageReader_ImageListener* imageListener = new AImageReader_ImageListener();
imageListener->onImageAvailable = &imageCallback;
AImageReader_setImageListener(imageReader, imageListener);
And finally, here is what the callback looks like:
void imageCallback(void *context, AImageReader *reader) {
    int32_t format;
    media_status_t status = AImageReader_getFormat(reader, &format);

    AImage* image;
    status = AImageReader_acquireLatestImage(reader, &image);
    status = AImage_getFormat(image, &format);

    // further processing to follow
    ...
}
The issue that I have been facing is that if I configure the ImageReader with RGBA format, the image in the callback always comes out to be NULL:
// Always NULL for ImageReader configured with RGBA
// OK for ImageReader configured with YUV_420_888
AImage* image;
status = AImageReader_acquireLatestImage(reader, &image);
Am I using the NDK API correctly here? One thing I would like to mention is that RGBA doesn't appear in the list of decoder capabilities as fetched via the following API (not provided via the NDK; I tried it in the Java layer):
getCodecInfo().getCapabilitiesForType(…).colorFormats
Video decoders normally don't support outputting in RGB format (as you noticed in the colorFormats codec info), so that's why this won't work.
The video decoder output can be transparently converted into RGB if you use a surface texture as the output for the decoder - then the decoded video data is available within an OpenGL context. If you then just do a plain 1:1 copy/rendering of the surface texture and have the OpenGL context set up to render into an ImageReader, I would expect you to get the RGB data you need.
It is a bit roundabout (and I'm not sure how easily accessible all the APIs are in a native code context), but should be doable as far as I know.
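Roughly, the wiring looks like the Java-side sketch below. This is only a sketch: the EGL config attributes and sizes are assumptions, and the full-screen quad drawing with an external-OES fragment shader is omitted.

import android.graphics.PixelFormat;
import android.graphics.SurfaceTexture;
import android.media.ImageReader;
import android.opengl.EGL14;
import android.opengl.EGLConfig;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;
import android.opengl.EGLSurface;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;
import android.view.Surface;

// Sketch: decoder -> SurfaceTexture (external OES texture) -> GL draw -> EGL window surface
// backed by an RGBA ImageReader -> RGBA images in the reader's callback.
public class DecoderToRgbaSketch {
    public ImageReader reader;          // delivers the RGBA frames
    public SurfaceTexture surfaceTexture;
    public Surface decoderSurface;      // configure the decoder with this surface
    private EGLDisplay display;
    private EGLSurface eglSurface;

    public void setUp(int width, int height) {
        // RGBA ImageReader that will receive the rendered frames
        reader = ImageReader.newInstance(width, height, PixelFormat.RGBA_8888, 2);

        // EGL context whose window surface is the ImageReader's surface
        display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
        int[] version = new int[2];
        EGL14.eglInitialize(display, version, 0, version, 1);
        int[] cfgAttribs = {
                EGL14.EGL_RED_SIZE, 8, EGL14.EGL_GREEN_SIZE, 8,
                EGL14.EGL_BLUE_SIZE, 8, EGL14.EGL_ALPHA_SIZE, 8,
                EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
                EGL14.EGL_SURFACE_TYPE, EGL14.EGL_WINDOW_BIT,
                EGL14.EGL_NONE };
        EGLConfig[] configs = new EGLConfig[1];
        int[] numConfigs = new int[1];
        EGL14.eglChooseConfig(display, cfgAttribs, 0, configs, 0, 1, numConfigs, 0);
        int[] ctxAttribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
        EGLContext context = EGL14.eglCreateContext(display, configs[0], EGL14.EGL_NO_CONTEXT, ctxAttribs, 0);
        eglSurface = EGL14.eglCreateWindowSurface(display, configs[0], reader.getSurface(),
                new int[] { EGL14.EGL_NONE }, 0);
        EGL14.eglMakeCurrent(display, eglSurface, eglSurface, context);

        // External OES texture + SurfaceTexture; the decoder renders into this Surface
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
        surfaceTexture = new SurfaceTexture(tex[0]);
        surfaceTexture.setDefaultBufferSize(width, height);
        decoderSurface = new Surface(surfaceTexture);
    }

    // Called on the EGL thread each time the decoder releases a frame to the surface.
    public void drainFrame() {
        surfaceTexture.updateTexImage();
        // Draw a full-screen quad sampling the external OES texture here (shader omitted),
        // then publish the rendered RGBA frame to the ImageReader:
        EGL14.eglSwapBuffers(display, eglSurface);
    }
}

The decoder is then configured with decoderSurface instead of the ImageReader's window, and the RGBA frames arrive in the reader's onImageAvailable callback.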
Our application shows live video data received from the other end; we need to display live feeds at an interval of 40 ms.
The data is received in YUV format, and Android doesn't seem to have any built-in support for displaying YUV data.
Below is the code that manages the data and draws it to the screen:
// convert the data to RGB
feedProcess.decode(yuvBuffer, yuvBuffer.length, imgInfo, imgRaw, ref, webMIndex);

currentTime = new Date().getTime();
System.out.println("took " + (currentTime - lastTime) + " ms to decode the buffer");

imgQ.add(imgRaw);
In another thread I receive the data and convert it into a Bitmap:
public void run() {
    while (myThreadRun) {
        if (!imgQ.isEmpty()) {
            try {
                byte[] arry = imgQ.poll();
                Bitmap b = createImgae(arry, imgInfo[0], imgInfo[1], 1, width, height);
                myThreadSurfaceView.setBitmap(b);
                try {
                    // draw the image
                    c = myThreadSurfaceHolder.lockCanvas(null);
                    synchronized (myThreadSurfaceHolder) {
                        myThreadSurfaceView.onDraw(c);
                    }
                } finally {
                    if (c != null) {
                        myThreadSurfaceHolder.unlockCanvasAndPost(c);
                    }
                }
            } catch (NoSuchElementException ex) {
            }
        }
    }
}
This entire logic takes approximately 100 ms to refresh the screen with a new image. Are there any other approaches I can try?
The decode function's decompression takes 10-15 ms, plus the YUV-to-RGB conversion (20-30 ms); all of this is done in JNI code.
My understanding is that if the YUV data could be shown directly, we could save some time here.
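The only built-in YUV handling I have found so far is YuvImage, which still goes through a JPEG round trip, so I doubt it saves any time; the sketch below is only for comparison (it assumes NV21 data and known width/height).

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import java.io.ByteArrayOutputStream;

public final class YuvToBitmap {
    // Converts NV21 bytes to a Bitmap via an in-memory JPEG (adds its own compress/decompress cost).
    public static Bitmap nv21ToBitmap(byte[] nv21, int width, int height) {
        YuvImage yuv = new YuvImage(nv21, ImageFormat.NV21, width, height, null);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        yuv.compressToJpeg(new Rect(0, 0, width, height), 90, out);
        byte[] jpeg = out.toByteArray();
        return BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);
    }
}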
Please share your views.
I'm working on a camera app on Android. I'm currently taking my capture with the JPEG callback. I'd like to know if there's a way to get the raw data of the capture. I know there is a raw callback for the capture, but it always returns null.
So, from the JPEG callback, can I get access to the raw data (a succession of RGB pixels)?
EDIT:
So, from the JPEG callback, can I get access to the raw data (a succession of YUV pixels)?
I was successfully able to get a "raw" (YUV422) picture with Android 5.1 running on an RK3288.
3 steps to get the YUV image:
1. init the buffer
2. call addRawImageCallbackBuffer by reflection
3. get the YUV picture in the dedicated callback
Code sample
val weight = size.width * size.height * ImageFormat.getBitsPerPixel(ImageFormat.NV21) / 8
val bytes = ByteArray(weight)
val camera = android.hardware.Camera.open()

try {
    val addRawImageCallbackBuffer = camera.javaClass
            .getDeclaredMethod("addRawImageCallbackBuffer", bytes.javaClass)
    addRawImageCallbackBuffer.invoke(camera, bytes)
} catch (e: Exception) {
    Log.e("RNG", "Error", e)
}

...

camera.takePicture(null, { data, camera ->
    val file = File("/sdcard/output.jpg")
    file.createNewFile()
    val yuv = YuvImage(data, ImageFormat.NV21, size.width, size.height, null)
    yuv.compressToJpeg(Rect(0, 0, size.width, size.height), 80, file.outputStream())
}, null)
Explanation
The Camera.takePicture() method takes a callback for the raw image as its second parameter.
camera.takePicture(shutterCallback, rawCallback, jpegCallback);
This callback will return a null byte array unless you explicitly add a buffer for the raw image first.
So you're supposed to call camera.addRawImageCallbackBuffer for this purpose.
Nevertheless, that method is not accessible (it is public but not exported, so you cannot call it directly).
Fortunately, the code sample above demonstrates how to call this method via reflection.
This makes the raw callback deliver a consistent YUV picture in its byte-array parameter.
I am working on an android application in which a video is dynamically generated by compositing a sequence of animation frames. I tried to use the Android Media Recorder API for this but have not found a way to get it to accept a non-camera source as input. I have been attempting to use a FFMPEG port (based on the Rockplayer build) but am running into difficulties with missing functions since I am using it as an encoder, not a decoder.
The iPhone version of this app uses AVAssetWriter from the AVFoundation framework.
Is there an easier way to do this or am I stuck slugging it out with FFMPEG?
This may help (see the note on resolution, though):
How to encode using the FFMpeg in Android (using H263)
I'm not sure whether they did a custom build of ffmpeg or not; if so, they may be able to offer advice on porting a more feature-complete version.
-Anthony
OpenCV has a ViewBase class which takes the input from the camera as a frame and represents the frame as a bitmap; you can extend the view base class for your own use, even though installing OpenCV on Android isn't very easy.
When you extend SampleCvViewBase you will have the following function to work with; it is pretty much hard work, but the best I can think of. (A rough sketch of the subclass wiring follows the snippet below.)
@Override
protected Bitmap processFrame(VideoCapture capture) {
    capture.retrieve(picture, Highgui.CV_CAP_ANDROID_COLOR_FRAME_RGBA);
    if (Utils.matToBitmap(picture, bmp))
        return bmp;
    bmp.recycle();
    return null;
}
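If it helps, here is a rough sketch of how the subclass could be wired up, assuming the old OpenCV4Android sample code where SampleCvViewBase takes a Context; the lazy Bitmap allocation is my own assumption.

import android.content.Context;
import android.graphics.Bitmap;
import org.opencv.android.Utils;
import org.opencv.core.Mat;
import org.opencv.highgui.Highgui;
import org.opencv.highgui.VideoCapture;

public class MyCameraView extends SampleCvViewBase {
    private final Mat picture = new Mat();
    private Bitmap bmp;

    public MyCameraView(Context context) {
        super(context);
    }

    @Override
    protected Bitmap processFrame(VideoCapture capture) {
        capture.retrieve(picture, Highgui.CV_CAP_ANDROID_COLOR_FRAME_RGBA);
        if (bmp == null) {
            // Allocate once the frame size is known.
            bmp = Bitmap.createBitmap(picture.cols(), picture.rows(), Bitmap.Config.ARGB_8888);
        }
        if (Utils.matToBitmap(picture, bmp))
            return bmp;
        bmp.recycle();
        bmp = null;
        return null;
    }
}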
You can use a pure Java open-source library called JCodec (http://jcodec.org).
It contains a simple yet working H.264 encoder and MP4 muxer. The class below uses the JCodec low-level API and should be what you need (CORRECTED):
public class SequenceEncoder {
    private SeekableByteChannel ch;
    private Picture toEncode;
    private RgbToYuv420 transform;
    private H264Encoder encoder;
    private ArrayList<ByteBuffer> spsList;
    private ArrayList<ByteBuffer> ppsList;
    private CompressedTrack outTrack;
    private ByteBuffer _out;
    private int frameNo;
    private MP4Muxer muxer;

    public SequenceEncoder(File out) throws IOException {
        this.ch = NIOUtils.writableFileChannel(out);

        // Transform to convert between RGB and YUV
        transform = new RgbToYuv420(0, 0);

        // Muxer that will store the encoded frames
        muxer = new MP4Muxer(ch, Brand.MP4);

        // Add video track to muxer
        outTrack = muxer.addTrackForCompressed(TrackType.VIDEO, 25);

        // Allocate a buffer big enough to hold output frames
        _out = ByteBuffer.allocate(1920 * 1080 * 6);

        // Create an instance of encoder
        encoder = new H264Encoder();

        // Encoder extra data ( SPS, PPS ) to be stored in a special place of MP4
        spsList = new ArrayList<ByteBuffer>();
        ppsList = new ArrayList<ByteBuffer>();
    }

    public void encodeImage(BufferedImage bi) throws IOException {
        if (toEncode == null) {
            toEncode = Picture.create(bi.getWidth(), bi.getHeight(), ColorSpace.YUV420);
        }

        // Perform conversion
        for (int i = 0; i < 3; i++)
            Arrays.fill(toEncode.getData()[i], 0);
        transform.transform(AWTUtil.fromBufferedImage(bi), toEncode);

        // Encode image into H.264 frame, the result is stored in '_out' buffer
        _out.clear();
        ByteBuffer result = encoder.encodeFrame(_out, toEncode);

        // Based on the frame above form correct MP4 packet
        spsList.clear();
        ppsList.clear();
        H264Utils.encodeMOVPacket(result, spsList, ppsList);

        // Add packet to video track
        outTrack.addFrame(new MP4Packet(result, frameNo, 25, 1, frameNo, true, null, frameNo, 0));
        frameNo++;
    }

    public void finish() throws IOException {
        // Push saved SPS/PPS to a special storage in MP4
        outTrack.addSampleEntry(H264Utils.createMOVSampleEntry(spsList, ppsList));

        // Write MP4 header and finalize recording
        muxer.writeHeader();
        NIOUtils.closeQuietly(ch);
    }

    public static void main(String[] args) throws IOException {
        SequenceEncoder encoder = new SequenceEncoder(new File("video.mp4"));
        for (int i = 1; i < 100; i++) {
            BufferedImage bi = ImageIO.read(new File(String.format("folder/img%08d.png", i)));
            encoder.encodeImage(bi);
        }
        encoder.finish();
    }
}
You can get the JCodec jar from the project website.