I'm new here on Stack Overflow and I hope someone can help me.
I developed an application with the DJI Mobile SDK and implemented a live stream. The problem is that the resolution of the live stream is not 4K, and I need 4K. I think the drone provides the secondary stream for the live preview. Is it possible to switch the live preview from the secondary stream to the primary stream, which has 4K resolution? If so, how can I do that? Or is it possible to simply increase the resolution of the live stream / secondary stream?
Here is my current implementation:
Initialization of surface texture element for live stream preview:
SurfaceTextureListener surfaceTextureListener = new SurfaceTextureListener(getApplicationContext());
this.videoStreamPreviewTtView.setSurfaceTextureListener(surfaceTextureListener);
This is my listener:
public class SurfaceTextureListener implements TextureView.SurfaceTextureListener {
private final Context context;
public SurfaceTextureListener(Context context) {
this.context = context;
}
@Override
public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
if (DroneControl.getCodecManager() == null) {
DroneControl.setCodecManager(new DJICodecManager(this.context, surface, width, height));
DroneControl.getCodecManager().resetKeyFrame();
DroneControl.getCodecManager().enabledYuvData(true);
DroneControl.getCodecManager().setYuvDataCallback(new LiveStreamDataCallback(this.context));
}
}
@Override
public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) {
}
@Override
public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
if (DroneControl.getCodecManager() != null) {
DroneControl.getCodecManager().cleanSurface();
DroneControl.setCodecManager(null);
}
return false;
}
@Override
public void onSurfaceTextureUpdated(SurfaceTexture surface) {
}
}
And here is my callback function:
public class LiveStreamDataCallback implements Base, DJICodecManager.YuvDataCallback {
private final Context context;
private final long lastUpdate;
public LiveStreamDataCallback(Context context) {
this.context = context;
this.lastUpdate = System.currentTimeMillis();
}
@Override
public void onYuvDataReceived(MediaFormat format, final ByteBuffer yuvFrame, int dataSize, final int width, final int height) {
long differenceInMillis = System.currentTimeMillis() - this.lastUpdate;
if (differenceInMillis > SCREEN_SHOT_PERIOD && yuvFrame != null) {
final byte[] bytes = new byte[dataSize];
yuvFrame.get(bytes);
newSaveYuvDataToJPEG(bytes, width, height);
}
}
private void newSaveYuvDataToJPEG(byte[] yuvFrame, int width, int height) {
if (yuvFrame.length < width * height) {
return;
}
int length = width * height;
byte[] u = new byte[width * height / 4];
byte[] v = new byte[width * height / 4];
for (int i = 0; i < u.length; i++) {
u[i] = yuvFrame[length + i];
v[i] = yuvFrame[length + u.length + i];
}
for (int i = 0; i < u.length; i++) {
yuvFrame[length + 2 * i] = v[i];
yuvFrame[length + 2 * i + 1] = u[i];
}
screenShot(yuvFrame, width, height);
}
private void screenShot(byte[] buf, int width, int height) {
ByteArrayOutputStream bOutput = new ByteArrayOutputStream();
YuvImage yuvImage = new YuvImage(buf,
ImageFormat.NV21,
width,
height,
null);
yuvImage.compressToJpeg(new Rect(0,
0,
width,
height), 100, bOutput);
insertIntoDB(Base64.getEncoder().encodeToString(bOutput.toByteArray()));
}
private void insertIntoDB(String base64EncodedContent) {
//only a limit of images will be saved inside the DB to avoid using too much space!
DatabaseUtil.reduceTableContentToMaxContentIfNecessary(this.context, ScreenShotModel.ScreenShotEntry.TABLE_NAME, MAX_KEEP_COUNT_FOR_LIVE_STREAM_SCREEN_SHOTS);
Date now = new Date();
SimpleDateFormat dateFormat = new SimpleDateFormat(DATE_FORMAT_FOR_LOGGING, Locale.GERMANY);
SQLiteDatabase db = DroneControl.getDbWriteAccess(this.context);
//create a new map of values, where column names are the keys
ContentValues values = new ContentValues();
values.put(ScreenShotModel.ScreenShotEntry.COLUMN_NAME_DATA, base64EncodedContent);
values.put(ScreenShotModel.ScreenShotEntry.COLUMN_NAME_CREATED, dateFormat.format(now));
db.insert(ScreenShotModel.ScreenShotEntry.TABLE_NAME, null, values);
}
}
Can't be done.
1080p is the maximum; OcuSync can't go any higher than that. The bandwidth isn't high enough, and the hardware in the drone doesn't support it.
I don't know any way to do what you ask.
The only thing you can do is take a still image and download it. That will be slow, of course, but it can be used for image recognition, for example. You don't say what you are going to use it for, but since you seem to be looking at frames, that may be a (slow) solution.
From DJI web:
OcuSync
Part of the Lightbridge family, DJI’s newly developed OcuSync transmission system performs far better than Wi-Fi transmission at all transmission speeds. OcuSync also uses more effective digital compression and channel transmission technologies, allowing it to transmit HD video reliably even in environments with strong radio interference. Compared to traditional analog transmission, OcuSync can transmit video at 720p and 1080p – equivalent to a 4-10 times better quality, without a color cast, static interference, flickering or other problems associated with analog transmission. Even when using the same amount of radio transmission power, OcuSync transmits further than analog at 4.1mi (7km)
Thank you very much for your answer and your help!
Actually, I tried to capture 4K images first and transfer them. As you already guessed, I use the frames for object detection and I need the best performance I can get. Capturing and transferring images is very time consuming: I need approximately 1.5 seconds to capture and save the image, 3 more seconds to transfer it, and, the biggest surprise for me, almost 11 seconds to read the SD card and find the newest image (I tried different SD cards, up to class 10). In total the whole process for one image takes 15.5 seconds, which is way too long for my purpose…
Then I thought I could do it with the live stream. The whole process with the live stream takes 500 milliseconds, which is an acceptable value for my project. It is very sobering that this seems to be impossible…
Maybe there is another way to transfer images from the drone very fast?
I get stills from the live stream, but in another way.
I use the FPV widget (the live stream is shown on the device) and read the bitmap directly from the widget.
This way you don't have to handle so much data in Java.
I even do it from Python, and with some quirks I got it into OpenCV without any marshalling. I can read out 100 frames/second in Python, so Java should be at least that fast.
You might reconsider your way of doing it. Try to avoid repacking the image data.
It's still only 1080p, but the quality is very good, I must say.
<dji.ux.widget.FPVWidget
android:id="#+id/fpv_widget"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:layout_centerInParent="true"
custom:sourceCameraNameVisibility="true" />
public Bitmap getFrameBitmap() {
fpvWidget = findViewById(R.id.fpv_widget);
return fpvWidget.getBitmap();
}
Related
I am developing a custom Camera2 API app, and I notice that the capture format conversion differs between devices when I use the ImageReader callback.
For example, it doesn't work correctly on a Nexus 4 but looks OK on a Nexus 5X; here is the output.
I initialize the ImageReader in this form:
mImageReader = ImageReader.newInstance(320, 240, ImageFormat.YUV_420_888,2);
And my callback is a simple ImageReader callback.
mOnImageAvailableListener = new ImageReader.OnImageAvailableListener() {
@Override
public void onImageAvailable( ImageReader reader) {
try {
mBackgroundHandler.post(
new ImageController(reader.acquireNextImage())
);
}
catch(Exception e)
{
//exception
}
}
};
And in the case of the Nexus 4 I got this error:
D/qdgralloc: gralloc_lock_ycbcr: Invalid format passed: 0x32315659
When I write the raw file on both devices, I get these different images. So I understand that the Nexus 5X image uses the NV21 encoding and the Nexus 4 uses YV12.
I found a specification of the image formats and tried to query the format from the ImageReader.
There are YV12 and NV21 options, but obviously I get the YUV_420_888 format when I query it.
int test=mImageReader.getImageFormat();
So is there any way to get the camera's native format (NV21 or YV12) to distinguish between these encodings in the camera class? CameraCharacteristics maybe?
Thanks in advance.
Unai.
PS: I use OpenGL for displaying RGB images, and I use OpenCV to make the conversions to YUV_420_888.
YUV_420_888 is a wrapper that can host (among others) both NV21 and YV12 images. You must use the planes and strides to access individual colors:
Image.Plane Y = image.getPlanes()[0];
Image.Plane U = image.getPlanes()[1];
Image.Plane V = image.getPlanes()[2];
If the underlying pixels are in NV21 format (as on your Nexus 5X), the pixelStride will be 2, and
int getU(Image image, int col, int row) {
return getPixel(image.getPlanes()[1], col/2, row/2);
}
int getPixel(Image.Plane plane, int col, int row) {
return plane.getBuffer().get(col*plane.getPixelStride() + row*plane.getRowStride());
}
We take half the column and half the row because this is how the U and V (chroma) planes are stored in a 4:2:0 image.
This code is for illustration; it is very inefficient. You probably want to access pixels in bulk, using get(byte[], int, int), via a fragment shader, or via the JNI function GetDirectBufferAddress in native code. What you cannot use is the method plane.array(), because the planes are guaranteed to be direct byte buffers.
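For the bulk route, here is a rough sketch of copying the Y (luma) plane with get(byte[], int, int) while honoring the row stride. It assumes an android.media.Image delivered by an ImageReader in YUV_420_888; the variable names are mine:
int width = image.getWidth();
int height = image.getHeight();
byte[] luma = new byte[width * height];
Image.Plane yPlane = image.getPlanes()[0];
ByteBuffer yBuffer = yPlane.getBuffer();
int rowStride = yPlane.getRowStride(); // may be larger than width due to padding
for (int row = 0; row < height; row++) {
    yBuffer.position(row * rowStride);
    yBuffer.get(luma, row * width, width); // bulk copy of one row
}
The chroma planes can be read the same way with width/2 and height/2, taking their pixelStride into account.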
Here is a useful method which converts from YV12 to NV21.
public static byte[] fromYV12toNV21(@NonNull final byte[] yv12,
final int width,
final int height) {
byte[] nv21 = new byte[yv12.length];
final int size = width * height;
final int quarter = size / 4;
final int vPosition = size; // This is where V starts
final int uPosition = size + quarter; // This is where U starts
System.arraycopy(yv12, 0, nv21, 0, size); // Y is same
for (int i = 0; i < quarter; i++) {
nv21[size + i * 2] = yv12[vPosition + i]; // For NV21, V first
nv21[size + i * 2 + 1] = yv12[uPosition + i]; // For Nv21, U second
}
return nv21;
}
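As a usage sketch of my own (yv12Frame stands for whatever YV12 buffer you obtained from the camera), the converted array can be handed straight to YuvImage, which accepts NV21:
byte[] nv21 = fromYV12toNV21(yv12Frame, width, height);
YuvImage yuv = new YuvImage(nv21, ImageFormat.NV21, width, height, null);
ByteArrayOutputStream out = new ByteArrayOutputStream();
yuv.compressToJpeg(new Rect(0, 0, width, height), 90, out); // 90 = JPEG quality
byte[] jpeg = out.toByteArray();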
Does anyone know how to get a smooth vertical orientation angle (in degrees) in Android?
I already tried OrientationEventListener as shown below, but it's very noisy. I have already tried all the delay modes (NORMAL, UI, GAME and FASTEST); they all show the same result.
myOrientationEventListener = new OrientationEventListener(this, SensorManager.SENSOR_DELAY_NORMAL) {
@Override
public void onOrientationChanged(int arg0) {
orientation = arg0;
Log.i("orientation", "orientation:" + orientation);
}
};
So there are two things going on that can affect what you need.
Sensor delay. Android provides four different sensor delay modes: SENSOR_DELAY_NORMAL, SENSOR_DELAY_UI, SENSOR_DELAY_GAME, and SENSOR_DELAY_FASTEST, where SENSOR_DELAY_NORMAL has the longest interval between two data points and SENSOR_DELAY_FASTEST has the shortest. The shorter the interval, the higher the sampling rate (number of samples per second). A higher sampling rate gives you more "responsive" data but comes with greater noise, while a lower sampling rate gives you "laggier" but smoother data.
Noise filtering. With the above in mind, you need to decide which route you want to take. Does your application need fast response? If it does, you probably want to choose a higher sampling rate. Does your application need smooth data? I guess this is obviously YES given the context of the question, which means you need noise filtering. For sensor data, noise is mostly high frequency in nature (noise value oscillates very fast with time). So a low pass filter (LPF) is generally adequate.
A simple way to implement LPF is exponential smoothing. To integrate with your code:
int orientation = <init value>;
float update_rate = <value between 0 and 1>;
myOrientationEventListener = new OrientationEventListener(this, SensorManager.SENSOR_DELAY_NORMAL) {
@Override
public void onOrientationChanged(int arg0) {
orientation = (int)(orientation * (1f - update_rate) + arg0 * update_rate);
Log.i("orientation", "orientation:" + orientation);
}
};
A larger update_rate means the resulting data is less smooth, which should be intuitive: if update_rate == 1f, it falls back to your original code. Another note about update_rate is that it depends on the time interval between updates (related to the sensor delay modes). You can probably tune this value to find one that works for you, but if you want to know exactly how it works, check the alpha value definition under Electronic low-pass filters -> Discrete-time realization.
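To illustrate that dependence on the update interval, here is a small sketch of my own (not part of the listener above) that derives alpha from the measured time between samples and a chosen time constant. Note it still ignores the 359/0 wrap-around that the next answer deals with:
private static final float TIME_CONSTANT_S = 0.3f; // arbitrary; smaller reacts faster
private long lastNanos = 0;
private float smoothed = 0f;

private int smooth(int rawDegrees) {
    long now = System.nanoTime();
    if (lastNanos != 0) {
        float dt = (now - lastNanos) / 1_000_000_000f; // seconds since last sample
        float alpha = dt / (TIME_CONSTANT_S + dt);     // alpha = dt / (RC + dt)
        smoothed += alpha * (rawDegrees - smoothed);
    } else {
        smoothed = rawDegrees; // first sample, no history yet
    }
    lastNanos = now;
    return Math.round(smoothed);
}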
I had a similar problem showing an artificial horizon on my device. The low pass filter (LPF) solved this issue.
However, you need to consider that if you take the orientation angle in degrees and apply the LPF to it blindly, the result is faulty when the device is in portrait mode and turned from left to right or the other way around. The reason for this is the jump between 359 and 0 degrees. Therefore I recommend converting the degrees into radians and applying the LPF to the sin and cos values of the orientation angle.
Furthermore, I recommend using a dynamic alpha or update rate for the LPF. A static alpha might be perfect on your device but not on others.
The following class filters based on radians and uses a dynamic alpha as described above:
import static java.lang.Math.*;
public class Filter {
private static final float TIME_CONSTANT = .297f;
private static final float NANOS = 1000000000.0f;
private static final int MAX = 360;
private double alpha;
private float timestamp;
private float timestampOld;
private int count;
private int values[];
Filter() {
timestamp = System.nanoTime();
timestampOld = System.nanoTime();
values = new int[0];
}
int filter(int input) {
//there is no need to filter if we have only one
if(values.length == 0) {
values = new int[] {0, input};
return input;
}
//filter based on last element from array and input
int filtered = filter(values[1], input);
//new array based on previous result and filter
values = new int[] {values[1], filtered};
return filtered;
}
private int filter(int previous, int current) {
calculateAlpha();
//convert to radians
double radPrev = toRadians(previous);
double radCurrent = toRadians(current);
//filter based on sin & cos
double sumSin = filter(sin(radPrev), sin(radCurrent));
double sumCos = filter(cos(radPrev), cos(radCurrent));
//calculate result angle
double radRes = atan2(sumSin, sumCos);
//convert radians to degree, round it and normalize (modulo of 360)
long round = round(toDegrees(radRes));
return (int) ((MAX + round) % MAX);
}
//dynamic alpha
private void calculateAlpha() {
timestamp = System.nanoTime();
float diff = timestamp - timestampOld;
count++;
//average seconds per update since the filter was created
double dt = 1 / (count / (diff / NANOS));
alpha = dt / (TIME_CONSTANT + dt);
}
private double filter(double previous, double current) {
return (previous + alpha * (current - previous));
}
}
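Wiring it into the original listener could look roughly like this (a sketch; the field names and the GAME delay are my own choices, and the filter method may need to be made public depending on your package layout):
private final Filter orientationFilter = new Filter();

private final OrientationEventListener listener =
        new OrientationEventListener(this, SensorManager.SENSOR_DELAY_GAME) {
    @Override
    public void onOrientationChanged(int rawDegrees) {
        if (rawDegrees == OrientationEventListener.ORIENTATION_UNKNOWN) {
            return; // device is flat, no reliable angle
        }
        int smoothed = orientationFilter.filter(rawDegrees);
        Log.i("orientation", "smoothed: " + smoothed);
    }
};
// remember to call listener.enable() / listener.disable() with your activity lifecycle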
For further reading, see this discussion.
I get quite horrible performance when trying to read the pixels of the camera preview.
The image format is around 600x900.
The preview rate is quite stable 30fps on my HTC one.
As soon as I try to get the pixels of the image the framerate drops to below 5!
public void onSurfaceTextureUpdated(SurfaceTexture surfaceTexture) {
Bitmap bmp = mTextureView.getBitmap();
int width = bmp.getWidth();
int height = bmp.getHeight();
int[] pixels = new int[bmp.getHeight() * bmp.getWidth()];
bmp.getPixels(pixels, 0, width, 0, 0, width, height);
}
The performance is so slow, it's not really bearable.
Now my only 'easy' solution is to skip frames to at least keep some visual performance.
But I'd actually like to have that code perform faster.
I would appreciate any ideas and suggestions, maybe someone solved this already ?
UPDATE
getbitmap: 188.341ms
array: 122ms
getPixels: 12.330ms
recycle: 152ms
It takes 190 milliseconds just to get the bitmap! That's the problem.
I dug into this for several hours.
So, short answer: I found no way to avoid getBitmap() and increase its performance.
The function is known to be slow; I found many similar questions and no results.
However, I found another solution which is about 3 times faster and solves the problem for me.
I keep using the TextureView approach because it gives more freedom in how to display the camera preview (for example, I can display the camera live preview in a small window with my own aspect ratio without distortion).
But to work with the image data I no longer use onSurfaceTextureUpdated().
Instead I registered a camera preview callback, which gives me the pixel data I need.
So no getBitmap anymore, and a lot more speed.
Fast, new code:
myCamera.setPreviewCallback(preview);
Camera.PreviewCallback preview = new Camera.PreviewCallback()
{
public void onPreviewFrame(byte[] data, Camera camera)
{
Camera.Parameters parameters = camera.getParameters();
Camera.Size size = parameters.getPreviewSize();
Image img = new Image(size.width, size.height, "Y800");
}
};
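One refinement worth considering (my own addition, not part of the answer above): query the preview size once instead of calling camera.getParameters() on every frame, and use setPreviewCallbackWithBuffer() so the same byte[] is recycled rather than allocated per frame:
Camera.Size size = myCamera.getParameters().getPreviewSize(); // query once, not per frame
int bufferSize = size.width * size.height * 3 / 2;            // NV21 (the default) uses 12 bits per pixel
myCamera.addCallbackBuffer(new byte[bufferSize]);
myCamera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // 'data' holds the preview frame; process or copy it here
        camera.addCallbackBuffer(data); // give the buffer back for reuse
    }
});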
Slow:
private int[] surface_pixels=null;
private int surface_width=0;
private int surface_height=0;
@Override
public void onSurfaceTextureUpdated(SurfaceTexture surfaceTexture)
{
int width,height;
Bitmap bmp= mTextureView.getBitmap();
height = bmp.getHeight();
width = bmp.getWidth();
if (surface_pixels == null)
{
surface_pixels = new int[height * width];
} else
{
if ((width != surface_width) || (height != surface_height))
{
surface_pixels = null;
surface_pixels = new int[height * width];
}
}
if ((width != surface_width) || (height != surface_height))
{
surface_height = bmp.getHeight();
surface_width = bmp.getWidth();
}
bmp.getPixels(surface_pixels, 0, width, 0, 0, width, height);
bmp.recycle();
Image img = new Image(width, height, "RGB4");
}
I hope this helps some people with the same problem.
If someone finds a way to create a bitmap quickly within onSurfaceTextureUpdated, please respond with a code sample.
Please try:
public void onSurfaceTextureUpdated(SurfaceTexture surface) {
if (IsBusy) {
return;
}
IsBusy = true;
DoBigWork();
IsBusy = false;
}
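A slightly fleshed-out version of that idea, as a sketch of my own: use an atomic flag so frames are dropped while the previous one is still being processed, and push the heavy work off the UI thread (getBitmap() itself still runs in the callback):
private final AtomicBoolean isBusy = new AtomicBoolean(false);
private final ExecutorService worker = Executors.newSingleThreadExecutor();

@Override
public void onSurfaceTextureUpdated(SurfaceTexture surface) {
    if (!isBusy.compareAndSet(false, true)) {
        return; // previous frame still in flight, skip this one
    }
    final Bitmap frame = mTextureView.getBitmap();
    worker.execute(new Runnable() {
        @Override
        public void run() {
            try {
                // heavy per-frame work goes here
            } finally {
                isBusy.set(false);
            }
        }
    });
}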
I wrote a simple Android application that uses the MediaMetadataRetriever class to get frames. It works fine, except that I realized it skips frames.
The video clip I am trying to decode was shot with the phone camera. The relevant code snippets follow:
MediaMetadataRetriever mediaDataRet = new MediaMetadataRetriever();
mediaDataRet.setDataSource(path);
String lengthMsStr = mediaDataRet
.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
final long lenMs = Long.parseLong(lengthMsStr);
String widthStr = mediaDataRet
.extractMetadata(MediaMetadataRetriever.METADATA_KEY_VIDEO_WIDTH);
int width = Integer.parseInt(widthStr);
String heightStr = mediaDataRet
.extractMetadata(MediaMetadataRetriever.METADATA_KEY_VIDEO_HEIGHT);
int height = Integer.parseInt(heightStr);
Note the variable lenMs; it holds the clip duration in milliseconds. Then for every frame I do:
int pace = 30; // frame spacing in ms (~30 fps)
for (long i = 0; i < lenMs; i += pace) {
if (is_abort())
return;
Bitmap bitmap = mediaDataRet.getFrameAtTime(i * 1000); // I tried the other version of this method with OPTION_CLOSEST, with no luck.
if (bc == null)
bc = bitmap.getConfig();
bitmap.getPixels(pixBuffer, 0, width, 0, 0, width, height);
[...]
}
After checking visually I noticed that some frames are skipped (like short sequences). Why? And how do I avoid this?
Use:
mediaDataRet.getFrameAtTime(i * 1000, MediaMetadataRetriever.OPTION_CLOSEST);
The getFrameAtTime(n) uses OPTION_CLOSEST_SYNC which would give you key frames only.
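Applied to the loop from the question, that becomes (same code, only the retrieval option changed):
for (long i = 0; i < lenMs; i += pace) {
    if (is_abort())
        return;
    // OPTION_CLOSEST decodes the frame nearest to the timestamp,
    // not just the nearest key frame
    Bitmap bitmap = mediaDataRet.getFrameAtTime(i * 1000,
            MediaMetadataRetriever.OPTION_CLOSEST);
    bitmap.getPixels(pixBuffer, 0, width, 0, 0, width, height);
    // ...
}
Be aware that OPTION_CLOSEST has to decode forward from the previous sync frame, so it is noticeably slower than OPTION_CLOSEST_SYNC.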
I have a small question. If I want to make a man run in Android, one way of doing this is to get images of the man in different positions and display them one after another. But often this does not work very well, and it looks as if two different images are being drawn. Is there any other way to implement custom animation? (Like creating a custom image and telling one of its parts to move.)
The way I do it is to use sprite sheets, for example (not my graphics!):
You can then use a class like this to handle your animation:
public class AnimSpriteClass {
private Bitmap mAnimation;
private int mXPos;
private int mYPos;
private Rect mSRectangle;
private int mFPS;
private int mNoOfFrames;
private int mCurrentFrame;
private long mFrameTimer;
private int mSpriteHeight;
private int mSpriteWidth;
public AnimSpriteClass() {
mSRectangle = new Rect(0,0,0,0);
mFrameTimer =0;
mCurrentFrame =0;
mXPos = 80;
mYPos = 200;
}
public void Initalise(Bitmap theBitmap, int Height, int Width, int theFPS, int theFrameCount) {
mAnimation = theBitmap;
mSpriteHeight = Height;
mSpriteWidth = Width;
mSRectangle.top = 0;
mSRectangle.bottom = mSpriteHeight;
mSRectangle.left = 0;
mSRectangle.right = mSpriteWidth;
mFPS = 1000 /theFPS;
mNoOfFrames = theFrameCount;
}
public void Update(long GameTime) {
if(GameTime > mFrameTimer + mFPS ) {
mFrameTimer = GameTime;
mCurrentFrame +=1;
if(mCurrentFrame >= mNoOfFrames) {
mCurrentFrame = 0;
}
}
mSRectangle.left = mCurrentFrame * mSpriteWidth;
mSRectangle.right = mSRectangle.left + mSpriteWidth;
}
public void draw(Canvas canvas) {
Rect dest = new Rect(getXPos(), getYPos(), getXPos() + mSpriteWidth,
getYPos() + mSpriteHeight);
canvas.drawBitmap(mAnimation, mSRectangle, dest, null);
}
public int getXPos() {
return mXPos;
}
public int getYPos() {
return mYPos;
}
}
mAnimation - This will hold the actual bitmap containing the animation.
mXPos/mYPos - These hold the X and Y screen coordinates for where we want the sprite to be on the screen. These refer to the top left hand corner of the image.
mSRectangle - This is the source rectangle variable and controls which part of the image we are rendering for each frame.
mFPS - This is the number of frames we wish to show per second. 15-20 FPS is enough to fool the human eye into thinking that a still image is moving. However, on a mobile platform it's unlikely you will have enough memory for that, so 3-10 FPS is fine for most needs.
mNoOfFrames -This is simply the number of frames in the sprite sheet we are animating.
mCurrentFrame - We need to keep track of the current frame we are rendering so we can move to the next one in order.
mFrameTimer - This controls how long between frames.
mSpriteHeight/mSpriteWidth -These contain the height and width of an Individual Frame not the entire bitmap and are used to calculate the size of the source rectangle.
Now in order to use this class you have to add a few things to your graphics thread. First declare a new variable of your class and then it can be initialised in the constructor as below.
Animation = new AnimSpriteClass();
Animation.Initalise(BitmapFactory.decodeResource(res, R.drawable.stick_man), 62, 39, 20, 20);
In order to pass the bitmap you first have to use the BitmapFactory class to decode the resource. It decodes a bitmap from your resources folder so it can be passed as a variable. The rest of the values depend on your bitmap image.
In order to time the frames correctly you first need to add a game timer to the game code. You do this by adding a variable to store the time, as shown below.
private long mTimer;
We now need this timer to be updated with the correct time every frame so we need to add a line to the run function to do this.
public void run() {
while (mRun) {
Canvas c = null;
mTimer = System.currentTimeMillis(); /////This line updates timer
try {
c = mSurfaceHolder.lockCanvas(null);
synchronized (mSurfaceHolder) {
Animation.Update(mTimer);
doDraw(c);
}....
Then you just have to add Animation.draw(canvas); to your draw function and the animation will draw the current frame in the right place.
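For completeness, a minimal doDraw could look like this (a sketch; clearing to black is my own choice):
private void doDraw(Canvas canvas) {
    canvas.drawColor(Color.BLACK); // clear the previous frame
    Animation.draw(canvas);        // blit the current sprite frame
}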
When you describe "one way of doing this is to get images of the man in different positions and display them", this is indeed not only a programming technique to render animation but a general principle that applies to every form of animation: making movies, making comics, computer gaming, and so on.
Our eyes perceive about 24 images per second as continuous motion. Above 12 frames per second, your brain gets the feeling of real, fluid movement.
So, yes, this is the way: if you feel the movement is not fluid, you have to increase the frame rate. But it works.
Moving only one part of an image is not appropriate for a small sprite representing a running man. Nevertheless, keep this idea in mind for later: when you are more at ease with animation programming, you will see that it applies to bigger areas that are not entirely redrawn at every frame, in order to decrease the number of computations needed to "make a frame". Some parts of the screen are not recomputed every time; this technique is called double buffering, and you should soon be introduced to it when making games.
But for now, you should start by making your man run by quickly replacing one picture with another. If the movement is not fluid, either increase the frame rate (optimize your program) or choose images that are closer to each other.
Regards,
Stéphane