I'm using the Sony Camera API to take pictures remotely, but when I rotate the camera the streaming (live view) image stays vertical. The API version is 2.40. I checked the API Reference but found nothing about the orientation of the live view.
Look at the getEvent API (polling or callback); you will see a liveviewOrientation value in the result:
{
"type":"liveviewOrientation",
"liveviewOrientation":"90"
}
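For example, a minimal polling sketch in Java that calls getEvent and pulls this value out of the result (the http://<camera-ip>:8080/sony/camera endpoint, the "1.0" version string and the org.json parsing are assumptions; check what your camera actually exposes):

import org.json.JSONArray;
import org.json.JSONObject;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class LiveviewOrientationPoller {

    // Sketch: POST a getEvent call in polling mode and scan the result array
    // for the liveviewOrientation entry shown above. The endpoint is usually
    // http://<camera-ip>:8080/sony/camera, but that is an assumption here.
    public static int fetchLiveviewOrientation(String cameraEndpoint) throws Exception {
        JSONObject request = new JSONObject()
                .put("method", "getEvent")
                .put("params", new JSONArray().put(false)) // false = return immediately (polling)
                .put("id", 1)
                .put("version", "1.0");

        HttpURLConnection conn = (HttpURLConnection) new URL(cameraEndpoint).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(request.toString().getBytes("UTF-8"));
        }

        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
        }

        // The result is an array of heterogeneous entries; find the one whose
        // "type" is "liveviewOrientation", as in the JSON above.
        JSONArray result = new JSONObject(body.toString()).getJSONArray("result");
        for (int i = 0; i < result.length(); i++) {
            if (result.isNull(i)) continue;
            Object entry = result.get(i);
            if (entry instanceof JSONObject
                    && "liveviewOrientation".equals(((JSONObject) entry).optString("type"))) {
                return Integer.parseInt(((JSONObject) entry).getString("liveviewOrientation"));
            }
        }
        return 0; // orientation not reported
    }
}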
Then you have to rotate your output stream yourself, since it's device dependent.
To rotate:
Read the stream, look for the Common Header to decode a "Packet", then extract the JPEG image from that "Packet" and decode it.
Rotate the image according to liveviewOrientation, then show it on your display.
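On Android, for instance, the extracted JPEG bytes can be rotated with a Matrix before display (a sketch; packet parsing itself is omitted):

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Matrix;

public final class LiveviewFrameRotator {

    // Decode one JPEG payload taken from a liveview "Packet" and rotate it by
    // the angle reported by liveviewOrientation (0, 90, 180 or 270 degrees).
    public static Bitmap rotateFrame(byte[] jpegPayload, int liveviewOrientation) {
        Bitmap frame = BitmapFactory.decodeByteArray(jpegPayload, 0, jpegPayload.length);
        if (frame == null || liveviewOrientation == 0) {
            return frame; // nothing to decode, or nothing to rotate
        }
        Matrix matrix = new Matrix();
        matrix.postRotate(liveviewOrientation);
        return Bitmap.createBitmap(frame, 0, 0, frame.getWidth(), frame.getHeight(), matrix, true);
    }
}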
Related
In KinesisVideoClient, when creating a media source with AndroidCameraMediaSourceConfiguration, there is a camera orientation option but it doesn't work -- the streaming video gets rotated.
withCameraOrientation(-90) -- it has a default value -- I am trying to change the angle, but the rotation always comes out wrong.
I'm using the NDK to create an application that captures images through the camera and displays them. I succeeded in capturing and displaying images, but the rotation of the images is wrong. So I tried to change the rotation using the ACaptureRequest_setEntry_u8 function, but it logs this error in logcat: E/ACameraMetadata: Error: tag 917518 is not writable! I checked whether ACAMERA_SENSOR_ORIENTATION is included in ACameraMetadata, and it is.
Below is the code. I have spent two days trying to fix this. Please help me.
ACameraDevice_createCaptureRequest(mainCameraDevice, TEMPLATE_STILL_CAPTURE,
                                   &mainCaptureRequest);
uint8_t degree = 90;
ACaptureRequest_setEntry_u8(mainCaptureRequest, ACAMERA_SENSOR_ORIENTATION, 1, &degree);
And I'm using ACameraCaptureSession_setRepeatingRequest to capture sequentially.
I'm not using the NDK to take pictures, but I've done the following to correctly rotate the output image:
https://stackoverflow.com/a/51892093/10159898
The code there is given in both Kotlin and Java.
Hope it helps.
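If you are on camera2 (Java side), the linked answer essentially computes a JPEG orientation from the sensor orientation and the device rotation, roughly like this (the helper name is mine; deviceOrientation would come from an OrientationEventListener):

import android.hardware.camera2.CameraCharacteristics;
import android.view.OrientationEventListener;

public final class JpegOrientationHelper {

    // Compute the value for CaptureRequest.JPEG_ORIENTATION from the sensor
    // orientation and the current device rotation (both in degrees).
    public static int getJpegOrientation(CameraCharacteristics characteristics,
                                         int deviceOrientation) {
        if (deviceOrientation == OrientationEventListener.ORIENTATION_UNKNOWN) {
            return 0;
        }
        int sensorOrientation = characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION);
        // Round the device orientation to the nearest multiple of 90 degrees.
        deviceOrientation = (deviceOrientation + 45) / 90 * 90;
        // Front-facing cameras are mirrored, so the rotation is reversed.
        boolean facingFront = characteristics.get(CameraCharacteristics.LENS_FACING)
                == CameraCharacteristics.LENS_FACING_FRONT;
        if (facingFront) {
            deviceOrientation = -deviceOrientation;
        }
        // Desired JPEG orientation relative to the camera sensor.
        return (sensorOrientation + deviceOrientation + 360) % 360;
    }
}

You then set the result on the still-capture request with captureBuilder.set(CaptureRequest.JPEG_ORIENTATION, ...).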
I am using the camera2Basic example and I have fixed the exposure time, ISO, white balance, etc.:
mPreviewBuilder.set(CaptureRequest.CONTROL_AF_MODE,CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
//mPreviewBuilder.set(CaptureRequest.LENS_FOCUS_DISTANCE, DEFAULT_FOCUS_DISTANCE);
mPreviewBuilder.set(CaptureRequest.CONTROL_AE_MODE, CameraMetadata.CONTROL_AE_MODE_OFF);
mPreviewBuilder.set(CaptureRequest.NOISE_REDUCTION_MODE, CameraMetadata.NOISE_REDUCTION_MODE_FAST);
mPreviewBuilder.set(CaptureRequest.EDGE_MODE, CameraMetadata.EDGE_MODE_FAST);
mPreviewBuilder.set(CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER, CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER_CANCEL);
mPreviewBuilder.set(CaptureRequest.CONTROL_AWB_MODE, wbMode);
mPreviewBuilder.set(CaptureRequest.CONTROL_AWB_LOCK, wbLock);
mPreviewBuilder.set(CaptureRequest.SENSOR_SENSITIVITY, isoValue);
mPreviewBuilder.set(CaptureRequest.SENSOR_EXPOSURE_TIME, exposureValue);
I am taking several photos by rotating my phone for stitching.
While taking the photos the preview looks fine, but the captured results are totally different in terms of brightness.
When I checked the images' EXIF data, they have the same exposure time and ISO (not on Nexus).
I don't know what I am missing.
Any suggestions as to why this is happening? Thank you.
My problem was that I was not using session.setRepeatingRequest() and session.capture() properly. Please check this answer.
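For anyone hitting the same thing: the repeating preview request and the one-shot still request are built separately, so the manual values have to be applied to both. A rough sketch (field names follow the camera2Basic sample and the snippet above; exception handling omitted):

private void takeManualStill() throws CameraAccessException {
    CaptureRequest.Builder stillBuilder =
            mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
    stillBuilder.addTarget(mImageReader.getSurface());

    // Re-apply the same manual values used on mPreviewBuilder so the exposure matches.
    stillBuilder.set(CaptureRequest.CONTROL_AE_MODE, CameraMetadata.CONTROL_AE_MODE_OFF);
    stillBuilder.set(CaptureRequest.CONTROL_AWB_MODE, wbMode);
    stillBuilder.set(CaptureRequest.CONTROL_AWB_LOCK, wbLock);
    stillBuilder.set(CaptureRequest.SENSOR_SENSITIVITY, isoValue);
    stillBuilder.set(CaptureRequest.SENSOR_EXPOSURE_TIME, exposureValue);

    // Keep the preview running with its (identically configured) repeating request...
    mCaptureSession.setRepeatingRequest(mPreviewBuilder.build(), null, mBackgroundHandler);
    // ...and take the photo as a one-shot capture of the matching still request.
    mCaptureSession.capture(stillBuilder.build(), mCaptureCallback, mBackgroundHandler);
}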
I am using the GPUImage library (GPUImageVideoCamera) to compress video in my iOS app:
https://github.com/BradLarson/GPUImage/
I have worked with it on iOS and it is very fast.
I want to do the same in my Android app, but it seems that the GPUImageMovie class doesn't exist in the Android library:
https://github.com/CyberAgent/android-gpuimage/tree/master/library/src/jp/co/cyberagent/android/gpuimage
It seems that the Android library only works on images (no video).
Does anyone know if this library can do the job? If not, has someone ported the whole GPUImage library? If not, what is the best library I can use that does the job as fast as GPUImage does?
This is what GPUImageVideoCamera does in iOS (filtering live video):
To filter live video from an iOS device's camera, you can use code like the following:
GPUImageVideoCamera *videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
GPUImageFilter *customFilter = [[GPUImageFilter alloc] initWithFragmentShaderFromFile:@"CustomShader"];
GPUImageView *filteredVideoView = [[GPUImageView alloc] initWithFrame:CGRectMake(0.0, 0.0, viewWidth, viewHeight)];
// Add the view somewhere so it's visible
[videoCamera addTarget:customFilter];
[customFilter addTarget:filteredVideoView];
[videoCamera startCameraCapture];
This sets up a video source coming from the iOS device's back-facing camera, using a preset that tries to capture at 640x480. This video is captured with the interface being in portrait mode, where the landscape-left-mounted camera needs to have its video frames rotated before display. A custom filter, using code from the file CustomShader.fsh, is then set as the target for the video frames from the camera. These filtered video frames are finally displayed onscreen with the help of a UIView subclass that can present the filtered OpenGL ES texture that results from this pipeline.
The fill mode of the GPUImageView can be altered by setting its fillMode property, so that if the aspect ratio of the source video is different from that of the view, the video will either be stretched, centered with black bars, or zoomed to fill.
For blending filters and others that take in more than one image, you can create multiple outputs and add a single filter as a target for both of these outputs. The order with which the outputs are added as targets will affect the order in which the input images are blended or otherwise processed.
Also, if you wish to enable microphone audio capture for recording to a movie, you'll need to set the audioEncodingTarget of the camera to be your movie writer, as in the following:
videoCamera.audioEncodingTarget = movieWriter;
Is there a library that can do the same on Android?
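For reference, the image-only usage that the Android port does support looks roughly like this (a sketch against the CyberAgent library; filter class names vary between versions):

// Sketch of the still-image pipeline in android-gpuimage; there is no
// GPUImageMovie / GPUImageVideoCamera equivalent in this library.
GPUImage gpuImage = new GPUImage(context);
gpuImage.setImage(sourceBitmap);                // load the input bitmap
gpuImage.setFilter(new GPUImageSepiaFilter());  // any GPUImageFilter subclass
Bitmap filtered = gpuImage.getBitmapWithFilterApplied();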
I had this problem with my app (ScareApp), which uses the front-facing camera to record video. I "think" I've finally resolved the issue, so I thought I would post it here for any developers who run into the same thing...
Basically..
The Android MediaRecorder allows you to define the video and audio encoders, and according to the docs, DEFAULT can be used for each.
However, this refers to the main camera's settings, which are often of a far higher spec than the front-facing camera's.
DEFAULT on the Droid Razr, for example, selects an encoding (MPEG_4_SP) that isn't available for the front-facing camera, and this results in an empty (0 KB) file being produced (or, on some other devices, a "Camera 100 - start failed" error).
My other option was to use the CamcorderProfile.get method to look up the QUALITY_HIGH settings, but again, this by default uses the main camera.
To get around this, you can pass the ID of the front-facing camera with
CamcorderProfile.get(<CameraID>, CamcorderProfile.QUALITY_HIGH);
My current workaround is as follows:
CamcorderProfile profile = CamcorderProfile.get(FrontFacingCameraId, CamcorderProfile.QUALITY_HIGH);
if (profile != null) {
    _recorder.setAudioEncoder(profile.audioCodec);
    _recorder.setVideoEncoder(profile.videoCodec);
} else {
    // default to basic H263 and AMR_NB if the profile is not found
    _recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
    _recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H263);
}
Alternatively, you can skip setting the encoders and just use
_recorder.setProfile(profile);
But as my app allows the user to select the resolution, I need to set the encoders individually.
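The FrontFacingCameraId used above can be looked up with the (old) Camera API, roughly like this (a sketch; the helper name is mine):

// Find the ID of the front-facing camera to pass to CamcorderProfile.get().
private static int findFrontFacingCameraId() {
    Camera.CameraInfo info = new Camera.CameraInfo();
    for (int id = 0; id < Camera.getNumberOfCameras(); id++) {
        Camera.getCameraInfo(id, info);
        if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
            return id;
        }
    }
    return -1; // no front-facing camera present
}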
Hopefully this will help someone and save the time and hassle it has caused me!
Cheers,
Mark