I came across many solutions for previewing the camera in an Activity, but they all use the deprecated android.hardware.Camera API for the preview.
How can I do this using camera2? Is there any example available?
Your natural starting point would be the overview here and the set of code samples here. The Camera2Basic sample includes code that you won't need if you only want a preview (for example, the code for saving a still image). I'd still recommend starting there and then removing any code that you don't need for your particular purposes.
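If all you want is the preview, the core of Camera2Basic reduces to roughly the sketch below: open the camera and point a repeating preview request at the TextureView's surface. This is a minimal sketch, assuming the CAMERA permission is already granted and that textureView (a TextureView from your layout) already has its SurfaceTexture available; size selection and proper error handling are left out:

    import android.content.Context;
    import android.graphics.SurfaceTexture;
    import android.hardware.camera2.CameraAccessException;
    import android.hardware.camera2.CameraCaptureSession;
    import android.hardware.camera2.CameraDevice;
    import android.hardware.camera2.CameraManager;
    import android.hardware.camera2.CaptureRequest;
    import android.view.Surface;
    import android.view.TextureView;
    import java.util.Arrays;

    // Inside your Activity; assumes the CAMERA permission is already granted.
    private void startPreview(final TextureView textureView) throws CameraAccessException {
        CameraManager manager = (CameraManager) getSystemService(Context.CAMERA_SERVICE);
        String cameraId = manager.getCameraIdList()[0]; // usually the back camera

        manager.openCamera(cameraId, new CameraDevice.StateCallback() {
            @Override
            public void onOpened(CameraDevice camera) {
                try {
                    SurfaceTexture texture = textureView.getSurfaceTexture();
                    texture.setDefaultBufferSize(1920, 1080); // pick a size the camera supports
                    Surface previewSurface = new Surface(texture);

                    final CaptureRequest.Builder builder =
                            camera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
                    builder.addTarget(previewSurface);

                    camera.createCaptureSession(Arrays.asList(previewSurface),
                            new CameraCaptureSession.StateCallback() {
                                @Override
                                public void onConfigured(CameraCaptureSession session) {
                                    try {
                                        // A repeating request is all a live preview needs.
                                        session.setRepeatingRequest(builder.build(), null, null);
                                    } catch (CameraAccessException e) {
                                        e.printStackTrace();
                                    }
                                }

                                @Override
                                public void onConfigureFailed(CameraCaptureSession session) { }
                            }, null);
                } catch (CameraAccessException e) {
                    e.printStackTrace();
                }
            }

            @Override public void onDisconnected(CameraDevice camera) { camera.close(); }
            @Override public void onError(CameraDevice camera, int error) { camera.close(); }
        }, null);
    }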
I am developing an app to do some image-processing operations on a captured image. I used the camera2basic sample as a base for my project, but it sometimes takes a long time to capture and sometimes crashes, which annoys me.
Is there sample code simpler than camera2basic for using the camera2 API with OpenCV, or can I use an intent to call the Android camera app?
There are a number of projects which aim to put a robust and simpler wrapper around the Android Camera2 API, and they generally support the earlier Camera API as well.
The two most popular at the moment seem to be:
https://github.com/Fotoapparat/Fotoapparat
https://github.com/wonderkiln/CameraKit-Android
They are not perfect - take a look at the issue lists and decide for yourself.
I have used Fotoapparat, and while I did see at least one issue, I found it simpler and more robust than the Camera2Basic sample code.
If you just want an image to work with, you can use the standard Android image capture intent to get the default camera app to take a picture for you.
See "Taking Photos Simply" in the Android developer docs.
I must create an Android app that recognizes some objects from the camera (a car steering wheel, a car wheel). I tried a Haar classifier but without success, and I'm running out of time (it's a school project). So I decided to look for another way and found another method for my goal - ORB. This answer describes what I should do. My problem is that things are messed up in my head. Can you give me a step-by-step explanation of how to implement the approach from that answer:
From extracting the feature points to training the KD tree and using it for every frame from the camera.
Bonus questions:
Can you give a definition of a feature point? It's something I couldn't quite understand.
Will detection using ORB be slow? I know OpenCV can be used natively on Android; wouldn't that make things faster?
I need to create this app as soon as possible. Please help!
I am currently developing a similar application. I would recommend getting something working with a single reference image first for a couple of reasons:
It's easier to do and understand if you're just starting out, and you can change it later.
For Android applications you have limited processing capabilities, so more images = lower fps.
You should have a look at the OpenCV tutorials which are quite helpful. Once you go through the “OpenCV for Android SDK” section and understand the three tutorials you can pretty easily add in functionality that will allow you to analyse the video feed.
The basic logic path I'd recommend following when making the app is:
1. Read in the reference image.
2. Create your FeatureDetector, DescriptorExtractor and DescriptorMatcher.
3. Use the first two to detect keypoints and then describe them (don't forget to convert the image to a Mat and then to greyscale).
4. Every time you get a frame from your camera, repeat step 3 on it and then compare the keypoints in the two images (using the matcher from step 2).
5. Use the result to determine if there is a match (if there is, then draw a box around it or something).
6. Get a new frame.
Try making it work for a single object first and then add in others later. Another thing you could add is a screen at the start that lets users pick what they want to search for.
Also, ORB is reasonably fast, especially compared to SIFT and SURF. I get about 3 fps on an HTC One with a single reference image.
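To make steps 2-5 concrete, here is a rough sketch using OpenCV's Java wrappers. Note that FeatureDetector, DescriptorExtractor and DescriptorMatcher are the factory classes from the OpenCV 2.4/3.x Android bindings that these tutorials use (OpenCV 4 replaced the first two with ORB.create()); the class name and method split here are just illustrative:

    import org.opencv.core.Mat;
    import org.opencv.core.MatOfDMatch;
    import org.opencv.core.MatOfKeyPoint;
    import org.opencv.features2d.DescriptorExtractor;
    import org.opencv.features2d.DescriptorMatcher;
    import org.opencv.features2d.FeatureDetector;
    import org.opencv.imgproc.Imgproc;

    // Hypothetical helper class; create it once OpenCV has been initialised.
    public class OrbMatcher {
        private final FeatureDetector detector =
                FeatureDetector.create(FeatureDetector.ORB);
        private final DescriptorExtractor extractor =
                DescriptorExtractor.create(DescriptorExtractor.ORB);
        // Hamming distance is the right metric for ORB's binary descriptors.
        private final DescriptorMatcher matcher =
                DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);

        private final Mat refDescriptors = new Mat();

        // Steps 1-3: greyscale reference image -> keypoints -> descriptors.
        public void setReference(Mat refColor) {
            Mat grey = new Mat();
            Imgproc.cvtColor(refColor, grey, Imgproc.COLOR_RGBA2GRAY);
            MatOfKeyPoint keypoints = new MatOfKeyPoint();
            detector.detect(grey, keypoints);
            extractor.compute(grey, keypoints, refDescriptors);
        }

        // Steps 4-5: repeat on a (greyscale) camera frame and match against the reference.
        public MatOfDMatch matchFrame(Mat frameGrey) {
            MatOfKeyPoint keypoints = new MatOfKeyPoint();
            Mat descriptors = new Mat();
            detector.detect(frameGrey, keypoints);
            extractor.compute(frameGrey, keypoints, descriptors);
            MatOfDMatch matches = new MatOfDMatch();
            matcher.match(refDescriptors, descriptors, matches);
            // Filter by DMatch.distance and count the survivors to decide
            // whether the object is actually present in the frame.
            return matches;
        }
    }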
I am working on an android app that will require the camera. I know that I can use the built in camera app to take photos. However, I would like to have a more custom look (probably another UI and some extras).
Can one of you guys give me a general approach on how to achieve that? That would be awesome, thanks!
Here is a nice tutorial to accomplish it.
Camera Integration with Surface View
You can make your custom changes on the SurfaceView according to your requirements.
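The heart of such a tutorial is typically a SurfaceView along these lines, built on the older android.hardware.Camera API that tutorials of this kind use (now deprecated); a minimal sketch, with rotation handling and error reporting omitted:

    import android.content.Context;
    import android.hardware.Camera;
    import android.view.SurfaceHolder;
    import android.view.SurfaceView;
    import java.io.IOException;

    // Bare-bones camera preview; assumes the CAMERA permission is declared.
    public class CameraPreview extends SurfaceView implements SurfaceHolder.Callback {
        private Camera camera;

        public CameraPreview(Context context) {
            super(context);
            getHolder().addCallback(this);
        }

        @Override
        public void surfaceCreated(SurfaceHolder holder) {
            camera = Camera.open(); // default (usually back-facing) camera
            try {
                camera.setPreviewDisplay(holder);
                camera.startPreview();
            } catch (IOException e) {
                camera.release();
                camera = null;
            }
        }

        @Override
        public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
            // Restart the preview here if the surface changes (e.g. on rotation).
        }

        @Override
        public void surfaceDestroyed(SurfaceHolder holder) {
            if (camera != null) {
                camera.stopPreview();
                camera.release();
                camera = null;
            }
        }
    }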
This question may sound a little bit complex or ambiguous, but I'll try to make it as clear as I can. I have done lots of Googling and spent lots of time but didn't find anything relevant for windows.
I want to play two videos on a single screen. One as full screen in background and one on top of it in a small window or small width/height in the right corner. Then I want an output which consists of both videos playing together on a single screen.
So basically one video overlays another and then I want that streamed as output so the user can play that stream later.
I am not asking you to write the whole code; just tell me what to do, how to do it, or which tool or third-party SDK I have to use to make it happen.
Update:
I've tried a lot of solutions:
1. Xuggler - doesn't support Android.
2. JavaCV or JJMPEG - I wasn't able to find any tutorial that shows how to do it.
3. FFmpeg - I searched for a long time but couldn't find any tutorial that shows how to do it in code; I only found the command-line way.
So can anyone point me to an FFmpeg tutorial, or suggest any other way to do this?
I would start with JavaCV. It's quite good and flexible, and it should allow you to grab frames, composite them, and write them back to a file. Use the FFmpegFrameGrabber and FFmpegFrameRecorder classes; the composition can be done manually.
The rest of the answer depends on a few things:
do you want to read from a file/mem/url?
do you want to save to a file/mem/url?
do you need realtime processing?
do you need something more than simple picture-in-picture?
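If the answers are roughly "files in, file out, offline, and plain picture-in-picture", a JavaCV sketch could look like the following. This uses the AWT-based Java2DFrameConverter, so it runs on a desktop JVM; on Android you would composite via AndroidFrameConverter and a Bitmap instead. The file names are hypothetical:

    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;
    import org.bytedeco.javacv.FFmpegFrameGrabber;
    import org.bytedeco.javacv.FFmpegFrameRecorder;
    import org.bytedeco.javacv.Frame;
    import org.bytedeco.javacv.Java2DFrameConverter;

    public class PictureInPicture {
        public static void main(String[] args) throws Exception {
            FFmpegFrameGrabber big = new FFmpegFrameGrabber("background.mp4");
            FFmpegFrameGrabber small = new FFmpegFrameGrabber("overlay.mp4");
            big.start();
            small.start();

            FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(
                    "out.mp4", big.getImageWidth(), big.getImageHeight());
            recorder.setFrameRate(big.getFrameRate());
            recorder.start();

            Java2DFrameConverter converter = new Java2DFrameConverter();
            Frame bigFrame, smallFrame;
            // Stop as soon as either input runs out of frames.
            while ((bigFrame = big.grabImage()) != null
                    && (smallFrame = small.grabImage()) != null) {
                BufferedImage canvas = converter.convert(bigFrame);
                BufferedImage inset = converter.convert(smallFrame);
                // Draw the second video at quarter size in the top-right corner.
                Graphics2D g = canvas.createGraphics();
                int w = canvas.getWidth() / 4, h = canvas.getHeight() / 4;
                g.drawImage(inset, canvas.getWidth() - w, 0, w, h, null);
                g.dispose();
                recorder.record(converter.convert(canvas));
            }
            recorder.stop();
            big.stop();
            small.stop();
        }
    }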
You could use OpenGL to do the trick. Please note, however, that you will need two render steps: one rendering the first video into an FBO, and a second rendering the second video, sampling the FBO as TEXTURE0 and the video as an EXTERNAL_TEXTURE.
Blending, and all the other stuff you want, can be done by OpenGL.
You can check the source code here: Using SurfaceTexture in Android, and some important information here: Android OpenGL combination of SurfaceTexture (external image) and ordinary texture.
The only thing I'm not sure about is what happens when two instances of MediaPlayer are running in parallel. I guess it should not be a problem.
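For what it's worth, wiring each MediaPlayer to its own OES texture is mechanical. A sketch of the setup, assuming it runs on the GL thread with a context current; the class, method names and file paths are hypothetical:

    import android.graphics.SurfaceTexture;
    import android.media.MediaPlayer;
    import android.opengl.GLES11Ext;
    import android.opengl.GLES20;
    import android.view.Surface;
    import java.io.IOException;

    public class DualVideoRenderer {
        private SurfaceTexture textureA, textureB;
        private MediaPlayer playerA, playerB;

        // Call once on the GL thread after the context is ready.
        public void setUp() throws IOException {
            int[] tex = new int[2];
            GLES20.glGenTextures(2, tex, 0);
            textureA = createVideoTexture(tex[0]);
            textureB = createVideoTexture(tex[1]);
            playerA = createPlayer("/sdcard/background.mp4", textureA);
            playerB = createPlayer("/sdcard/overlay.mp4", textureB);
            playerA.start();
            playerB.start();
        }

        private static SurfaceTexture createVideoTexture(int texId) {
            GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, texId);
            return new SurfaceTexture(texId);
        }

        private static MediaPlayer createPlayer(String path, SurfaceTexture texture)
                throws IOException {
            Surface surface = new Surface(texture);
            MediaPlayer player = new MediaPlayer();
            player.setDataSource(path);
            player.setSurface(surface);
            player.prepare();
            surface.release(); // the player keeps its own reference
            return player;
        }

        // Per frame on the GL thread: latch the newest video frames, then render
        // textureA full-screen into the FBO and textureB as a small corner quad,
        // both sampled as GL_TEXTURE_EXTERNAL_OES as described above.
        public void drawFrame() {
            textureA.updateTexImage();
            textureB.updateTexImage();
            // ... actual draw calls omitted ...
        }
    }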
ffmpeg is a very active project, with lots of changes and releases all the time.
You should look at the Xuggler project, this provides a Java API for what you want to do, and they have tight integration with ffmpeg.
http://www.xuggle.com/xuggler/
Should you choose to go down the Runtime.exec() path, this Red5 thread should be useful:
http://www.nabble.com/java-call-ffmpeg-ts15886850.html
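If you do go down the Runtime.exec()/ProcessBuilder route, note that it presumes an ffmpeg binary on the machine (a desktop or server rather than a stock Android device). ffmpeg's overlay filter then does the picture-in-picture for you; the file names here are hypothetical:

    import java.io.IOException;

    public class FfmpegOverlay {
        public static void main(String[] args) throws IOException, InterruptedException {
            // Scale the second input to quarter size, then overlay it
            // 10px from the top-right corner of the first input.
            ProcessBuilder pb = new ProcessBuilder(
                    "ffmpeg",
                    "-i", "background.mp4",
                    "-i", "overlay.mp4",
                    "-filter_complex",
                    "[1:v]scale=iw/4:ih/4[pip];[0:v][pip]overlay=W-w-10:10",
                    "out.mp4");
            pb.inheritIO(); // show ffmpeg's progress output on the console
            Process process = pb.start();
            int exitCode = process.waitFor();
            System.out.println("ffmpeg exited with " + exitCode);
        }
    }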
From my activity I do startActivityForResult(MediaStore.ACTION_IMAGE_CAPTURE), and then I land in the built-in camera activity (in this case in the emulator).
When I now do
"solo.clickOnButton(0);"
in my test case, it does not find ANY button (null is returned for index=0).
How do I write a Solo/Robotium test case that uses the built-in camera to take a picture?
According to the Robotium documentation you cannot do this, as it spans two applications (yours and the default camera activity). See http://code.google.com/p/robotium/wiki/QuestionsAndAnswers
You will either need to write your own camera implementation within your package, or write two test applications.
Hope this helps :)
Sorry to bump this...
I've just put the camera stub I made/use on the Play Store... I thought it might be of use to you or others for testing the camera in automated tests :)
https://play.google.com/store/apps/details?id=com.hitherejoe.CameraStub&hl=en
What you are trying to achieve is definitely feasible. You are trying to do it via the system's built-in functionality, and the issue there is that the user is expected to take the picture and confirm that it is valid before the result (the image URL) is brought back to your activity. So that step is outside Robotium's control.
Another approach is to use the fact that Android offers you complete control over the camera via
android.hardware.Camera;
It is definitely a more demanding approach, but if you use the existing example from your Android installation as a guideline:
android-sdk-windows\samples\android-8\ApiDemos\src\com\example\android\apis\graphics\CameraPreview.java
it should be achievable. Do not forget to declare the permissions in your manifest, as described in the Camera SDK documentation.