How to detect eye blink using the Google Vision API in Android?

I'm using the Vision API for face detection, and now I want to implement eye blink detection. But so far the Vision API only detects eye blinks in a still image (photo) of a person, not live.
In addition, I am using a Tracker to keep track of the eye state over time, to detect the sequence of events that indicates a blink of the left eye:
left eye open -> left eye closed -> left eye open
The GraphicFaceTracker class is defined as below:
private class GraphicFaceTracker extends Tracker<Face> {
private GraphicOverlay mOverlay;
private FaceGraphic mFaceGraphic;
private Context context ;
GraphicFaceTracker(Context context, GraphicOverlay overlay) {
mOverlay = overlay;
this.context= context;
mFaceGraphic = new FaceGraphic(overlay);
}
private final float OPEN_THRESHOLD = 0.85f;
private final float CLOSE_THRESHOLD = 0.4f;
private int state = 0;
void blink(float value, final int eyeNo, String whichEye) {
switch (state) {
case 0:
if (value > OPEN_THRESHOLD) {
// Both eyes are initially open
state = 1;
}
break;
case 1:
if (value < CLOSE_THRESHOLD ) {
// Both eyes become closed
state = 2;
}
break;
case 2:
if (value > OPEN_THRESHOLD) {
// Both eyes are open again
Log.i("BlinkTracker", "blink occurred!");
mCameraSource.takePicture(null, new CameraSource.PictureCallback() {
@Override
public void onPictureTaken(byte[] bytes) {
Bitmap bmp = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
Log.d("BITMAP", bmp.getWidth() + "x" + bmp.getHeight());
System.out.println(bmp.getWidth() + "x" + bmp.getHeight());
}
});
state = 0;
}
break;
}
}
/**
* Start tracking the detected face instance within the face overlay.
*/
@Override
public void onNewItem(int faceId, Face item) {
mFaceGraphic.setId(faceId);
}
/**
* Update the position/characteristics of the face within the overlay.
*/
@Override
public void onUpdate(FaceDetector.Detections<Face> detectionResults, Face face) {
mOverlay.add(mFaceGraphic);
mFaceGraphic.updateFace(face);
float left = face.getIsLeftEyeOpenProbability();
float right = face.getIsRightEyeOpenProbability();
if (left == Face.UNCOMPUTED_PROBABILITY) {
// At least one of the eyes was not detected.
return;
}
blink(left,0,"left");
if(right == Face.UNCOMPUTED_PROBABILITY ){
return ;
}
}
}
I have enabled "classifications" in order to have the detector indicate whether the eyes are open or closed:
FaceDetector detector = new FaceDetector.Builder(context)
.setProminentFaceOnly(true) // optimize for single, relatively large face
.setTrackingEnabled(true) // enable face tracking
.setClassificationType(/* eyes open and smile */ FaceDetector.ALL_CLASSIFICATIONS)
.setMode(FaceDetector.FAST_MODE) // for one face this is OK
.build();
The tracker is then added as a processor for receiving face updates over time from the detector. For example, this configuration would be used to track whether the largest face in view has blinked:
Tracker<Face> tracker = new GraphicFaceTracker(this,mGraphicOverlay);
detector.setProcessor(new LargestFaceFocusingProcessor.Builder(detector, tracker).build());
But the above code detects a blink in an image of a person, and an image of a person cannot blink. How can I detect a blink from the live camera?

From the Face object you can get the probabilities below. Note that Face.UNCOMPUTED_PROBABILITY means the value could not be computed for that frame (for example, the eye was not detected), so check for it before comparing the score against a threshold (the 0.5f used here is illustrative):
float leftOpenScore = face.getIsLeftEyeOpenProbability();
if (leftOpenScore == Face.UNCOMPUTED_PROBABILITY) { /* left eye was not detected */ } else if (leftOpenScore > 0.5f) { /* left eye is likely open */ } else { /* left eye is likely closed */ }
float rightOpenScore = face.getIsRightEyeOpenProbability();
if (rightOpenScore == Face.UNCOMPUTED_PROBABILITY) { /* right eye was not detected */ } else if (rightOpenScore > 0.5f) { /* right eye is likely open */ } else { /* right eye is likely closed */ }
Now you can pass these values wherever you need them.

You can pass your detector to a CameraSource and process blink detection from the SurfaceView.
public class LivelinessScanFragment extends Fragment {
SurfaceView cameraView;
CameraSource cameraSource;
final int RequestCameraPermissionID = 1001;
FaceDetector detector;
@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
switch (requestCode) {
case RequestCameraPermissionID: {
if (grantResults[0] == PackageManager.PERMISSION_GRANTED) {
if (ActivityCompat.checkSelfPermission(getActivity(), Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED) {
return;
}
try {
cameraSource.start(cameraView.getHolder());
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
}
public LivelinessScanFragment() {
// Required empty public constructor
}
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
Bundle savedInstanceState) {
// Inflate the layout for this fragment
View rootView = inflater.inflate(R.layout.fragment_liveliness_scan, container, false);
cameraView = (SurfaceView)rootView.findViewById(R.id.surface_view);
detector = new FaceDetector.Builder(getActivity())
.setProminentFaceOnly(true) // optimize for single, relatively large face
.setTrackingEnabled(true) // enable face tracking
.setClassificationType(/* eyes open and smile */ FaceDetector.ALL_CLASSIFICATIONS)
.setMode(FaceDetector.FAST_MODE) // for one face this is OK
.build();
if (!detector.isOperational()) {
Log.w("MainActivity", "Detector Dependencies are not yet available");
} else {
cameraSource = new CameraSource.Builder(Application.getContext(), detector)
.setFacing(CameraSource.CAMERA_FACING_FRONT)
.setRequestedFps(2.0f)
.setRequestedPreviewSize(1280, 1024)
.setAutoFocusEnabled(true)
.build();
cameraView.getHolder().addCallback(new SurfaceHolder.Callback() {
@Override
public void surfaceCreated(SurfaceHolder surfaceHolder) {
try {
if (ActivityCompat.checkSelfPermission(Application.getContext(), Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED) {
ActivityCompat.requestPermissions(getActivity(),
new String[]{Manifest.permission.CAMERA}, RequestCameraPermissionID);
return;
}
cameraSource.start(cameraView.getHolder());
detector.setProcessor(
new LargestFaceFocusingProcessor(detector, new GraphicFaceTracker()));
} catch (IOException e) {
e.printStackTrace();
}
}
@Override
public void surfaceChanged(SurfaceHolder surfaceHolder, int i, int i1, int i2) {
}
@Override
public void surfaceDestroyed(SurfaceHolder surfaceHolder) {
cameraSource.stop();
}
});
}
return rootView;
}
private class GraphicFaceTracker extends Tracker<Face> {
private final float OPEN_THRESHOLD = 0.85f;
private final float CLOSE_THRESHOLD = 0.4f;
private int state = 0;
void blink(float value) {
switch (state) {
case 0:
if (value > OPEN_THRESHOLD) {
// Both eyes are initially open
state = 1;
}
break;
case 1:
if (value < CLOSE_THRESHOLD ) {
// Both eyes become closed
state = 2;
}
break;
case 2:
if (value > OPEN_THRESHOLD) {
// Both eyes are open again
Log.i("BlinkTracker", "blink occurred!");
state = 0;
}
break;
}
}
/**
* Update the position/characteristics of the face within the overlay.
*/
@Override
public void onUpdate(FaceDetector.Detections<Face> detectionResults, Face face) {
float left = face.getIsLeftEyeOpenProbability();
float right = face.getIsRightEyeOpenProbability();
if ((left == Face.UNCOMPUTED_PROBABILITY) ||
(right == Face.UNCOMPUTED_PROBABILITY)) {
// One of the eyes was not detected.
return;
}
float value = Math.min(left, right);
blink(value);
}
}
}

Here is an open-source GitHub project, an eye blink detector for Android, that detects eye blinks in real time and is implemented on top of the FaceDetector API.

I think that looks about right. If you associate the detector with a running CameraSource instance, like in this example:
https://developers.google.com/vision/android/face-tracker-tutorial
that would track the eye motion from the video camera. I also think you might change the onUpdate code a little to better decide the blink threshold:
@Override
public void onUpdate(FaceDetector.Detections<Face> detectionResults, Face face) {
mOverlay.add(mFaceGraphic);
mFaceGraphic.updateFace(face);
float left = face.getIsLeftEyeOpenProbability();
float right = face.getIsRightEyeOpenProbability();
if ((left == Face.UNCOMPUTED_PROBABILITY) ||
(right == Face.UNCOMPUTED_PROBABILITY)) {
// One of the eyes was not detected.
return;
}
float value = Math.min(left, right);
blink(value);
}


How to fit video in Live wallpaper, by center-crop and by fitting to width/height?

Background
I'm making a live wallpaper that can show a video. In the beginning I thought this was going to be very hard, so some people suggested using OpenGL solutions or other, very complex solutions (such as this one).
Anyway, for this, I've found various places talking about it, and based on this github library (which has some bugs), I finally got it to work.
The problem
While I've succeeded showing a video, I can't find the way to control how it's shown compared to the screen resolution.
Currently it always gets stretched to the screen size, so the sample video (taken from here) shows up distorted (screenshots omitted).
The reason is the different aspect ratio: 560x320 (video resolution) vs 1080x1920 (device resolution).
Note: I'm well aware of solutions for scaling videos that are available in various GitHub repositories (such as here), but I'm asking about a live wallpaper. As such, it doesn't have a View, so it's more limited in how it can do things. To be more specific, a solution can't have any kind of layout, a TextureView or a SurfaceView, or any other kind of View.
What I've tried
I tried to play with various fields and functions of the SurfaceHolder, but with no luck so far. Examples:
setVideoScalingMode - it either crashes or doesn't do anything.
changing surfaceFrame - same.
Here's the current code I've made (full project available here):
class MovieLiveWallpaperService : WallpaperService() {
override fun onCreateEngine(): WallpaperService.Engine {
return VideoLiveWallpaperEngine()
}
private enum class PlayerState {
NONE, PREPARING, READY, PLAYING
}
inner class VideoLiveWallpaperEngine : WallpaperService.Engine() {
private var mp: MediaPlayer? = null
private var playerState: PlayerState = PlayerState.NONE
override fun onSurfaceCreated(holder: SurfaceHolder) {
super.onSurfaceCreated(holder)
Log.d("AppLog", "onSurfaceCreated")
mp = MediaPlayer()
val mySurfaceHolder = MySurfaceHolder(holder)
mp!!.setDisplay(mySurfaceHolder)
mp!!.isLooping = true
mp!!.setVolume(0.0f, 0.0f)
mp!!.setOnPreparedListener { mp ->
playerState = PlayerState.READY
setPlay(true)
}
try {
//mp!!.setDataSource(this@MovieLiveWallpaperService, Uri.parse("http://techslides.com/demos/sample-videos/small.mp4"))
mp!!.setDataSource(this@MovieLiveWallpaperService, Uri.parse("android.resource://" + packageName + "/" + R.raw.small))
} catch (e: Exception) {
}
}
override fun onDestroy() {
super.onDestroy()
Log.d("AppLog", "onDestroy")
if (mp == null)
return
mp!!.stop()
mp!!.release()
playerState = PlayerState.NONE
}
private fun setPlay(play: Boolean) {
if (mp == null)
return
if (play == mp!!.isPlaying)
return
when {
!play -> {
mp!!.pause()
playerState = PlayerState.READY
}
mp!!.isPlaying -> return
playerState == PlayerState.READY -> {
Log.d("AppLog", "ready, so starting to play")
mp!!.start()
playerState = PlayerState.PLAYING
}
playerState == PlayerState.NONE -> {
Log.d("AppLog", "not ready, so preparing")
mp!!.prepareAsync()
playerState = PlayerState.PREPARING
}
}
}
override fun onVisibilityChanged(visible: Boolean) {
super.onVisibilityChanged(visible)
Log.d("AppLog", "onVisibilityChanged:" + visible + " " + playerState)
if (mp == null)
return
setPlay(visible)
}
}
class MySurfaceHolder(private val surfaceHolder: SurfaceHolder) : SurfaceHolder {
override fun addCallback(callback: SurfaceHolder.Callback) = surfaceHolder.addCallback(callback)
override fun getSurface() = surfaceHolder.surface!!
override fun getSurfaceFrame() = surfaceHolder.surfaceFrame
override fun isCreating(): Boolean = surfaceHolder.isCreating
override fun lockCanvas(): Canvas = surfaceHolder.lockCanvas()
override fun lockCanvas(dirty: Rect): Canvas = surfaceHolder.lockCanvas(dirty)
override fun removeCallback(callback: SurfaceHolder.Callback) = surfaceHolder.removeCallback(callback)
override fun setFixedSize(width: Int, height: Int) = surfaceHolder.setFixedSize(width, height)
override fun setFormat(format: Int) = surfaceHolder.setFormat(format)
override fun setKeepScreenOn(screenOn: Boolean) {}
override fun setSizeFromLayout() = surfaceHolder.setSizeFromLayout()
override fun setType(type: Int) = surfaceHolder.setType(type)
override fun unlockCanvasAndPost(canvas: Canvas) = surfaceHolder.unlockCanvasAndPost(canvas)
}
}
The questions
I'd like to know how to adjust the scaling of the content based on the options we have for ImageView, all while keeping the aspect ratio:
center-crop - fits to 100% of the container (the screen in this case), cropping on sides (top&bottom or left&right) when needed. Doesn't stretch anything. This means the content seems fine, but not all of it might be shown.
fit-center - stretch to fit width/height
center-inside - set as original size, centered, and stretch to fit width/height only if too large.
You can achieve this with a TextureView (a SurfaceView won't work either). I have found some code which will help you achieve this.
In this demo you can crop the video in three ways: center, top, and bottom.
TextureVideoView.java
public class TextureVideoView extends TextureView implements TextureView.SurfaceTextureListener {
// Indicate if logging is on
public static final boolean LOG_ON = true;
// Log tag
private static final String TAG = TextureVideoView.class.getName();
private MediaPlayer mMediaPlayer;
private float mVideoHeight;
private float mVideoWidth;
private boolean mIsDataSourceSet;
private boolean mIsViewAvailable;
private boolean mIsVideoPrepared;
private boolean mIsPlayCalled;
private ScaleType mScaleType;
private State mState;
public enum ScaleType {
CENTER_CROP, TOP, BOTTOM
}
public enum State {
UNINITIALIZED, PLAY, STOP, PAUSE, END
}
public TextureVideoView(Context context) {
super(context);
initView();
}
public TextureVideoView(Context context, AttributeSet attrs) {
super(context, attrs);
initView();
}
public TextureVideoView(Context context, AttributeSet attrs, int defStyle) {
super(context, attrs, defStyle);
initView();
}
private void initView() {
initPlayer();
setScaleType(ScaleType.CENTER_CROP);
setSurfaceTextureListener(this);
}
public void setScaleType(ScaleType scaleType) {
mScaleType = scaleType;
}
private void updateTextureViewSize() {
float viewWidth = getWidth();
float viewHeight = getHeight();
// TextureView stretches the video buffer to its own size by default; the
// scale factors below correct that stretch so the aspect ratio is preserved.
float scaleX = 1.0f;
float scaleY = 1.0f;
if (mVideoWidth > viewWidth && mVideoHeight > viewHeight) {
scaleX = mVideoWidth / viewWidth;
scaleY = mVideoHeight / viewHeight;
} else if (mVideoWidth < viewWidth && mVideoHeight < viewHeight) {
scaleY = viewWidth / mVideoWidth;
scaleX = viewHeight / mVideoHeight;
} else if (viewWidth > mVideoWidth) {
scaleY = (viewWidth / mVideoWidth) / (viewHeight / mVideoHeight);
} else if (viewHeight > mVideoHeight) {
scaleX = (viewHeight / mVideoHeight) / (viewWidth / mVideoWidth);
}
// Calculate pivot points, in our case crop from center
int pivotPointX;
int pivotPointY;
switch (mScaleType) {
case TOP:
pivotPointX = 0;
pivotPointY = 0;
break;
case BOTTOM:
pivotPointX = (int) (viewWidth);
pivotPointY = (int) (viewHeight);
break;
case CENTER_CROP:
pivotPointX = (int) (viewWidth / 2);
pivotPointY = (int) (viewHeight / 2);
break;
default:
pivotPointX = (int) (viewWidth / 2);
pivotPointY = (int) (viewHeight / 2);
break;
}
Matrix matrix = new Matrix();
matrix.setScale(scaleX, scaleY, pivotPointX, pivotPointY);
setTransform(matrix);
}
private void initPlayer() {
if (mMediaPlayer == null) {
mMediaPlayer = new MediaPlayer();
} else {
mMediaPlayer.reset();
}
mIsVideoPrepared = false;
mIsPlayCalled = false;
mState = State.UNINITIALIZED;
}
/**
* @see MediaPlayer#setDataSource(String)
*/
public void setDataSource(String path) {
initPlayer();
try {
mMediaPlayer.setDataSource(path);
mIsDataSourceSet = true;
prepare();
} catch (IOException e) {
Log.d(TAG, e.getMessage());
}
}
/**
* @see MediaPlayer#setDataSource(Context, Uri)
*/
public void setDataSource(Context context, Uri uri) {
initPlayer();
try {
mMediaPlayer.setDataSource(context, uri);
mIsDataSourceSet = true;
prepare();
} catch (IOException e) {
Log.d(TAG, e.getMessage());
}
}
/**
* @see MediaPlayer#setDataSource(java.io.FileDescriptor)
*/
public void setDataSource(AssetFileDescriptor afd) {
initPlayer();
try {
long startOffset = afd.getStartOffset();
long length = afd.getLength();
mMediaPlayer.setDataSource(afd.getFileDescriptor(), startOffset, length);
mIsDataSourceSet = true;
prepare();
} catch (IOException e) {
Log.d(TAG, e.getMessage());
}
}
private void prepare() {
try {
mMediaPlayer.setOnVideoSizeChangedListener(
new MediaPlayer.OnVideoSizeChangedListener() {
@Override
public void onVideoSizeChanged(MediaPlayer mp, int width, int height) {
mVideoWidth = width;
mVideoHeight = height;
updateTextureViewSize();
}
}
);
mMediaPlayer.setOnCompletionListener(new MediaPlayer.OnCompletionListener() {
@Override
public void onCompletion(MediaPlayer mp) {
mState = State.END;
log("Video has ended.");
if (mListener != null) {
mListener.onVideoEnd();
}
}
});
// don't forget to call MediaPlayer.prepareAsync() method when you use constructor for
// creating MediaPlayer
mMediaPlayer.prepareAsync();
// Play video when the media source is ready for playback.
mMediaPlayer.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
@Override
public void onPrepared(MediaPlayer mediaPlayer) {
mIsVideoPrepared = true;
if (mIsPlayCalled && mIsViewAvailable) {
log("Player is prepared and play() was called.");
play();
}
if (mListener != null) {
mListener.onVideoPrepared();
}
}
});
} catch (IllegalArgumentException e) {
Log.d(TAG, e.getMessage());
} catch (SecurityException e) {
Log.d(TAG, e.getMessage());
} catch (IllegalStateException e) {
Log.d(TAG, e.toString());
}
}
/**
* Play or resume video. Video will be played as soon as view is available and media player is
* prepared.
*
* If video is stopped or ended and play() method was called, video will start over.
*/
public void play() {
if (!mIsDataSourceSet) {
log("play() was called but data source was not set.");
return;
}
mIsPlayCalled = true;
if (!mIsVideoPrepared) {
log("play() was called but video is not prepared yet, waiting.");
return;
}
if (!mIsViewAvailable) {
log("play() was called but view is not available yet, waiting.");
return;
}
if (mState == State.PLAY) {
log("play() was called but video is already playing.");
return;
}
if (mState == State.PAUSE) {
log("play() was called but video is paused, resuming.");
mState = State.PLAY;
mMediaPlayer.start();
return;
}
if (mState == State.END || mState == State.STOP) {
log("play() was called but video already ended, starting over.");
mState = State.PLAY;
mMediaPlayer.seekTo(0);
mMediaPlayer.start();
return;
}
mState = State.PLAY;
mMediaPlayer.start();
}
/**
* Pause video. If video is already paused, stopped or ended nothing will happen.
*/
public void pause() {
if (mState == State.PAUSE) {
log("pause() was called but video already paused.");
return;
}
if (mState == State.STOP) {
log("pause() was called but video already stopped.");
return;
}
if (mState == State.END) {
log("pause() was called but video already ended.");
return;
}
mState = State.PAUSE;
if (mMediaPlayer.isPlaying()) {
mMediaPlayer.pause();
}
}
/**
* Stop video (pause and seek to beginning). If video is already stopped or ended nothing will
* happen.
*/
public void stop() {
if (mState == State.STOP) {
log("stop() was called but video already stopped.");
return;
}
if (mState == State.END) {
log("stop() was called but video already ended.");
return;
}
mState = State.STOP;
if (mMediaPlayer.isPlaying()) {
mMediaPlayer.pause();
mMediaPlayer.seekTo(0);
}
}
/**
* @see MediaPlayer#setLooping(boolean)
*/
public void setLooping(boolean looping) {
mMediaPlayer.setLooping(looping);
}
/**
* @see MediaPlayer#seekTo(int)
*/
public void seekTo(int milliseconds) {
mMediaPlayer.seekTo(milliseconds);
}
/**
* @see MediaPlayer#getDuration()
*/
public int getDuration() {
return mMediaPlayer.getDuration();
}
static void log(String message) {
if (LOG_ON) {
Log.d(TAG, message);
}
}
private MediaPlayerListener mListener;
/**
* Listener that triggers the 'onVideoPrepared' and 'onVideoEnd' events
*/
public void setListener(MediaPlayerListener listener) {
mListener = listener;
}
public interface MediaPlayerListener {
public void onVideoPrepared();
public void onVideoEnd();
}
@Override
public void onSurfaceTextureAvailable(SurfaceTexture surfaceTexture, int width, int height) {
Surface surface = new Surface(surfaceTexture);
mMediaPlayer.setSurface(surface);
mIsViewAvailable = true;
if (mIsDataSourceSet && mIsPlayCalled && mIsVideoPrepared) {
log("View is available and play() was called.");
play();
}
}
@Override
public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) {
}
@Override
public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
return false;
}
@Override
public void onSurfaceTextureUpdated(SurfaceTexture surface) {
}
}
After that, use this class as in the code below in MainActivity.java:
public class MainActivity extends AppCompatActivity implements View.OnClickListener,
ActionBar.OnNavigationListener {
// Video file url
private static final String FILE_URL = "http://techslides.com/demos/sample-videos/small.mp4";
private TextureVideoView mTextureVideoView;
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
initView();
initActionBar();
if (!isWIFIOn(getBaseContext())) {
Toast.makeText(getBaseContext(), "You need internet connection to stream video",
Toast.LENGTH_LONG).show();
}
}
private void initActionBar() {
ActionBar actionBar = getSupportActionBar();
actionBar.setNavigationMode(ActionBar.NAVIGATION_MODE_LIST);
actionBar.setDisplayShowTitleEnabled(false);
SpinnerAdapter mSpinnerAdapter = ArrayAdapter.createFromResource(this, R.array.action_list,
android.R.layout.simple_spinner_dropdown_item);
actionBar.setListNavigationCallbacks(mSpinnerAdapter, this);
}
private void initView() {
mTextureVideoView = (TextureVideoView) findViewById(R.id.cropTextureView);
findViewById(R.id.btnPlay).setOnClickListener(this);
findViewById(R.id.btnPause).setOnClickListener(this);
findViewById(R.id.btnStop).setOnClickListener(this);
}
@Override
public void onClick(View v) {
switch (v.getId()) {
case R.id.btnPlay:
mTextureVideoView.play();
break;
case R.id.btnPause:
mTextureVideoView.pause();
break;
case R.id.btnStop:
mTextureVideoView.stop();
break;
}
}
final int indexCropCenter = 0;
final int indexCropTop = 1;
final int indexCropBottom = 2;
@Override
public boolean onNavigationItemSelected(int itemPosition, long itemId) {
switch (itemPosition) {
case indexCropCenter:
mTextureVideoView.stop();
mTextureVideoView.setScaleType(TextureVideoView.ScaleType.CENTER_CROP);
mTextureVideoView.setDataSource(FILE_URL);
mTextureVideoView.play();
break;
case indexCropTop:
mTextureVideoView.stop();
mTextureVideoView.setScaleType(TextureVideoView.ScaleType.TOP);
mTextureVideoView.setDataSource(FILE_URL);
mTextureVideoView.play();
break;
case indexCropBottom:
mTextureVideoView.stop();
mTextureVideoView.setScaleType(TextureVideoView.ScaleType.BOTTOM);
mTextureVideoView.setDataSource(FILE_URL);
mTextureVideoView.play();
break;
}
return true;
}
public static boolean isWIFIOn(Context context) {
ConnectivityManager connMgr =
(ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
NetworkInfo networkInfo = connMgr.getNetworkInfo(ConnectivityManager.TYPE_WIFI);
return (networkInfo != null && networkInfo.isConnected());
}
}
and the layout activity_main.xml file for that is below:
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="fill_parent"
android:layout_height="fill_parent">
<com.example.videocropdemo.crop.TextureVideoView
android:id="#+id/cropTextureView"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:layout_centerInParent="true" />
<LinearLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_alignParentBottom="true"
android:layout_margin="16dp"
android:orientation="horizontal">
<Button
android:id="#+id/btnPlay"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Play" />
<Button
android:id="#+id/btnPause"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Pause" />
<Button
android:id="#+id/btnStop"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Stop" />
</LinearLayout>
</RelativeLayout>
The output of the code for center crop looks like this (screenshot omitted).
So I wasn't yet able to get all the scale types you've asked for, but I've been able to get fit-xy and center-crop working fairly easily using ExoPlayer. The full code can be seen at https://github.com/yperess/StackOverflow/tree/50091878 and I'll update it as I get more. Eventually I'll also fill in the MainActivity to let you choose the scaling type in the settings (I'll do this with a simple PreferenceActivity) and read the shared preferences value on the service side.
The overall idea is that deep down, MediaCodec already implements both fit-xy and center-crop, which are really the only 2 modes you would need if you had access to a view hierarchy. This is because fit-center, fit-top, and fit-bottom would all really just be fit-xy where the surface has a gravity and is scaled to match the video size * minimum scaling. To get these working, what I believe needs to happen is that we create an OpenGL context and provide a SurfaceTexture. This SurfaceTexture can be wrapped with a stub Surface which can be passed to ExoPlayer. Once the video is loaded we can set the sizes, since we created them. We also have a callback on the SurfaceTexture to let us know when a frame is ready. At this point we should be able to modify the frame (hopefully just using a simple matrix scale and transform).
The key components here are creating the exo player:
private fun initExoMediaPlayer(): SimpleExoPlayer {
val videoTrackSelectionFactory = AdaptiveTrackSelection.Factory(bandwidthMeter)
val trackSelector = DefaultTrackSelector(videoTrackSelectionFactory)
val player = ExoPlayerFactory.newSimpleInstance(this@MovieLiveWallpaperService,
trackSelector)
player.playWhenReady = true
player.repeatMode = Player.REPEAT_MODE_ONE
player.volume = 0f
if (mode == Mode.CENTER_CROP) {
player.videoScalingMode = C.VIDEO_SCALING_MODE_SCALE_TO_FIT_WITH_CROPPING
} else {
player.videoScalingMode = C.VIDEO_SCALING_MODE_SCALE_TO_FIT
}
if (mode == Mode.FIT_CENTER) {
player.addVideoListener(this)
}
return player
}
Then loading the video:
override fun onSurfaceCreated(holder: SurfaceHolder) {
super.onSurfaceCreated(holder)
if (mode == Mode.FIT_CENTER) {
// We need to somehow wrap the surface or set some scale factor on exo player here.
// Most likely this will require creating a SurfaceTexture and attaching it to an
// OpenGL context. Then for each frame, writing it to the original surface but with
// an offset
exoMediaPlayer.setVideoSurface(holder.surface)
} else {
exoMediaPlayer.setVideoSurfaceHolder(holder)
}
val videoUri = RawResourceDataSource.buildRawResourceUri(R.raw.small)
val dataSourceFactory = DataSource.Factory { RawResourceDataSource(context) }
val mediaSourceFactory = ExtractorMediaSource.Factory(dataSourceFactory)
exoMediaPlayer.prepare(mediaSourceFactory.createMediaSource(videoUri))
}
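As a rough sketch of the wrapping mentioned in the comment above (in Java; textureId, videoWidth and videoHeight are placeholders, and the OpenGL context setup is omitted):
// Assumes textureId was created on a GL thread with glGenTextures().
SurfaceTexture surfaceTexture = new SurfaceTexture(textureId);
surfaceTexture.setDefaultBufferSize(videoWidth, videoHeight);
Surface wrappedSurface = new Surface(surfaceTexture);
exoMediaPlayer.setVideoSurface(wrappedSurface);
// The SurfaceTexture's OnFrameAvailableListener then fires per frame, and the
// GL thread can draw that frame into the real wallpaper surface with an offset.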
UPDATE:
Got it working, I'll need tomorrow to clean it up before I post the code but here's a sneak preview...
What I ended up doing is basically taking GLSurfaceView and ripping it apart. If you look at its source, the only thing that makes it impossible to use in a wallpaper is the fact that it only starts the GLThread when attached to the window. So if you replicate the same code but allow the GLThread to be started manually, you can go ahead. After that you just need to keep track of how big your screen is vs. the video, scale to the minimum scale that would fit, and shift the quad you draw onto.
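As a rough illustration of that last step, a minimal Java sketch of the fit-center math (screenWidth, screenHeight, videoWidth and videoHeight are placeholder names, not from the actual project):
// Scale by the smaller ratio so the whole video fits on screen.
float scale = Math.min((float) screenWidth / videoWidth, (float) screenHeight / videoHeight);
float scaledWidth = videoWidth * scale;
float scaledHeight = videoHeight * scale;
// These offsets center the scaled quad; the leftover area is letterboxing.
float offsetX = (screenWidth - scaledWidth) / 2f;
float offsetY = (screenHeight - scaledHeight) / 2f;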
Known issues with the code:
1. There's a small bug with the GLThread I haven't been able to fish out. It seems like there's a simple timing issue where, when the thread pauses, I get a call to signalAll() that's not actually waiting on anything.
2. I didn't bother dynamically modifying the mode in the renderer. It shouldn't be too hard. Add a preference listener when creating the Engine then update the renderer when scale_type changes.
UPDATE:
All issues have been resolved. signalAll() was throwing because I missed a check to verify that we actually hold the lock. I also added a listener to update the scale type dynamically, so now all scale types use the GlEngine.
ENJOY!
I found this article: How to set video as live wallpaper and keep video aspect ratio (width and height)
The article above has a simple source; just click the "set wallpaper" button. If you want a full-featured app, see https://github.com/AlynxZhou/alynx-live-wallpaper
The key point is to use a GLSurfaceView instead of the WallpaperService's default SurfaceView, and to make a custom GLSurfaceView renderer. A GLSurfaceView can use OpenGL to display, so the question becomes "how to use GLSurfaceView to play video", or "how to use OpenGL to play video".
Here is how to use a GLSurfaceView instead of the WallpaperService's default SurfaceView:
public class GLWallpaperService extends WallpaperService {
...
class GLWallpaperEngine extends Engine {
...
private class GLWallpaperSurfaceView extends GLSurfaceView {
@SuppressWarnings("unused")
private static final String TAG = "GLWallpaperSurface";
public GLWallpaperSurfaceView(Context context) {
super(context);
}
/**
* This is a hack. Because Android Live Wallpaper only has a Surface.
* So we create a GLSurfaceView, and when drawing to its Surface,
* we replace it with WallpaperEngine's Surface.
*/
@Override
public SurfaceHolder getHolder() {
return getSurfaceHolder();
}
void onDestroy() {
super.onDetachedFromWindow();
}
}
My solution is to use a GIF (with the same size and fps as the video) instead of a video in the live wallpaper.
See my answer: https://stackoverflow.com/a/60425717/6011193 - WallpaperService can best fit a GIF.
A video can be converted to a GIF on a computer with ffmpeg,
or in Android code: see https://stackoverflow.com/a/16749143/6011193
You can use Glide for GIF and image loading, and it gives you scaling options as you like; see the documentation at https://bumptech.github.io/glide/doc/targets.html#sizes-and-dimensions and https://futurestud.io/tutorials/glide-image-resizing-scaling.
Glide v4 requires Android Ice Cream Sandwich (API level 14) or higher.
Like:
public static void loadCircularImageGlide(String imagePath, ImageView view) {
Glide.with(view.getContext())
.asGif() // in Glide v4, asGif() must be called before load()
.load(imagePath)
.override(600, 200) // resizes the image to these dimensions (in pixels); override does not respect aspect ratio
.error(R.drawable.create_timeline_placeholder)
.fitCenter() // scaling option
.transform(new CircularTransformation(view.getContext())) // you can even apply a custom image transformation
.into(view);
}
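A hypothetical call site (the view id and GIF URL are placeholders, not from the original code):
ImageView gifView = (ImageView) findViewById(R.id.gif_view); // hypothetical id
loadCircularImageGlide("https://example.com/wallpaper.gif", gifView);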

Rotate CustomView without affecting the activity

Hi, I have a floating window which serves as the video, and it also has controls inside the video, under the floating window.
Now my question: is it possible to rotate the custom view of the floating window only, without affecting the orientation of my activity?
If someone has already tried it, please guide me through it.
Thank you.
I have never done it before. But I have an idea. You can use the below code to get the current orientation of the screen on your device.
OrientationEventListener orientationEventListener = new OrientationEventListener(context, SensorManager.SENSOR_DELAY_UI) {
@Override
public void onOrientationChanged(int rotation) {
Logger.e("Orientation: " + rotation);
}
};
And after that, depending on the value of "rotation", you can use a rotate animation in Android to rotate your custom view.
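For example, a minimal sketch of such a rotation with the built-in view animator (floatingView, the angle, and the duration are placeholders):
// Rotate only this view; the activity's orientation is untouched.
floatingView.animate()
.rotation(90f)
.setDuration(300)
.start();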
@Beginner: Here is the code where I implemented it. I used the above code in my custom view in my small camera app. It helps me know by how many degrees the bitmap should be rotated after the photo is taken. The toScreenOrientation() method below returns a value in degrees (0, 90, 180, 270); you can also modify it yourself (whatever you want).
Use the setOrientationChangedListener() method to let the parent (Activity, Fragment, etc.) receive a callback as well.
public class TakePhotoView extends ConstraintLayout {
private static final int SCREEN_ORIENTATION_0 = 0;
private static final int SCREEN_ORIENTATION_90 = 90;
private static final int SCREEN_ORIENTATION_180 = 180;
private static final int SCREEN_ORIENTATION_270 = 270;
private OnOrientationChangedListener mOnOrientationChangedListener;
private OrientationEventListener mOrientationEventListener;
private int mScreenRotation;
public TakePhotoView(Context context) {
this(context, null);
}
public TakePhotoView(Context context, AttributeSet attrs) {
super(context, attrs);
//....do something here
mOrientationEventListener = new OrientationEventListener(context, SensorManager.SENSOR_DELAY_UI) {
@Override
public void onOrientationChanged(int rotation) {
Logger.e("Orientation: " + rotation);
if (rotation == OrientationEventListener.ORIENTATION_UNKNOWN) {
mScreenRotation = DEFAULT_SCREEN_ROTATION;
return;
}
mScreenRotation = rotation;
if(mOnOrientationChangedListener != null){
mOnOrientationChangedListener.onOrientationChanged(rotation);
}
}
};
}
private void takePhoto(Camera camera) {
if (camera != null) {
camera.takePicture(null, null, mPictureCallback);
}
}
private Camera.PictureCallback mPictureCallback = new Camera.PictureCallback() {
@Override
public void onPictureTaken(byte[] data, Camera camera) {
final int screenRotation = mScreenRotation;
int rotate = toScreenOrientation(screenRotation);
// Rotate/Flip the bitmap depends on the 'rotate' value
}
};
/**
* Converts sensor rotation in degrees to screen orientation constants.
*
* @param rotation sensor rotation angle in degrees
* @return Screen orientation angle in degrees (0, 90, 180, 270).
*/
private int toScreenOrientation(int rotation) {
if (rotation > 290 || rotation <= 70) {
return SCREEN_ORIENTATION_0;
} else if (rotation > 70 && rotation <= 110) {
return SCREEN_ORIENTATION_90;
} else if (rotation > 110 && rotation <= 250) {
return SCREEN_ORIENTATION_180;
} else {
return SCREEN_ORIENTATION_270;
}
}
public void setOrientationChangedListener(OnOrientationChangedListener orientationChangedListener){
this.mOnOrientationChangedListener = orientationChangedListener;
}
////////////////////////////////////////////////////////////////////////////////////////////////
// Interfaces
////////////////////////////////////////////////////////////////////////////////////////////////
public interface OnOrientationChangedListener {
void onOrientationChanged(int rotation);
}
}
Can I use this listener even if I disable rotation in my manifest?
-> It still works fine.
Hope it helps.
Looks like you do not want your Activity to be recreated on device rotation.
If so, then add the configChanges attribute in AndroidManifest.xml:
<activity
...
android:configChanges="orientation|screenSize" >
This will stop activity recreation on rotation (note that from API 13 onwards, "screenSize" must be included alongside "orientation", or the activity is still recreated). But in your activity you can check that the device has been rotated in the onConfigurationChanged() method:
@Override
public void onConfigurationChanged(Configuration newConfig) {
super.onConfigurationChanged(newConfig);
// Checks the orientation of the screen
if (newConfig.orientation == Configuration.ORIENTATION_LANDSCAPE) {
...
} else if (newConfig.orientation == Configuration.ORIENTATION_PORTRAIT){
...
}
}
Do not forget to read the Android developer guides. ;)
I have found a way to do this, but I used an AlertDialog. The logic will be the same for all views.
AlertDialog dialog;
@Override
protected void onCreate(Bundle savedInstanceState)
{
super.onCreate(savedInstanceState);
setContentView(R.layout.stack_overflow2);
setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_PORTRAIT);
AlertDialog.Builder alert123 = new AlertDialog.Builder(StackOverflow2.this);
View current_view = getLayoutInflater().inflate(R.layout.password_alert,null);
l2 = current_view.findViewById(R.id.linearView);
// Here l2 is the LinearLayout root view of password_alert
alert123.setView(current_view);
dialog = alert123.create();
dialog.show();
OrientationEventListener orientationEventListener = new OrientationEventListener(getBaseContext(), SensorManager.SENSOR_DELAY_UI) {
@Override
public void onOrientationChanged(int rotation) {
Log.e("Orientation: " , String.valueOf(rotation));
if(rotation==270 || rotation==90)
{
if(rotation==270)
{
l2.setRotation(90);
}
else
{
l2.setRotation(270);
}
dialog.getWindow().setLayout(ViewGroup.LayoutParams.WRAP_CONTENT,ViewGroup.LayoutParams.WRAP_CONTENT);
Toast.makeText(StackOverflow2.this, "Change Dialog Rotation", Toast.LENGTH_SHORT).show();
}
else
{
if(rotation==180)
{
l2.setRotation(180);
}
else
{
l2.setRotation(0);
}
Toast.makeText(StackOverflow2.this, "Normal Display to Occur", Toast.LENGTH_SHORT).show();
}
}
};
if (orientationEventListener.canDetectOrientation()) {
Log.v("ORIE", "Can detect orientation");
orientationEventListener.enable();
} else {
Log.v("ORIE", "Cannot detect orientation");
orientationEventListener.disable();
}
}
Here are some pictures (screenshots omitted): 0 degrees, 90 degrees, 270 degrees.
As you can see, the background activity stays in portrait mode. The alert dialog's height and width are a little off, but you can size your view's dimensions differently. I hope this code solves your problem.

Why can't I use a shake listener in an onTouch event?

I have a problem when I am running my Android app.
The shake listener is not called while I am touching down.
My logic is "send the message when I am shaking and holding down the button at the same time", but that does not seem to work in my listener part.
Android Studio also can't tell me where the error is or what is coded wrongly.
Here is my button code.
shakeitBtn.setOnTouchListener(new View.OnTouchListener() {
@Override
public boolean onTouch(View v, MotionEvent event) {
int action = event.getAction();
switch (action) {
case MotionEvent.ACTION_DOWN:
Toast.makeText(getBaseContext(), "Shake!", Toast.LENGTH_SHORT).show();
mShakeDetector = new ShakeDetector(new ShakeDetector.OnShakeListener() {
public void onShake() {
Vibrator v = (Vibrator) getSystemService(Context.VIBRATOR_SERVICE);
v.vibrate(300);
status = RELEASE_TO_SEND;
}
});
break;
case MotionEvent.ACTION_UP:
if (status == RELEASE_TO_CANCEL) {
Toast.makeText(getBaseContext(), "Shake canceled", Toast.LENGTH_SHORT).show();
} else {
if (status == RELEASE_TO_SEND) {
DatabaseReference childRoot = rootRoomName.push();
Map<String, Object> map = new HashMap<String, Object>();
map.put("name", userName);
map.put("message", "I AM BUSY!!!".toString());
childRoot.updateChildren(map);
}
else{}
}
break;
case MotionEvent.ACTION_MOVE:
if (event.getY() < 0) {
status = RELEASE_TO_CANCEL;
} else {
}
break;
default:
break;
}
return true;
}
});
@Override
public void onResume() {
chat_room.super.onResume();
mSensorManager.registerListener(mShakeDetector, mAccelerometer, SensorManager.SENSOR_DELAY_UI);
}
@Override
public void onPause() {
mSensorManager.unregisterListener(mShakeDetector);
chat_room.super.onPause();
}
Here is my shake detector code:
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
public class ShakeDetector implements SensorEventListener {
// Minimum acceleration needed to count as a shake movement
private static final int MIN_SHAKE_ACCELERATION = 5;
// Minimum number of movements to register a shake
private static final int MIN_MOVEMENTS = 2;
// Maximum time (in milliseconds) for the whole shake to occur
private static final int MAX_SHAKE_DURATION = 500;
// Arrays to store gravity and linear acceleration values
private float[] mGravity = {0.0f, 0.0f, 0.0f};
private float[] mLinearAcceleration = {0.0f, 0.0f, 0.0f};
// Indexes for x, y, and z values
private static final int X = 0;
private static final int Y = 1;
private static final int Z = 2;
// OnShakeListener that will be notified when the shake is detected
private OnShakeListener mShakeListener;
// Start time for the shake detection
long startTime = 0;
// Counter for shake movements
int moveCount = 0;
// Constructor that sets the shake listener
public ShakeDetector(OnShakeListener shakeListener) {
mShakeListener = shakeListener;
}
@Override
public void onSensorChanged(SensorEvent event) {
// This method will be called when the accelerometer detects a change.
// Call a helper method that wraps code from the Android developer site
setCurrentAcceleration(event);
// Get the max linear acceleration in any direction
float maxLinearAcceleration = getMaxCurrentLinearAcceleration();
// Check if the acceleration is greater than our minimum threshold
if (maxLinearAcceleration > MIN_SHAKE_ACCELERATION) {
long now = System.currentTimeMillis();
// Set the startTime if it was reset to zero
if (startTime == 0) {
startTime = now;
}
long elapsedTime = now - startTime;
// Check if we're still in the shake window we defined
if (elapsedTime > MAX_SHAKE_DURATION) {
// Too much time has passed. Start over!
resetShakeDetection();
} else {
// Keep track of all the movements
moveCount++;
// Check if enough movements have been made to qualify as a shake
if (moveCount > MIN_MOVEMENTS) {
// It's a shake! Notify the listener.
mShakeListener.onShake();
// Reset for the next one!
resetShakeDetection();
}
}
}
}
@Override
public void onAccuracyChanged(Sensor sensor, int accuracy) {
// Intentionally blank
}
private void setCurrentAcceleration(SensorEvent event) {
/*
* BEGIN SECTION from Android developer site. This code accounts for
* gravity using a high-pass filter
*/
// alpha is calculated as t / (t + dT)
// with t, the low-pass filter's time-constant
// and dT, the event delivery rate
final float alpha = 0.8f;
// Gravity components of x, y, and z acceleration
mGravity[X] = alpha * mGravity[X] + (1 - alpha) * event.values[X];
mGravity[Y] = alpha * mGravity[Y] + (1 - alpha) * event.values[Y];
mGravity[Z] = alpha * mGravity[Z] + (1 - alpha) * event.values[Z];
// Linear acceleration along the x, y, and z axes (gravity effects removed)
mLinearAcceleration[X] = event.values[X] - mGravity[X];
mLinearAcceleration[Y] = event.values[Y] - mGravity[Y];
mLinearAcceleration[Z] = event.values[Z] - mGravity[Z];
/*
* END SECTION from Android developer site
*/
}
private float getMaxCurrentLinearAcceleration() {
// Start by setting the value to the x value
float maxLinearAcceleration = mLinearAcceleration[X];
// Check if the y value is greater
if (mLinearAcceleration[Y] > maxLinearAcceleration) {
maxLinearAcceleration = mLinearAcceleration[Y];
}
// Check if the z value is greater
if (mLinearAcceleration[Z] > maxLinearAcceleration) {
maxLinearAcceleration = mLinearAcceleration[Z];
}
// Return the greatest value
return maxLinearAcceleration;
}
private void resetShakeDetection() {
startTime = 0;
moveCount = 0;
}
// (I'd normally put this definition in its own .java file)
public interface OnShakeListener {
public void onShake();
}
}
As you said, the ShakeDetector works if you start it before onTouch(). The likely cause is that the detector created inside ACTION_DOWN is never registered with the SensorManager (onResume() registered the earlier instance), so its onShake() never fires. I suggest the following:
1) Create a field in your Activity/Fragment:
boolean vibrateOnShake = false;
2) Initialize your ShakeDetector outside of onTouch():
mShakeDetector = new ShakeDetector(new ShakeDetector.OnShakeListener() {
public void onShake() {
if (vibrateOnShake) {
Vibrator v = (Vibrator) getSystemService(Context.VIBRATOR_SERVICE);
v.vibrate(300);
status = RELEASE_TO_SEND;
}
}
});
3) Update your onTouch():
@Override
public boolean onTouch(View v, MotionEvent event) {
int action = event.getAction();
switch (action) {
case MotionEvent.ACTION_DOWN:
vibrateOnShake = true;
...
break;
...
}
}

Create on click focus on CameraSource - Android QR Code detection

I'm developing an Android app which allows the user to check a QR code's content and execute something according to the read result.
In order to improve the performance, I'd like to implement 2 methods:
onClickFocus (which allows the user to focus the camera when the screen is clicked)
turnOn/OFF flash (which allows the user to turn the flash on/off)
I've done some digging and figured out that to manage the camera and flash I need to be able to manage the Camera as an object itself.
And here is where the nightmare begins.
I'm using the following code to show the camera result and track QR codes.
import android.app.FragmentTransaction;
import android.content.Context;
import android.os.Bundle;
import android.os.Vibrator;
import android.support.v4.widget.DrawerLayout;
import android.support.v7.app.AppCompatActivity;
import android.util.Log;
import android.util.SparseArray;
import android.view.SurfaceHolder;
import android.view.SurfaceView;
import android.widget.TextView;
import com.google.android.gms.vision.Detector;
import com.google.android.gms.vision.barcode.Barcode;
import com.google.android.gms.vision.barcode.BarcodeDetector;
import java.io.IOException;
public class MainReadActivity extends AppCompatActivity {
public SurfaceView cameraView;
private TextView barcodeInfo;
public BarcodeDetector barcodeDetector;
public CameraSource cameraSource;
public Vibrator v;
public String textInfo;
public DrawerLayout mDrawerLayout;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main_read);
v = (Vibrator) getSystemService(Context.VIBRATOR_SERVICE);
mDrawerLayout = (DrawerLayout) findViewById(R.id.drawer_layout_main);
getSupportFragmentManager().findFragmentById(R.id.drawer_layout_main);
cameraView = (SurfaceView) findViewById(R.id.camera_view);
//barcodeInfo = (TextView) findViewById(R.id.code_info);
barcodeDetector = new BarcodeDetector.Builder(this)
.setBarcodeFormats(Barcode.QR_CODE)
.build();
cameraSource = new CameraSource.Builder(this, barcodeDetector).build();
cameraView.getHolder().addCallback(new SurfaceHolder.Callback() {
@Override
public void surfaceCreated(SurfaceHolder holder) {
try {
cameraSource.start(cameraView.getHolder());
} catch (IOException ie) {
Log.e("CAMERA SOURCE", ie.getMessage());
}
}
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
}
@Override
public void surfaceDestroyed(SurfaceHolder holder) {
cameraSource.stop();
}
});
barcodeDetector.setProcessor(new Detector.Processor<Barcode>() {
@Override
public void release() {
}
@Override
public void receiveDetections(Detector.Detections<Barcode> detections) {
final SparseArray<Barcode> barcodes = detections.getDetectedItems();
if (barcodes.size() != 0) {
new Runnable() { // Use the post method of the TextView
public void run() {
v.vibrate(500);
// textInfo = barcodes.valueAt(0).displayValue;
MyFragmentDialog newf = new MyFragmentDialog();
FragmentTransaction transaction = getFragmentManager().beginTransaction();
transaction.replace(R.id.fragment_container, newf);
transaction.addToBackStack("tag");
transaction.commit();
}
};
}
}
});
}
public void onBackPressed() {
// do nothing
}
}
So, I need to get access to the Camera from the CameraSource (am I right?).
Since that is not possible, I tried to use this CameraSource class from googlesamples' git, which allows the use of the setFocusMode method... but unfortunately I wasn't successful.
I also tried to use API 21, since API 22 no longer supports Camera and CameraPreferences.
I'm pretty sure this is not only my problem, but I couldn't find a way to fix it.
Anyone can help?
FIXED:
Just use this CameraSource (github.com/googlesamples/android-vision/blob/master/visionSamples/barcode-reader/app/src/main/java/com/google/android/gms/samples/vision/barcodereader/ui/camera/CameraSource.java). Yeah, I know, I suggested it before... but this time it fixed my problem! So, if you're going to use it, make sure your compile line looks like this:
compile 'com.google.android.gms:play-services:8.1.0'
Initialize these and define them in onCreate:
Camera.Parameters params;
Camera camera;
CameraSource cameraSource;
SurfaceView cameraView;
boolean isFlash = false;
Call the changeFlashStatus() method to turn the flash ON, and call it again to turn the flash OFF:
public void changeFlashStatus() {
Field[] declaredFields = CameraSource.class.getDeclaredFields();
for (Field field : declaredFields) {
if (field.getType() == Camera.class) {
field.setAccessible(true);
try {
camera = (Camera) field.get(cameraSource);
if (camera != null) {
params = camera.getParameters();
if (!isFlash) {
params.setFlashMode(Camera.Parameters.FLASH_MODE_TORCH);
flashImage.setColorFilter(getResources().getColor(R.color.yellow));
isFlash = true;
} else {
params.setFlashMode(Camera.Parameters.FLASH_MODE_OFF);
flashImage.setColorFilter(getResources().getColor(R.color.greyLight));
isFlash = false;
}
camera.setParameters(params);
}
} catch (IllegalAccessException e) {
e.printStackTrace();
}
break;
}
}
}
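Then wire it to whatever control should toggle the flash, for example (assuming flashImage is the same ImageView used above):
flashImage.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
changeFlashStatus(); // flips between torch on and off
}
});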
To get the camera to focus, you need a specific area (Rect) to pass to the Camera so it focuses on that area. So we implement an onTouchListener() for the surfaceView: when we touch the surfaceView, the MotionEvent tells us exactly where on the surfaceView we touched, and we can extract a Rect from that MotionEvent.
Call initCameraFocusListener() in your onCreate(); to be safe, call it after the camera has started.
private void initCameraFocusListener() {
cameraView.setOnTouchListener(new View.OnTouchListener() {
@Override
public boolean onTouch(View v, MotionEvent event) {
cameraFocus(event, cameraSource, Camera.Parameters.FOCUS_MODE_AUTO);
return false;
}
});
}
private boolean cameraFocus(MotionEvent event, @NonNull CameraSource cameraSource, @NonNull String focusMode) {
Field[] declaredFields = CameraSource.class.getDeclaredFields();
int pointerId = event.getPointerId(0);
int pointerIndex = event.findPointerIndex(pointerId);
// Get the pointer's current position
float x = event.getX(pointerIndex);
float y = event.getY(pointerIndex);
float touchMajor = event.getTouchMajor();
float touchMinor = event.getTouchMinor();
Rect touchRect = new Rect((int)(x - touchMajor / 2), (int)(y - touchMinor / 2), (int)(x + touchMajor / 2), (int)(y + touchMinor / 2));
Rect focusArea = new Rect();
focusArea.set(touchRect.left * 2000 / cameraView.getWidth() - 1000,
touchRect.top * 2000 / cameraView.getHeight() - 1000,
touchRect.right * 2000 / cameraView.getWidth() - 1000,
touchRect.bottom * 2000 / cameraView.getHeight() - 1000);
// Submit focus area to camera
ArrayList<Camera.Area> focusAreas = new ArrayList<Camera.Area>();
focusAreas.add(new Camera.Area(focusArea, 1000));
for (Field field : declaredFields) {
if (field.getType() == Camera.class) {
field.setAccessible(true);
try {
camera = (Camera) field.get(cameraSource);
if (camera != null) {
params = camera.getParameters();
params.setFocusMode(focusMode);
params.setFocusAreas(focusAreas);
camera.setParameters(params);
// Start the autofocus operation
camera.autoFocus(new Camera.AutoFocusCallback() {
@Override
public void onAutoFocus(boolean b, Camera camera) {
// currently set to auto-focus on single touch
}
});
return true;
}
return false;
} catch (IllegalAccessException e) {
e.printStackTrace();
}
break;
}
}
return false;
}
I use this library and it works really well, and it's easy to implement:
https://github.com/dm77/barcodescanner
Answered here: Google Vision API Samples: Get the CameraSource to Focus
To autofocus, use
.setAutoFocusEnabled(true) on your CameraSource.Builder()
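For example (the preview size and fps values are illustrative):
cameraSource = new CameraSource.Builder(this, barcodeDetector)
.setAutoFocusEnabled(true) // continuous autofocus, no touch handling needed
.setRequestedPreviewSize(1600, 1024)
.setRequestedFps(15.0f)
.build();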

Animation in Live Wallpaper Android?

I am a fairly new developer and I'm trying to make a live wallpaper app. Among many animations, my first target is to show a rotating Bitmap, which is actually a black hole.
public class Painting extends Thread {
/** Reference to the View and the context */
private SurfaceHolder surfaceHolder;
private Context context;
/** State */
private boolean wait;
private boolean run;
/** Dimensions */
private int width;
private int height;
/** Time tracking */
private long previousTime;
boolean first = true;
Bitmap hole;
int degree;
public Painting(Context con , SurfaceHolder surf)
{
context = con;
surfaceHolder = surf;
this.wait = true;
Log.i("Live Test","UnInitialized");
Drawable d = (con.getResources().getDrawable(R.drawable.vlack));
hole = ((BitmapDrawable)d).getBitmap();
hole.prepareToDraw();
if(hole != null)
Log.i("Live Test","Initialized");
run = true;
wait = false;
degree = 0;
}
@Override
public void run()
{
while (run) {
this.run = true;
Canvas c = null;
Log.i("Live Test","Draw Color");
while (run) {
try {
c = this.surfaceHolder.lockCanvas();
synchronized (this.surfaceHolder) {
doDraw(c);
}
} finally {
if (c != null) {
this.surfaceHolder.unlockCanvasAndPost(c);
Log.i("Live Test","Unlocked And Posted");
}
}
// pause if no need to animate
synchronized (this) {
if (wait) {
try {
wait();
} catch (Exception e) {
Log.i("Live Test","Error wait");
}
}
}
}
}
}
public void setSurfaceSize(int width, int height) {
this.width = width;
this.height = height;
synchronized(this) {
this.notify();
}
}
/**
* Pauses the livewallpaper animation
*/
public void pausePainting() {
this.wait = true;
synchronized(this) {
this.notify();
}
}
/**
* Resume the livewallpaper animation
*/
public void resumePainting() {
this.wait = false;
synchronized(this) {
this.notify();
}
}
/**
* Stop the livewallpaper animation
*/
public void stopPainting() {
this.run = false;
synchronized(this) {
this.notify();
}
}
private void doDraw(Canvas canvas) {
if(first)
{
canvas.save();
canvas.drawColor(0x60444444);
canvas.drawBitmap(hole, 80,80,null);
canvas.restore();
first = false;
}
else
{
long currentTime = System.currentTimeMillis();
long elapsed = currentTime - previousTime;
if (elapsed > 20) {
canvas.save();
degree+= 5;
if(degree>359)degree = degree -358;
canvas.rotate((float) degree);
canvas.restore();
Log.i("Live Test","rotated");
}
previousTime = currentTime;
}
}
}
So I am trying to rotate the bitmap and draw it again, so it looks like it's sucking in stars and all.
Also, I have removed the basic onPause/onResume functions so that you can understand the code easily. I know there is something basic I am missing, but what?
Hmmm. It would take more time than I have right now to puzzle this out, but I do have one suggestion: code your wallpaper using a simpler approach. Scrap the thread and do something more along the lines of the Cube example, that is to say, make a Runnable which schedules calls to itself using postDelayed(). Hope this helps. George.
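For reference, a minimal sketch of that pattern inside a WallpaperService.Engine (the draw() helper and the 20 ms frame delay are illustrative):
private final Handler handler = new Handler();
private final Runnable drawRunner = new Runnable() {
@Override
public void run() {
draw(); // lock the canvas, rotate and draw the bitmap, unlock and post
handler.postDelayed(this, 20); // schedule the next frame
}
};
// Start with handler.post(drawRunner) when the wallpaper becomes visible, and
// call handler.removeCallbacks(drawRunner) in onVisibilityChanged(false).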
