java.lang.InternalError Thread starting during runtime shutdown - android

We are not able to determine the exact cause of this exception. Does anybody have an idea why it occurs in an Android application? Thanks in advance.
Here is the full stack trace of the exception:
Fatal Exception: java.lang.InternalError: Thread starting during runtime shutdown
at java.lang.Thread.nativeCreate(Thread.java)
at java.lang.Thread.start(Thread.java:1063)
at org.apache.http.impl.conn.tsccm.AbstractConnPool.enableConnectionGC(AbstractConnPool.java:145)
at org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager.createConnectionPool(ThreadSafeClientConnManager.java:125)
at org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager.<init>(ThreadSafeClientConnManager.java:103)
at org.acra.util.HttpRequest.getHttpClient(HttpRequest.java:214)
at org.acra.util.HttpRequest.send(HttpRequest.java:141)
at org.acra.sender.HttpSender.send(HttpSender.java:225)
at org.acra.SendWorker.sendCrashReport(SendWorker.java:179)
at org.acra.SendWorker.checkAndSendReports(SendWorker.java:141)
at org.acra.SendWorker.run(SendWorker.java:77)

For reference, this is the ART runtime function Thread::CreateNativeThread that throws the error; it refuses to start a new thread once the runtime has begun shutting down:

void Thread::CreateNativeThread(JNIEnv* env, jobject java_peer, size_t stack_size, bool is_daemon) {
  CHECK(java_peer != nullptr);
  Thread* self = static_cast<JNIEnvExt*>(env)->self;
  if (VLOG_IS_ON(threads)) {
    ScopedObjectAccess soa(env);
    ArtField* f = soa.DecodeField(WellKnownClasses::java_lang_Thread_name);
    mirror::String* java_name = reinterpret_cast<mirror::String*>(f->GetObject(
        soa.Decode<mirror::Object*>(java_peer)));
    std::string thread_name;
    if (java_name != nullptr) {
      thread_name = java_name->ToModifiedUtf8();
    } else {
      thread_name = "(Unnamed)";
    }
    VLOG(threads) << "Creating native thread for " << thread_name;
    self->Dump(LOG(INFO));
  }
  Runtime* runtime = Runtime::Current();
  // Atomically start the birth of the thread ensuring the runtime isn't shutting down.
  bool thread_start_during_shutdown = false;
  {
    MutexLock mu(self, *Locks::runtime_shutdown_lock_);
    if (runtime->IsShuttingDownLocked()) {
      thread_start_during_shutdown = true;
    } else {
      runtime->StartThreadBirth();
    }
  }
  if (thread_start_during_shutdown) {  // <-- the InternalError is thrown here
    ScopedLocalRef<jclass> error_class(env, env->FindClass("java/lang/InternalError"));
    env->ThrowNew(error_class.get(), "Thread starting during runtime shutdown");
    return;
  }
  Thread* child_thread = new Thread(is_daemon);
  // Use global JNI ref to hold peer live while child thread starts.
  child_thread->tlsPtr_.jpeer = env->NewGlobalRef(java_peer);
  stack_size = FixStackSize(stack_size);
  // Thread.start is synchronized, so we know that nativePeer is 0, and know that we're not racing to
  // assign it.
  env->SetLongField(java_peer, WellKnownClasses::java_lang_Thread_nativePeer,
                    reinterpret_cast<jlong>(child_thread));
  // Try to allocate a JNIEnvExt for the thread. We do this here as we might be out of memory and
  // do not have a good way to report this on the child's side.
  std::unique_ptr<JNIEnvExt> child_jni_env_ext(
      JNIEnvExt::Create(child_thread, Runtime::Current()->GetJavaVM()));
  int pthread_create_result = 0;
  if (child_jni_env_ext.get() != nullptr) {
    pthread_t new_pthread;
    pthread_attr_t attr;
    child_thread->tlsPtr_.tmp_jni_env = child_jni_env_ext.get();
    CHECK_PTHREAD_CALL(pthread_attr_init, (&attr), "new thread");
    CHECK_PTHREAD_CALL(pthread_attr_setdetachstate, (&attr, PTHREAD_CREATE_DETACHED),
                       "PTHREAD_CREATE_DETACHED");
    CHECK_PTHREAD_CALL(pthread_attr_setstacksize, (&attr, stack_size), stack_size);
    pthread_create_result = pthread_create(&new_pthread,
                                           &attr,
                                           Thread::CreateCallback,
                                           child_thread);
    CHECK_PTHREAD_CALL(pthread_attr_destroy, (&attr), "new thread");
    if (pthread_create_result == 0) {
      // pthread_create started the new thread. The child is now responsible for managing the
      // JNIEnvExt we created.
      // Note: we can't check for tmp_jni_env == nullptr, as that would require synchronization
      // between the threads.
      child_jni_env_ext.release();
      return;
    }
  }
  // Either JNIEnvExt::Create or pthread_create(3) failed, so clean up.
  {
    MutexLock mu(self, *Locks::runtime_shutdown_lock_);
    runtime->EndThreadBirth();
  }
  // Manually delete the global reference since Thread::Init will not have been run.
  env->DeleteGlobalRef(child_thread->tlsPtr_.jpeer);
  child_thread->tlsPtr_.jpeer = nullptr;
  delete child_thread;
  child_thread = nullptr;
  // TODO: remove from thread group?
  env->SetLongField(java_peer, WellKnownClasses::java_lang_Thread_nativePeer, 0);
  {
    std::string msg(child_jni_env_ext.get() == nullptr ?
        "Could not allocate JNI Env" :
        StringPrintf("pthread_create (%s stack) failed: %s",
                     PrettySize(stack_size).c_str(), strerror(pthread_create_result)));
    ScopedObjectAccess soa(env);
    soa.Self()->ThrowOutOfMemoryError(msg.c_str());
  }
}

Related

Android: Exception when calling a static java method from cpp thread

When my Java class loads, I call this JNI method to set a "listener" from C++ to Java (I record audio in C++ and want to pass its bytes to Java):
MyJava.class
setListener(JNIEnv *env, jclass thiz);
myCpp.cpp
setListener(JNIEnv *env, jclass thiz) {
envMyClass = env;
classMyClass = thiz;
// I read that I need these 2 lines in order to connect the java thread to the cpp thread
env->GetJavaVM(&javavm);
GetJniEnv(javavm, &envCamera);
return 0;
}
bool GetJniEnv(JavaVM *vm, JNIEnv **env) {
bool did_attach_thread = false;
*env = nullptr;
// Check if the current thread is attached to the VM
auto get_env_result = vm->GetEnv((void**)env, JNI_VERSION_1_6);
if (get_env_result == JNI_EDETACHED) {
if (vm->AttachCurrentThread(env, NULL) == JNI_OK) {
did_attach_thread = true;
} else {
// Failed to attach thread. Throw an exception if you want to.
}
} else if (get_env_result == JNI_EVERSION) {
// Unsupported JNI version. Throw an exception if you want to.
}
return did_attach_thread;
}
and then, in the C++ thread in myCpp.cpp, I'm trying to call:
if (envMyClass != nullptr && classMyClass != nullptr && javavm != nullptr) {
LOGD("000");
jmethodID javaMethod = envMyClass->GetStaticMethodID(classMyClass, "myJavaFunction", "()V");
LOGD("001");
envMyClass->CallStaticVoidMethod(classMyClass, javaMethod);
}
and it crashes on the line "jmethodID javaMethod = ...".
myJavaFunction is a method in the MyJava class:
public static void myJavaFunction() {
Log.d("my_log", "jni callback");
}
the crash:
Abort message: 'JNI DETECTED ERROR IN APPLICATION: a thread (tid 12346 is making JNI calls without being attached
in call to GetStaticMethodID'
Any idea how to fix it?
I've noticed a few things:
In setListener, I assume you are assigning jclass thiz to a global variable classMyClass. To do so, you should use classMyClass = env->NewGlobalRef(thiz), otherwise the class reference will not be valid (reference).
You are calling GetJniEnv but you do not check its result and therefore you do not actually know whether the current thread has attached successfully.
I assume you are attaching the current thread in setListener, saving the env to a global variable, and then using that variable at another point in your code. Could it be that you are switching thread context somewhere in between? You could do the following to make sure the current thread is really attached:
JNIEnv *thisEnv;
int getEnvStat = jvm->GetEnv((void **)&thisEnv, JNI_VERSION_1_6 /* or whatever your JNI version is*/);
if (getEnvStat == JNI_EDETACHED)
{
if (jvm->AttachCurrentThread(&thisEnv, NULL) != 0)
{
// throw error
}
else
{
jmethodID javaMethod = thisEnv->GetStaticMethodID(classMyClass, "myJavaFunction", "()V");
thisEnv->CallStaticVoidMethod(classMyClass, javaMethod);
}
}
else if (getEnvStat == JNI_OK)
{
jmethodID javaMethod = thisEnv->GetStaticMethodID(classMyClass, "myJavaFunction", "()V");
thisEnv->CallStaticVoidMethod(classMyClass, javaMethod);
}
else if (getEnvStat == JNI_EVERSION)
{
// throw error
}
You can make this more efficient if you call jmethodID javaMethod = thisEnv->GetStaticMethodID(classMyClass, "myJavaFunction", "()V"); only once initially and store the jmethodID in a global variable. jmethodIDs are valid across envs.
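Putting those points together, a minimal sketch of the corrected flow might look like the following (the names g_jvm, g_myClass, g_myJavaMethod and NotifyJava are placeholders introduced here, not part of the original code): cache the JavaVM and a global class reference in setListener, look up the jmethodID once, and have the recording thread attach itself before the callback.
#include <jni.h>

static JavaVM   *g_jvm = nullptr;
static jclass    g_myClass = nullptr;       // global ref, valid across threads
static jmethodID g_myJavaMethod = nullptr;  // jmethodIDs are valid across envs

// Called once from the Java side (the original setListener).
int setListener(JNIEnv *env, jclass thiz) {
    env->GetJavaVM(&g_jvm);                        // the JavaVM itself is safe to cache
    g_myClass = (jclass) env->NewGlobalRef(thiz);  // a plain jclass would become invalid
    g_myJavaMethod = env->GetStaticMethodID(g_myClass, "myJavaFunction", "()V");
    return 0;
}

// Called from the native recording thread for every audio buffer.
void NotifyJava() {
    JNIEnv *env = nullptr;
    bool attached = false;
    if (g_jvm->GetEnv((void **) &env, JNI_VERSION_1_6) == JNI_EDETACHED) {
        if (g_jvm->AttachCurrentThread(&env, nullptr) != JNI_OK)
            return;  // could not attach; nothing we can do
        attached = true;
    }
    env->CallStaticVoidMethod(g_myClass, g_myJavaMethod);
    if (attached)
        g_jvm->DetachCurrentThread();
}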

Why doesn't my Android JNI C++ try block catch an exception?

I am a JNI novice. Why doesn't my Android JNI C++ try block catch an exception? The code crashes, and the app crashes without jumping to the exception handler.
This is my code.
Activity code:
H264 data is received into a queue at the Java layer.
A decoding thread is started that continuously takes H264 packets from the queue and passes them to the C++ layer to decode with FFmpeg:
class decode extends Thread {
@Override
public void run() {
// super.run();
while (isDecode) {
byte[] data = one.poll();
if (data != null)
if (ffmpegUtilsInSignalOne != null)
ffmpegUtilsInSignalOne.decodeH264One(data);
}
}
}
cpp File code
extern "C"
JNIEXPORT void JNICALL
Java_cn_zhihuiyun_control_utils_FFMPEGUtils_decodeH264One(JNIEnv *env, jobject thiz,
jbyteArray data) {
if (packetOne != nullptr && pCodecCtxOne != nullptr && vFrameOne != nullptr) {
if (data == nullptr)
return;
if (isPlay == 0)
return;
jbyte *arr = env->GetByteArrayElements(data, JNI_FALSE);
packetOne->data = (uint8_t *) arr;
packetOne->size = env->GetArrayLength(data);
avcodec_send_packet(pCodecCtxOne, packetOne);
avcodec_receive_frame(pCodecCtxOne, vFrameOne);
av_packet_unref(packetOne);
env->ReleaseByteArrayElements(data, arr, 0);
}
}
A display thread is then started in the C++ layer for screen rendering:
extern "C"
JNIEXPORT void JNICALL
Java_cn_zhihuiyun_control_utils_FFMPEGUtils_initVisThread(JNIEnv *env, jobject thiz) {
isPlay = 1;
threadVister = pthread_create(&threadVister, nullptr,
reinterpret_cast<void *(*)(void *)>(&showVideo),
(void *) env);
pthread_detach(threadVister);
}
void *showVideo(JNIEnv *env) {
try {
while (isPlay == 1) {
if (vFrameOne != nullptr && pFrameRGBAOne != nullptr && pCodecCtxOne != nullptr &&
nativeWindowOne != nullptr) {
ANativeWindow_lock(nativeWindowOne, &windowBufferOne, nullptr);
av_image_fill_arrays(pFrameRGBAOne->data, pFrameRGBAOne->linesize,
(const uint8_t *) windowBufferOne.bits, AV_PIX_FMT_RGBA,
detWidth, detHeight, 1);
int re = -1;
try {
re = libyuv::I420ToARGB(vFrameOne->data[0], vFrameOne->linesize[0],
vFrameOne->data[2], vFrameOne->linesize[2],
vFrameOne->data[1], vFrameOne->linesize[1],
pFrameRGBAOne->data[0], pFrameRGBAOne->linesize[0],
pCodecCtxOne->width, pCodecCtxOne->height);
if (env->ExceptionCheck()) {
env->ExceptionDescribe();
env->ExceptionClear();
}
} catch (...) {
}
if (re == -1) {
} else {
ANativeWindow_unlockAndPost(nativeWindowOne);
}
}
}
} catch (...) {
printf("error");
}
if (vFrameOne != nullptr) {
av_frame_free(&vFrameOne);
vFrameOne = nullptr;
}
if (packetOne != nullptr) {
av_free(packetOne);
packetOne = nullptr;
}
if (pFrameRGBAOne != nullptr) {
av_free(pFrameRGBAOne);
pFrameRGBAOne = nullptr;
}
if (pCodecCtxOne != nullptr) {
avcodec_free_context(&pCodecCtxOne);
pCodecCtxOne = nullptr;
}
if (nativeWindowOne != nullptr) {
ANativeWindow_release(nativeWindowOne);
nativeWindowOne = nullptr;
}
pthread_exit(&threadVister);
}
You should carefully read the documentation of avcodec_receive_frame:
Return decoded output data from a decoder.
Parameters
avctx codec context
frame This will be set to a reference-counted video or audio frame (depending on the decoder type) allocated by the decoder. Note that the function will always call av_frame_unref(frame) before doing anything else.
Returns
0: success, a frame was returned
AVERROR(EAGAIN): output is not available in this state - user must try to send new input
AVERROR_EOF: the decoder has been fully flushed, and there will be no more output frames
AVERROR(EINVAL): codec not opened, or it is an encoder
other negative values: legitimate decoding errors
I have highlighted two key pieces of information:
First, the call to avcodec_receive_frame will invalidate vFrameOne. If your other thread is in the middle of decoding, your program will crash. You will need to establish a synchronization mechanism between receiving and displaying threads to make sure the receiver is always receiving into a frame only it has access to, and that it only passes full frames to the displaying side. (see next point)
Second, you should check the return value of avcodec_receive_frame. If you see an AVERROR(EAGAIN) you have not received enough packets to produce a full frame. Only if this function produces 0 can you take the full frame and hand it over to the displaying thread.
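For illustration, here is a sketch of a decode step that checks those return values; ctx, pkt and frame stand in for pCodecCtxOne, packetOne and vFrameOne, and g_frameMutex is an assumed mutex shared with the display thread (a sketch of the idea, not a drop-in replacement):
#include <mutex>
extern "C" {
#include <libavcodec/avcodec.h>
}

std::mutex g_frameMutex;  // assumed to also be taken by the rendering thread

void DecodePacket(AVCodecContext *ctx, AVPacket *pkt, AVFrame *frame) {
    if (avcodec_send_packet(ctx, pkt) < 0)
        return;  // feeding the decoder failed (e.g. corrupt data); drop the packet
    for (;;) {
        std::lock_guard<std::mutex> lock(g_frameMutex);  // keep the display thread out of `frame`
        int ret = avcodec_receive_frame(ctx, frame);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            break;  // not enough input yet, or decoder fully flushed
        if (ret < 0)
            break;  // a real decoding error
        // ret == 0: `frame` now holds a complete picture; only at this point
        // is it safe to hand it over to (or copy it for) the rendering thread.
    }
}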

What is the best way to save JNIEnv*

I have an Android project with JNI. In the CPP file which implements a listener class, there is a callback x(). When the x() function is called, I want to call another function in a Java class. However, in order to invoke that Java function, I need access to the JNIEnv*.
I know that in the same cpp file of the callback, there is a function:
static jboolean init (JNIEnv* env, jobject obj) {...}
Should I save the JNIEnv* as a member variable in the cpp file when init(..) is called, and use it later when the callback happens?
Sorry but I am a beginner in JNI.
Caching a JNIEnv* is not a particularly good idea, since you can't use the same JNIEnv* across multiple threads, and might not even be able to use it for multiple native calls on the same thread (see http://android-developers.blogspot.se/2011/11/jni-local-reference-changes-in-ics.html)
Writing a function that gets the JNIEnv* and attaches the current thread to the VM if necessary isn't too difficult:
bool GetJniEnv(JavaVM *vm, JNIEnv **env) {
bool did_attach_thread = false;
*env = nullptr;
// Check if the current thread is attached to the VM
auto get_env_result = vm->GetEnv((void**)env, JNI_VERSION_1_6);
if (get_env_result == JNI_EDETACHED) {
if (vm->AttachCurrentThread(env, NULL) == JNI_OK) {
did_attach_thread = true;
} else {
// Failed to attach thread. Throw an exception if you want to.
}
} else if (get_env_result == JNI_EVERSION) {
// Unsupported JNI version. Throw an exception if you want to.
}
return did_attach_thread;
}
The way you'd use it is:
JNIEnv *env;
bool did_attach = GetJniEnv(vm, &env);
// Use env...
// ...
if (did_attach) {
vm->DetachCurrentThread();
}
You could wrap this in a class that attaches upon construction and detaches upon destruction, RAII-style:
class ScopedEnv {
public:
ScopedEnv() : attached_to_vm_(false) {
attached_to_vm_ = GetJniEnv(g_vm, &env_); // g_vm is a global
}
ScopedEnv(const ScopedEnv&) = delete;
ScopedEnv& operator=(const ScopedEnv&) = delete;
virtual ~ScopedEnv() {
if (attached_to_vm_) {
g_vm->DetachCurrentThread();
attached_to_vm_ = false;
}
}
JNIEnv *GetEnv() const { return env_; }
private:
bool attached_to_vm_;
JNIEnv *env_;
};
// Usage:
{
ScopedEnv scoped_env;
scoped_env.GetEnv()->SomeJniFunction();
}
// scoped_env falls out of scope, the thread is automatically detached if necessary
Edit: Sometimes you might have a long-ish running native thread that will need a JNIEnv* on multiple occasions. In such situations you may want to avoid constantly attaching and detaching the thread to/from the JVM, but you still need to make sure that you detach the thread upon thread destruction.
You can accomplish this by attaching the thread only once and then leaving it attached, and by setting up a thread destruction callback using pthread_key_create and pthread_setspecific that will take care of calling DetachCurrentThread.
/**
* Get a JNIEnv* valid for this thread, regardless of whether
* we're on a native thread or a Java thread.
* If the calling thread is not currently attached to the JVM
* it will be attached, and then automatically detached when the
* thread is destroyed.
*/
JNIEnv *GetJniEnv() {
JNIEnv *env = nullptr;
// We still call GetEnv first to detect if the thread already
// is attached. This is done to avoid setting up a DetachCurrentThread
// call on a Java thread.
// g_vm is a global.
auto get_env_result = g_vm->GetEnv((void**)&env, JNI_VERSION_1_6);
if (get_env_result == JNI_EDETACHED) {
if (g_vm->AttachCurrentThread(&env, NULL) == JNI_OK) {
DeferThreadDetach(env);
} else {
// Failed to attach thread. Throw an exception if you want to.
}
} else if (get_env_result == JNI_EVERSION) {
// Unsupported JNI version. Throw an exception if you want to.
}
return env;
}
void DeferThreadDetach(JNIEnv *env) {
static pthread_key_t thread_key;
// Set up a Thread Specific Data key, and a callback that
// will be executed when a thread is destroyed.
// This is only done once, across all threads, and the value
// associated with the key for any given thread will initially
// be NULL.
static auto run_once = [] {
const auto err = pthread_key_create(&thread_key, [] (void *ts_env) {
if (ts_env) {
g_vm->DetachCurrentThread();
}
});
if (err) {
// Failed to create TSD key. Throw an exception if you want to.
}
return 0;
}();
// For the callback to actually be executed when a thread exits
// we need to associate a non-NULL value with the key on that thread.
// We can use the JNIEnv* as that value.
const auto ts_env = pthread_getspecific(thread_key);
if (!ts_env) {
if (pthread_setspecific(thread_key, env)) {
// Failed to set thread-specific value for key. Throw an exception if you want to.
}
}
}
If __cxa_thread_atexit is available to you, you might be able to accomplish the same thing with some thread_local object that calls DetachCurrentThread in its destructor.
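A rough sketch of that thread_local variant, assuming the same global g_vm as above and a C++ runtime that runs thread_local destructors at thread exit (which is what __cxa_thread_atexit provides):
#include <jni.h>

extern JavaVM *g_vm;  // the cached JavaVM, as in the code above

struct JniThreadDetacher {
    bool attached = false;
    ~JniThreadDetacher() {
        if (attached)
            g_vm->DetachCurrentThread();  // runs automatically when this thread exits
    }
};

JNIEnv *GetJniEnvThreadLocal() {
    thread_local JniThreadDetacher detacher;  // one per thread, destroyed at thread exit
    JNIEnv *env = nullptr;
    auto get_env_result = g_vm->GetEnv((void **) &env, JNI_VERSION_1_6);
    if (get_env_result == JNI_EDETACHED) {
        if (g_vm->AttachCurrentThread(&env, NULL) == JNI_OK) {
            detacher.attached = true;  // we attached this thread, so detach it later
        } else {
            // Failed to attach thread. Throw an exception if you want to.
        }
    } else if (get_env_result == JNI_EVERSION) {
        // Unsupported JNI version. Throw an exception if you want to.
    }
    return env;
}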
@Michael gives a good overview of how best to retrieve the JNIEnv by caching the JavaVM.
For those that don't want to use pthread (or can't, because you are on a Windows system) and are using C++11 or higher, thread_local storage is the way to go.
Below is a rough example of how to implement a wrapper method that properly attaches to a thread and automatically cleans up when the thread exits.
JNIEnv* JNIThreadHelper::GetJniEnv() {
// This method might have been called from a different thread than the one that created
// this handler. Check to make sure that the JNI is attached and if not attach it to the
// new thread.
// double check it's all ok
int nEnvStat = m_pJvm->GetEnv(reinterpret_cast<void**>(&m_pJniEnv), JNI_VERSION_1_6);
if (nEnvStat == JNI_EDETACHED) {
std::cout << "GetEnv: not attached. Attempting to attach" << std::endl;
JavaVMAttachArgs args;
args.version = JNI_VERSION_1_6; // choose your JNI version
args.name = NULL; // you might want to give the java thread a name
args.group = NULL; // you might want to assign the java thread to a ThreadGroup
if (m_pJvm->AttachCurrentThread(&m_pJniEnv, &args) != 0) {
std::cout << "Failed to attach" << std::endl;
return nullptr;
}
struct DetachJniOnExit {
JavaVM *jvm = nullptr;
~DetachJniOnExit() { if (jvm) jvm->DetachCurrentThread(); }
};
// Instantiate it as a thread_local so its destructor runs at thread exit.
thread_local DetachJniOnExit detachJniOnExit;
detachJniOnExit.jvm = m_pJvm;
m_bIsAttachedOnAThread = true;
}
else if (nEnvStat == JNI_OK) {
//
}
else if (nEnvStat == JNI_EVERSION) {
std::cout << "GetEnv: version not supported" << std::endl;
return nullptr;
}
return m_pJniEnv;
}

SDL2 gets stuck in SDL_RenderClear()

I have two files: one is the main file (main.cpp), the other is for multi-threading (threads.cpp).
I use SDL_PushEvent() in threads.cpp and SDL_PollEvent() in main.cpp.
Below is the logic of my sample code.
main.cpp
bool Init() {
if (SDL_Init(SDL_INIT_VIDEO) < 0)
return false;
SDL_DisplayMode mode;
SDL_GetDisplayMode(0, 0, &mode);
this->win_width = mode.w;
this->win_height = mode.h;
this->win = SDL_CreateWindow(NULL, 0, 0, win_width, win_height, SDL_WINDOW_SHOWN | SDL_WINDOW_FULLSCREEN | SDL_WINDOW_OPENGL);
if (this->win == NULL) {
LOGE("[Init] SDL Window Created failed : %s", SDL_GetError());
return false;
}
this->renderer = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);
if (this->renderer == NULL) {
LOGE("[Init] SDL Renderer Created failed : %s", SDL_GetError());
return false;
}
this->bmp = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGB565, SDL_TEXTUREACCESS_STREAMING, win_width, win_height);
if (this->bmp == NULL) {
LOGE("[Init] SDL Texture Created failed : %s", SDL_GetError());
return false;
}
return true;
}
void DisplayEvent (SDL_Event e) {
FrameObject obj = *(FrameObject*) e.user.data1;
SDL_Rect rect;
rect.x = rect.y = 0;
rect.w = obj.frameWidth;
rect.h = obj.frameHeight;
int r = SDL_UpdateTexture(this->bmp, NULL, obj.FrameData.RGB, rect.w*2);
LOGI("[DisplayEvent] - UpdateTexture");
// Reneder this Frame
SDL_RenderClear(this->renderer);
LOGI("[DisplayEvent] - RenderClear");
SDL_RenderCopy(this->renderer, this->bmp, NULL, &rect);
LOGI("[DisplayEvent] - RenderCopy");
SDL_RenderPresent(this->renderer);
LOGI("[DisplayEvent] - RenderPresent");
}
int main (int argc, char **argv) {
Init();
while (!quit) {
SDL_Event e;
// Event Polling
while (SDL_PollEvent(&e)) {
switch (e.type) {
case MY_EVENT:
LOGI("[main] - Get MY_EVENT");
DisplayEvent(e);
LOGI("[main] - %s more MY_EVENT", SDL_HasEvent(MY_EVENT) ? "Has" : "Hasn't");
break;
default:
break;
}
}
}
SDL_Quit();
}
threads.cpp
void* push_event(void *arg) {
FrameObject *obj = (FrameObject*) arg;
while (!quit) {
SDL_Event event;
SDL_zero(event);
event.type = MY_EVENT;
event.user.data1 = obj;
event.user.data2 = 0;
if (SDL_PushEvent(&event) == 1) LOGI("[push_event] - Push MY_EVENT");
else LOGE("[push_event] - Event Push Error : %s", SDL_GetError());
sleep(1);
}
}
EDIT:
I have added more sample code. I found that the problem is not a missed SDL event; the problem is that the SDL (main) thread is blocked in SDL_RenderClear().
The log output shows "[DisplayEvent] - UpdateTexture" but never prints "[DisplayEvent] - RenderClear", which is weird. With a single thread running push_event it is fine, but when I create two threads running push_event, the SDL thread is blocked.
Could the problem be the hardware, i.e. the GPU?
The SDL Wiki on SDL_PushEvent says that it is thread-safe, so I'm assuming it is OK to call it from other threads. However, it also says that you first need to call SDL_RegisterEvents to get an event ID suitable for application-specific events, and then use that ID to push your events.
It is not clear from your code whether you are calling SDL_RegisterEvents; this might be the problem. Also, have you checked whether it works if you push the event from the same thread on which you initialized the video? That test would make sure the problem isn't related to threads.
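For reference, a minimal sketch of that approach: MY_EVENT stops being a hard-coded constant and instead holds the value returned by SDL_RegisterEvents, registered once on the main thread right after SDL_Init; FrameObject is the type from the question.
#include <SDL.h>

static Uint32 MY_EVENT = (Uint32)-1;   // filled in once by RegisterMyEvent()

bool RegisterMyEvent() {
    MY_EVENT = SDL_RegisterEvents(1);  // reserve one application-specific event type
    return MY_EVENT != (Uint32)-1;     // (Uint32)-1 means no event slots were left
}

void PushFrameEvent(FrameObject *obj) {
    SDL_Event event;
    SDL_zero(event);
    event.type = MY_EVENT;             // use the registered type, not a fixed number
    event.user.data1 = obj;
    event.user.data2 = nullptr;
    if (SDL_PushEvent(&event) != 1)
        SDL_Log("Event push error: %s", SDL_GetError());
}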

CreateProcess for android adb start-server?

When I use CreateProcess to launch adb.exe, it blocks in ReadFile.
void KillAdbProcess()
{
DWORD aProcesses[1024], cbNeeded, cProcesses;
unsigned int i;
if ( !EnumProcesses( aProcesses, sizeof(aProcesses), &cbNeeded ) )
return;
cProcesses = cbNeeded / sizeof(DWORD);
for ( i = 0; i < cProcesses; i++ )
if( aProcesses[i] != 0 ){
bool shouldKill =false;
wchar_t szProcessName[MAX_PATH] = L"<unknown>";
//Get a handle to the process.
HANDLE hProcess = OpenProcess( PROCESS_QUERY_INFORMATION |
PROCESS_VM_READ | PROCESS_TERMINATE,
FALSE, aProcesses[i] );
if (NULL != hProcess )
{
HMODULE hMod;
DWORD cbNeeded;
if ( EnumProcessModules( hProcess, &hMod, sizeof(hMod),
&cbNeeded) )
{
GetModuleFileNameExW( hProcess, hMod, szProcessName,
sizeof(szProcessName)/sizeof(TCHAR));
int len = wcslen(szProcessName);
if(!wcscmp(L"\\adb.exe",szProcessName+len-8)){
shouldKill = true;
}
}
}
if(shouldKill) TerminateProcess(hProcess,0);
CloseHandle( hProcess );
}
}
int testadb(){
KillAdbProcess();
char buff[4096] = {0};
int len = sizeof(buff);
DWORD exitCode = 0;
SECURITY_ATTRIBUTES sa;
ZeroMemory(&sa, sizeof(sa));
sa.bInheritHandle = TRUE;
sa.lpSecurityDescriptor = NULL;
sa.nLength = sizeof(sa);
HANDLE hOutputReadTmp,hOutputRead,hOutputWrite;
// Create the child output pipe.
if (!CreatePipe(&hOutputReadTmp,&hOutputWrite,&sa,0))
return false;
// Create new output read handle and the input write handles. Set
// the Properties to FALSE. Otherwise, the child inherits the
// properties and, as a result, non-closeable handles to the pipes
// are created.
if (!DuplicateHandle(GetCurrentProcess(),hOutputReadTmp,
GetCurrentProcess(),
&hOutputRead, // Address of new handle.
0,FALSE, // Make it uninheritable.
DUPLICATE_SAME_ACCESS))
return false;
// Close inheritable copies of the handles you do not want to be
// inherited.
if (!CloseHandle(hOutputReadTmp)) return false;
PROCESS_INFORMATION pi;
ZeroMemory(&pi, sizeof(pi));
STARTUPINFOW si;
GetStartupInfoW(&si);
si.cb = sizeof(STARTUPINFO);
si.dwFlags = STARTF_USESTDHANDLES;
si.wShowWindow = SW_HIDE;
si.hStdInput = NULL;
if(buff) {
si.hStdOutput = hOutputWrite;
si.hStdError = hOutputWrite;
} else {
si.hStdOutput = NULL;
si.hStdError = NULL;
}
wchar_t cmdBuf[512] = L"adb.exe start-server";
if( !::CreateProcessW(NULL, cmdBuf, NULL, NULL, TRUE, DETACHED_PROCESS, NULL, NULL, &si, &pi) )
{
exitCode = -1;
goto exit;
}
::CloseHandle(hOutputWrite);
hOutputWrite = NULL;
len--; //keep it for string end char.
DWORD dwBytes = 0;
DWORD dwHasRead = 0;
while(::ReadFile(hOutputRead, buff+dwHasRead, len-dwHasRead, &dwBytes, NULL))
{
printf("read byte=%d\n",dwBytes);
if(0 == dwBytes) break;
dwHasRead += dwBytes;
//GetExitCodeProcess(pi.hProcess, &exitCode);
//if(STILL_ACTIVE != exitCode) break;
if(dwHasRead >= len) break;
}
buff[dwHasRead] = 0;
::GetExitCodeProcess(pi.hProcess, &exitCode);
exit:
if(hOutputRead) ::CloseHandle(hOutputRead);
if(hOutputWrite) ::CloseHandle(hOutputWrite);
::CloseHandle(pi.hProcess);
::CloseHandle(pi.hThread);
return 0;
}
If I change the code to
while(::ReadFile(hOutputRead, buff+dwHasRead, len-dwHasRead, &dwBytes, NULL))
{
printf("read byte=%d\n",dwBytes);
if(0 == dwBytes) break;
dwHasRead += dwBytes;
GetExitCodeProcess(pi.hProcess, &exitCode);
if(STILL_ACTIVE != exitCode) break;
if(dwHasRead >= len) break;
}
it works, but when I delete the printf call, it blocks again.
while(::ReadFile(hOutputRead, buff+dwHasRead, len-dwHasRead, &dwBytes, NULL))
{
if(0 == dwBytes) break;
dwHasRead += dwBytes;
GetExitCodeProcess(pi.hProcess, &exitCode);
if(STILL_ACTIVE != exitCode) break;
if(dwHasRead >= len) break;
}
In the source code of adb.exe, I see code like the following:
#if ADB_HOST
int launch_server()
{
#ifdef HAVE_WIN32_PROC
/* we need to start the server in the background */
/* we create a PIPE that will be used to wait for the server's "OK" */
/* message since the pipe handles must be inheritable, we use a */
/* security attribute */
HANDLE pipe_read, pipe_write;
SECURITY_ATTRIBUTES sa;
STARTUPINFO startup;
PROCESS_INFORMATION pinfo;
char program_path[ MAX_PATH ];
int ret;
sa.nLength = sizeof(sa);
sa.lpSecurityDescriptor = NULL;
sa.bInheritHandle = TRUE;
/* create pipe, and ensure its read handle isn't inheritable */
ret = CreatePipe( &pipe_read, &pipe_write, &sa, 0 );
if (!ret) {
fprintf(stderr, "CreatePipe() failure, error %ld\n", GetLastError() );
return -1;
}
SetHandleInformation( pipe_read, HANDLE_FLAG_INHERIT, 0 );
ZeroMemory( &startup, sizeof(startup) );
startup.cb = sizeof(startup);
startup.hStdInput = GetStdHandle( STD_INPUT_HANDLE );
startup.hStdOutput = pipe_write;
startup.hStdError = GetStdHandle( STD_ERROR_HANDLE );
startup.dwFlags = STARTF_USESTDHANDLES;
ZeroMemory( &pinfo, sizeof(pinfo) );
/* get path of current program */
GetModuleFileName( NULL, program_path, sizeof(program_path) );
ret = CreateProcess(
program_path, /* program path */
"adb fork-server server",
/* the fork-server argument will set the
debug = 2 in the child */
NULL, /* process handle is not inheritable */
NULL, /* thread handle is not inheritable */
TRUE, /* yes, inherit some handles */
DETACHED_PROCESS, /* the new process doesn't have a console */
NULL, /* use parent's environment block */
NULL, /* use parent's starting directory */
&startup, /* startup info, i.e. std handles */
&pinfo );
CloseHandle( pipe_write );
if (!ret) {
fprintf(stderr, "CreateProcess failure, error %ld\n", GetLastError() );
CloseHandle( pipe_read );
return -1;
}
CloseHandle( pinfo.hProcess );
CloseHandle( pinfo.hThread );
/* wait for the "OK\n" message */
{
char temp[3];
DWORD count;
ret = ReadFile( pipe_read, temp, 3, &count, NULL );
CloseHandle( pipe_read );
if ( !ret ) {
fprintf(stderr, "could not read ok from ADB Server, error = %ld\n", GetLastError() );
return -1;
}
if (count != 3 || temp[0] != 'O' || temp[1] != 'K' || temp[2] != '\n') {
fprintf(stderr, "ADB server didn't ACK\n" );
return -1;
}
}
#elif defined(HAVE_FORKEXEC)
char path[PATH_MAX];
int fd[2];
// set up a pipe so the child can tell us when it is ready.
// fd[0] will be parent's end, and fd[1] will get mapped to stderr in the child.
if (pipe(fd)) {
fprintf(stderr, "pipe failed in launch_server, errno: %d\n", errno);
return -1;
}
get_my_path(path);
pid_t pid = fork();
if(pid < 0) return -1;
if (pid == 0) {
// child side of the fork
// redirect stderr to the pipe
// we use stderr instead of stdout due to stdout's buffering behavior.
adb_close(fd[0]);
dup2(fd[1], STDERR_FILENO);
adb_close(fd[1]);
// child process
int result = execl(path, "adb", "fork-server", "server", NULL);
// this should not return
fprintf(stderr, "OOPS! execl returned %d, errno: %d\n", result, errno);
} else {
// parent side of the fork
char temp[3];
temp[0] = 'A'; temp[1] = 'B'; temp[2] = 'C';
// wait for the "OK\n" message
adb_close(fd[1]);
int ret = adb_read(fd[0], temp, 3);
adb_close(fd[0]);
if (ret < 0) {
fprintf(stderr, "could not read ok from ADB Server, errno = %d\n", errno);
return -1;
}
if (ret != 3 || temp[0] != 'O' || temp[1] != 'K' || temp[2] != '\n') {
fprintf(stderr, "ADB server didn't ACK\n" );
return -1;
}
setsid();
}
#else
#error "cannot implement background server start on this platform"
#endif
return 0;
}
#endif
I think the child process of adb.exe inherits the handles of adb.exe, so if that child process doesn't exit, ReadFile will block forever. But when I run "adb.exe start-server" from the command prompt, everything is OK. So how does the Windows command prompt call CreateProcess and ReadFile?
I have found the answer: Redirecting an arbitrary Console's Input/Output - CodeProject.
The technique of redirecting the input/output of a console process is very simple: the CreateProcess() API, through the STARTUPINFO structure, enables us to redirect the standard handles of a child console-based process, so we can set these handles to a pipe handle, a file handle, or any handle we can read and write. The details of this technique are described clearly in MSDN: HOWTO: Spawn Console Processes with Redirected Standard Handles.
However, MSDN's sample code has two big problems. First, it assumes the child process will send output first, then wait for input, then flush the output buffer and exit. If the child process doesn't behave like that, the parent process will hang. The reason is that the ReadFile() function remains blocked until the child process sends some output or exits.
Second, it has problems redirecting a 16-bit console (including console-based MS-DOS applications). On Windows 9x, ReadFile remains blocked even after the child process has terminated; on Windows NT/XP, ReadFile always returns FALSE with the error code set to ERROR_BROKEN_PIPE if the child process is a DOS application.
Solving the blocking problem of ReadFile
To prevent the parent process from being blocked by ReadFile, we can simply pass a file handle as stdout to the child process, then monitor that file. A simpler way is to call the PeekNamedPipe() function before calling ReadFile(). PeekNamedPipe checks information about the data in the pipe and returns immediately; if there is no data available in the pipe, don't call ReadFile.
By calling PeekNamedPipe before ReadFile, we also solve the blocking problem of redirecting a 16-bit console on Windows 9x.
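Applied to the read loop in the question, that approach could look roughly like the sketch below (reusing the hOutputRead, buff, len and pi variables from testadb; error handling trimmed):
// Poll the pipe so ReadFile is only called when data is actually available,
// and stop once the child process has exited and the pipe is drained.
DWORD dwHasRead = 0;
for (;;) {
    DWORD bytesAvailable = 0;
    if (!PeekNamedPipe(hOutputRead, NULL, 0, NULL, &bytesAvailable, NULL))
        break;  // pipe closed or broken
    if (bytesAvailable > 0) {
        DWORD dwBytes = 0;
        if (!ReadFile(hOutputRead, buff + dwHasRead, len - dwHasRead, &dwBytes, NULL) || dwBytes == 0)
            break;
        dwHasRead += dwBytes;
        if (dwHasRead >= (DWORD)len) break;
    } else {
        DWORD exitCode = STILL_ACTIVE;
        GetExitCodeProcess(pi.hProcess, &exitCode);
        if (exitCode != STILL_ACTIVE)
            break;  // child has exited and there is nothing left to read
        Sleep(50);  // avoid busy-waiting
    }
}
buff[dwHasRead] = 0;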

Categories

Resources