Crash in AudioStreamAAudio::read #1995
I notice from the crash dump that the output callback is using Legacy mode and the input read() is using MMAP. There could also be timing differences that cause race conditions. Are you using a shared_ptr for your stream variables? Line 190 in 89427ed
That can help prevent some use-after-free bugs.
Do you think this could be happening when the user plugs in or unplugs a headset?
It is hard to tell whether a headset connection change is involved from the stack trace alone. We used to have an old bug, b/220061515, reporting a similar crash with Sound Amplifier on Android S. It was closed because it could no longer be reproduced. That may be the same root cause, with the underlying issue never really fixed. I will take a look.
Looking more into the code, it looks like this could be caused by the audio stream being freed while the data callback is firing. In AAudio, the audio stream is held by the client. @sonicdebris, could you please share whether you are using a shared pointer for your stream variables, as Phil mentioned above? It would also be good to know whether you free the stream immediately when receiving a disconnect error.
Hello, sorry for the delay in responding. We are using shared pointers for the streams. We handle stream errors by implementing the error callback. We protect all accesses to the stream with a mutex, and in the audio callback we do a try-lock followed by a null check on the stream pointer, so the audio callback will simply do nothing if some other function is working on the stream or has reset the pointer. I might add that we have the same issue reported on Crashlytics, where we log when it occurs.
@sonicdebris - you wrote:
Do you return true from your onError() call? If you return false then Oboe might close the stream again!
Yes, we do return true.
The reason you see more crashes on the MT6833 may be that MediaTek set compiler options that detect numeric overflows, or those devices may simply have less memory.
I notice that you are using Oboe v1.7.5. Please try using the latest version v1.9.0 from GitHub. |
Thanks for the heads-up! I do not see a tag/release for 1.9.0 but I guess I can just reference this commit: |
V1.9.0 is released now. |
So the crashes just happen at odd times, unrelated to peripheral disconnects. Hmmm. Is there any chance that you are storing a pointer to a C++ object as a long value in a Java object?
I recommend statically allocating the object that contains your audio engine. I will close this if I do not hear back. |
We deployed a client build with Oboe 1.9.0, but the rate of this crash hasn't been affected.
We are doing something similar, but not directly. The Oboe stream objects are part of an "AudioIO" class, which is in turn the native peer of a Java object. We use djinni to generate the JNI bindings automatically, but in principle it works in such a way that the "native peer" long field is the address of a shared pointer. When the Java object is gone (which happens via a mechanism relying on phantom references), the native peer's shared pointer is released and eventually the destructor for the "AudioIO" C++ class is called. There, we trigger an assert if the IO had not already been stopped explicitly (we do try to enforce that IO is started/stopped precisely when needed), but in production builds we still stop and destroy the streams after that (under protection of the mutex mentioned before). I'll investigate more in this direction; maybe something weird is happening in the destructor. Thanks for the suggestions!
Android version(s): 9 to 14
Android device(s): several (see below)
Oboe version: 1.7.5
App name used for testing: play store reports
We have been getting reports of this crash for a (long) while from the Google Play dev console:
I think it's some assert in libaaudio. We tried to investigate our code and make it more robust, but we weren't able to affect the rate of this particular crash.
In short, we are reading from the input stream using the synchronous API while in the output stream callback.
Interestingly, the distribution of devices with the most occurrences doesn't match the one for the app install base. Here are the top 10 devices (accounting for ~35% of the crashes):
More interestingly, the Mediatek MT6833 is by far the most represented in the whole list (a couple hundred devices).
Hopefully someone here has some hunch on what we might be doing wrong, or a workaround for some known device-specific issue.