Demo player display is too dark - Android

I built hdrvivid-debug.apk and installed it on a mobile phone running Android 12. When playing an HDR Vivid test stream, the display is much darker than in the phone's default video player.
The Vivid stream's file name is "hdr_vivid_selftest_dmsync_pq.mp4". It is used for HDR Vivid player self-testing and simply displays a white rectangle in the middle of the screen. I can provide this stream if you want.
Thank you!
Houxiang

Update:
You can check whether the device has the HDR Vivid video decoding capability using the following method (docs link below):
// Check the support for MediaCodec on the device.
// Requires android.media.MediaCodecList, android.media.MediaCodecInfo, java.util.Arrays.
// 10-bit HEVC decoding is used here as a proxy for HDR Vivid playback capability.
private boolean isHdr10HevcDecodingSupported() {
    MediaCodecList mcList = new MediaCodecList(MediaCodecList.ALL_CODECS);
    MediaCodecInfo[] mcInfos = mcList.getCodecInfos();
    for (MediaCodecInfo mci : mcInfos) {
        // Filter out the encoders; only decoders matter here.
        if (mci.isEncoder()) {
            continue;
        }
        String[] types = mci.getSupportedTypes();
        String typesArr = Arrays.toString(types);
        // Filter out the non-HEVC decoders.
        if (!typesArr.contains("hevc")) {
            continue;
        }
        for (String type : types) {
            // Check whether 10-bit HEVC decoding is supported.
            MediaCodecInfo.CodecCapabilities codecCapabilities = mci.getCapabilitiesForType(type);
            for (MediaCodecInfo.CodecProfileLevel codecProfileLevel : codecCapabilities.profileLevels) {
                if (codecProfileLevel.profile == MediaCodecInfo.CodecProfileLevel.HEVCProfileMain10
                        || codecProfileLevel.profile == MediaCodecInfo.CodecProfileLevel.HEVCProfileMain10HDR10
                        || codecProfileLevel.profile == MediaCodecInfo.CodecProfileLevel.HEVCProfileMain10HDR10Plus) {
                    // true means supported.
                    return true;
                }
            }
        }
    }
    // false means unsupported.
    return false;
}
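A hypothetical call site (the wrapper method name above was added only to make the snippet compilable; it is not an SDK API):
// Fall back to manual brightness handling when 10-bit HEVC decoding
// (used above as a proxy for HDR Vivid support) is absent.
if (!isHdr10HevcDecodingSupported()) {
    // e.g. take the setBrightness() path described below
}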
If the device has the HDR Vivid video decoding capability, you can adjust the video's brightness with the device's brightness bar.
If the HDR capability is not supported, you need to call the setBrightness() interface to set the video brightness; this interface takes effect only on devices that do not support HDR.
It seems that the HarmonyOS 2.0.0.268 system on the Honor V30 Pro may not support HDR Vivid. The sample code invokes the setBrightness interface and the set value takes effect, but the phone's default player may apply additional processing of its own, so the brightness may not look the same.
If HDR is not supported, you can pass a moderate brightness value to setBrightness().
Which interface do you use for the HDR Vivid capability, the native interface or the Java interface?
Regarding the display being too dark: have you adjusted the output brightness?
https://developer.huawei.com/consumer/en/doc/development/Media-Guides/android-hdr-0000001276893212
If it's convenient, please provide a comparison video and the HDR Vivid resource file for us to check.

Related

How to know whether the Android decoder from MediaCodec.createDecoderByType(type) is a hardware or software decoder?

Is there a way to find out whether the decoder obtained using MediaCodec.createDecoderByType(type) is a hardware decoder or a software decoder?
There is no real formal flag for indicating whether a codec is a hardware or software codec. In practice, you can do this, though:
MediaCodec codec = MediaCodec.createDecoderByType(type);
if (codec.getName().startsWith("OMX.google.")) {
    // Is a software codec
}
(The MediaCodec.getName() method is available since API level 18. For lower API levels, you instead need to iterate over the entries in MediaCodecList and manually pick the right codec that fits your needs instead.)
Putting it here for anyone it might help: according to the libstagefright code, any codec whose name starts with OMX.google. or c2.android., or does not start with OMX. or c2. at all, is a software codec.
// static
bool MediaCodecList::isSoftwareCodec(const AString &componentName) {
    return componentName.startsWithIgnoreCase("OMX.google.")
            || componentName.startsWithIgnoreCase("c2.android.")
            || (!componentName.startsWithIgnoreCase("OMX.")
                    && !componentName.startsWithIgnoreCase("c2."));
}
Source:
https://android.googlesource.com/platform/frameworks/av/+/master/media/libstagefright/MediaCodecList.cpp#320
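For completeness, a hedged sketch: since Android 10 (API 29), MediaCodecInfo exposes isHardwareAccelerated() and isSoftwareOnly(), so the name-prefix heuristic is only needed on older releases. The helper below is illustrative and not part of either answer:
public static boolean isSoftwareDecoder(MediaCodec codec) {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q) {
        // Direct platform query, available on API 29+.
        return codec.getCodecInfo().isSoftwareOnly();
    }
    // Fall back to the libstagefright name heuristic shown above.
    String name = codec.getName();
    return name.startsWith("OMX.google.")
            || name.startsWith("c2.android.")
            || !(name.startsWith("OMX.") || name.startsWith("c2."));
}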

MediaCodec audio/video muxing issues on Android

I am transcoding videos based on the example given by Google (https://android.googlesource.com/platform/cts/+/master/tests/tests/media/src/android/media/cts/ExtractDecodeEditEncodeMuxTest.java)
Basically, transcoding of MP4 files works, but on some phones I get some weird results. If, for example, I transcode a video with audio on an HTC One, the code won't give any errors, but the resulting file cannot be played on the phone afterwards. If I have a 10-second video, playback jumps to almost the last second and you only hear some crackling noise. If you play the video with VLC, the audio track is completely muted.
I did not alter the code in terms of encoding/decoding, and the same code gives correct results on a Nexus 5 or Moto X, for example.
Anybody having an idea why it might fail on that specific device?
Best regards and thank you,
Florian
I made it work on Android 4.4.2 devices with the following changes:
Set the AAC profile to AACObjectLC instead of AACObjectHE:
private static final int OUTPUT_AUDIO_AAC_PROFILE = MediaCodecInfo.CodecProfileLevel.AACObjectLC;
When creating the output audio format, use the sample rate and channel count of the input format instead of fixed values:
MediaFormat outputAudioFormat = MediaFormat.createAudioFormat(OUTPUT_AUDIO_MIME_TYPE,
        inputFormat.getInteger(MediaFormat.KEY_SAMPLE_RATE),
        inputFormat.getInteger(MediaFormat.KEY_CHANNEL_COUNT));
Add a check just before muxing the audio track to keep the presentation timestamps monotonic (to avoid the "timestampUs X < lastTimestampUs X for Audio track" error):
if (audioPresentationTimeUsLast == 0) { // defined at the beginning of the method
    audioPresentationTimeUsLast = audioEncoderOutputBufferInfo.presentationTimeUs;
} else {
    if (audioPresentationTimeUsLast > audioEncoderOutputBufferInfo.presentationTimeUs) {
        audioEncoderOutputBufferInfo.presentationTimeUs = audioPresentationTimeUsLast + 1;
    }
    audioPresentationTimeUsLast = audioEncoderOutputBufferInfo.presentationTimeUs;
}
// Write data
if (audioEncoderOutputBufferInfo.size != 0) {
    muxer.writeSampleData(outputAudioTrack, encoderOutputBuffer, audioEncoderOutputBufferInfo);
}
Hope this helps...
If the original CTS tests fail, you need to go to the device vendors and ask for fixes.

Pause media recorder programmatically: Camera.apk from Samsung Galaxy has `this.mMediaRecorder.pause();`, but it does not work in my code

Now, I have made a library to concatenate 2 videos, using the mp4parser library.
With this I can pause and resume recording a video (after it records the second video, it appends it to the first one).
Now, my boss told me to write a wrapper and use this approach for phones that do not have hardware support for pausing a video recording. For phones that do have it (the Samsung Galaxy S2 and Samsung Galaxy S1 can pause a video recording with their camera application), I need to do this without libraries, so it would be fast.
How can I implement this natively if, as seen in the MediaRecorder state diagram (http://developer.android.com/reference/android/media/MediaRecorder.html), there is no pause state?
I have decompiled the Camera.apk app from a Samsung Galaxy Ace, and the code in CamcorderEngine.class has a method like this:
public void doPauseVideoRecordingSync()
{
    Log.v("CamcorderEngine", "doPauseVideoRecordingSync");
    if (this.mMediaRecorder == null)
    {
        Log.e("CamcorderEngine", "MediaRecorder is not initialized.");
        return;
    }
    if (!this.mMediaRecorderRecording)
    {
        Log.e("CamcorderEngine", "Recording is not started yet.");
        return;
    }
    try
    {
        this.mMediaRecorder.pause();
        enableAlertSound();
        return;
    }
    catch (RuntimeException localRuntimeException)
    {
        Log.e("CamcorderEngine", "Could not pause media recorder. ", localRuntimeException);
        enableAlertSound();
    }
}
If I try this.mMediaRecorder.pause(); in my code, it does not work. How is this possible when they use the same import (android.media.MediaRecorder)? Have they rewritten the whole code at a system level?
Is it possible to take the input stream of the second video (while recording it) and directly append this data to my first video? For my concatenate method I use 2 parameters (the 2 videos, both FileInputStreams); is it possible to take the InputStream from the recording function and pass it as the second parameter?
If I try this.mMediaRecorder.pause();
The MediaRecorder class does not have a pause() function, so it is obvious that there is a custom MediaRecorder class on this specific device. This is not unusual, as the only thing required of OEMs is to pass the Android compatibility tests on the device; there is no restriction on adding functionality.
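Note that official pause/resume support was later added to the public MediaRecorder API in Android 7.0 (API 24); a minimal sketch for devices on that level or newer:
// Available since API 24: pause/resume on the platform MediaRecorder.
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.N) {
    mMediaRecorder.pause();   // recording suspends, output file stays open
    // ... later ...
    mMediaRecorder.resume();  // recording continues into the same file
}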
Is it possible to take the input stream of the second video (while recording it), and directly append this data to my first video?
I am not sure you can do this, because the video stream is encoded data (codec headers, key frames, and so on), and simply concatenating 2 streams into 1 file will not produce a valid video file, in my opinion.
Basically, what you can do (see the sketch after this list):
get raw image data from the camera preview surface (see Camera.setPreviewCallback()),
use an android.media.MediaCodec to encode the video,
and then use a FileOutputStream to write the result to the file.
This gives you the flexibility you want, since your app then decides which frames go into the encoder and which do not.
However, it may be overkill for your specific project, and some performance issues may arise.
PS. Oh, and by the way, take a look at MediaMuxer - maybe it can help you too: developer.android.com/reference/android/media/MediaMuxer.html
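A minimal sketch of the encode-and-write path described above; the MIME type, resolution, bitrate, and color format here are illustrative assumptions, not values from the question:
// Configure an H.264 encoder for camera preview frames.
MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
MediaFormat format = MediaFormat.createVideoFormat("video/avc", 640, 480);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar);
format.setInteger(MediaFormat.KEY_BIT_RATE, 2000000);
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
encoder.start();
// Feed preview frames (converted to the chosen color format) into the input
// buffers only while recording is active - skipping frames implements pause -
// then drain the output buffers and write them to the file via MediaMuxer
// or a FileOutputStream.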

How to get CamcorderProfile.videoBitRate for an Android device?

My app uses HLS to stream video from a server, but when I request the HLS stream from the server I need to pass it the maximum video bitrate the device can handle. The Android API guides say that "a device's available video recording profiles can be used as a proxy for media playback capabilities," but when I try to retrieve the videoBitRate for the device's back-facing camera it always comes back as 12Mb/s regardless of the device (Galaxy Nexus, Galaxy Tab Plus 7", Galaxy Tab 8.9), despite the fact that they have 3 different GPUs (PowerVR SGX540, Mali-400 MP, Tegra 250 T20). Here's my code; am I doing something wrong?
CamcorderProfile camcorderProfile = CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH);
targetVideoBitRate = camcorderProfile.videoBitRate;
If I try this on the Galaxy Tab Plus:
boolean hasProfile = CamcorderProfile.hasProfile(CamcorderProfile.QUALITY_HIGH);
it returns true, despite the fact that QUALITY_HIGH is for 1080p recording and the specs say the device can only record at 720p.
It looks like I've found the answer to my own question.
I didn't read the documentation closely enough: QUALITY_HIGH is not equivalent to 1080p; it is simply a way of specifying the highest-quality profile the device supports. Therefore, by definition, CamcorderProfile.hasProfile(CamcorderProfile.QUALITY_HIGH) is always true. I should have written something like this:
public enum mVideoQuality {
    FullHD, HD, SD
}

mVideoQuality mMaxVideoQuality;
int mTargetVideoBitRate;

private void initVideoQuality() {
    if (CamcorderProfile.hasProfile(CamcorderProfile.QUALITY_1080P)) {
        mMaxVideoQuality = mVideoQuality.FullHD;
    } else if (CamcorderProfile.hasProfile(CamcorderProfile.QUALITY_720P)) {
        mMaxVideoQuality = mVideoQuality.HD;
    } else {
        mMaxVideoQuality = mVideoQuality.SD;
    }
    CamcorderProfile cProfile = CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH);
    mTargetVideoBitRate = cProfile.videoBitRate;
}
Most of my devices still report support for 1080p encoding, which I'm skeptical of; however, I ran this code on a Sony Xperia Tipo (my low-end test device) and it reported a max encode quality of 480p with a videoBitRate of 720Kb/s.
As I said, I'm not sure every device can be trusted, but I have seen video bitrates ranging from 720Kb/s to 17Mb/s and profile qualities from 480p to 1080p. Hopefully other people will find this information useful.
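As a hedged aside for readers on newer devices: since Android 12 (API 31), CamcorderProfile.getAll() returns an EncoderProfiles object with per-codec video profiles, which gives a more detailed bitrate picture than the legacy videoBitRate field. The camera id "0" below is an assumption for the back camera:
// API 31+: enumerate the detailed video profiles for the highest quality level.
EncoderProfiles profiles = CamcorderProfile.getAll("0", CamcorderProfile.QUALITY_HIGH);
if (profiles != null) {
    for (EncoderProfiles.VideoProfile vp : profiles.getVideoProfiles()) {
        Log.d("Profiles", vp.getWidth() + "x" + vp.getHeight() + " @ " + vp.getBitrate() + " bps");
    }
}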

How to avoid automatic gain control with AudioRecord?

How can I make audio recordings using android.media.AudioRecord without any smartphone-manufacturer-dependent signal processing like automatic gain control (AGC), equalization, noise suppression, or echo cancellation: just the pure microphone signal?
Background
MediaRecorder.AudioSource provides nine constants,
DEFAULT and MIC initially being there,
VOICE_UPLINK, VOICE_DOWNLINK, and VOICE_CALL added in API level 4,
CAMCORDER and VOICE_RECOGNITION added in API 7,
VOICE_COMMUNICATION added in API 11,
REMOTE_SUBMIX added in API 19 but not available to third-party applications.
But none of them does a clean job across all smartphones. Rather, it seems I have to find out for myself which device uses which combination of signal processing blocks for which MediaRecorder.AudioSource constant.
It would be nice to have a tenth constant like PURE_MIC added in API level 20.
But as long as this is not available, what can I do instead?
The short answer is "nothing".
The AudioSources correspond to various logical audio input devices depending on the accessories that you have connected to the phone and the current use-case, which in turn corresponds to physical devices (primary built-in mic, secondary mic, wired headset mic, etc) with different tunings.
Each such combination of physical device and tuning is trimmed by the OEM to meet both external requirements (e.g. CTS, operator requirements, etc) and internal acoustic requirements set by the OEM itself. This process may cause the introduction of various filters - such as AGC, noise suppression, equalization, etc - into the audio input path at the hardware codec or multimedia DSP level.
While a PURE_MIC source might be useful for some applications, it's not something that's available today.
On many devices you can control things like microphone gain, and possibly even the filter chain, by using amixer to write to the hardware codec's ALSA controls. However, this would obviously be a very platform-specific approach, and I also suspect that you have to be running as either the root or audio user to be allowed to do this.
Some devices add an AGC effect to the sound input path by default. Therefore, you need to obtain a reference to the corresponding AudioEffect object and disable it.
First, obtain the AutomaticGainControl object linked to the AudioRecord audio session, and then set it to disabled (create() can return null, so check it):
if (AutomaticGainControl.isAvailable()) {
    AutomaticGainControl agc = AutomaticGainControl.create(myAudioRecord.getAudioSessionId());
    if (agc != null) {
        agc.setEnabled(false);
    }
}
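The question also mentions noise suppression and echo cancellation; the same pattern should apply to those effects via android.media.audiofx.NoiseSuppressor and AcousticEchoCanceler (a hedged sketch mirroring the AGC snippet above):
int sessionId = myAudioRecord.getAudioSessionId();
if (NoiseSuppressor.isAvailable()) {
    NoiseSuppressor ns = NoiseSuppressor.create(sessionId);
    if (ns != null) ns.setEnabled(false);
}
if (AcousticEchoCanceler.isAvailable()) {
    AcousticEchoCanceler aec = AcousticEchoCanceler.create(sessionId);
    if (aec != null) aec.setEnabled(false);
}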
Note: most of the audio sources (including DEFAULT) apply processing to the audio signal. To record raw audio, select UNPROCESSED. Some devices do not support unprocessed input: call AudioManager.getProperty(AudioManager.PROPERTY_SUPPORT_AUDIO_SOURCE_UNPROCESSED) first to verify it's available. If it is not, try using VOICE_RECOGNITION instead, which does not employ AGC or noise suppression. You can use UNPROCESSED as an audio source even when the property is not supported, but there is no guarantee whether the signal will be unprocessed or not in that case.
Android documentation link: https://developer.android.com/guide/topics/media/mediarecorder.html#example
AudioManager audioManager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
if ("true".equals(audioManager.getProperty(AudioManager.PROPERTY_SUPPORT_AUDIO_SOURCE_UNPROCESSED))) {
    mRecorder.setAudioSource(MediaRecorder.AudioSource.UNPROCESSED);
} else {
    mRecorder.setAudioSource(MediaRecorder.AudioSource.VOICE_RECOGNITION);
}
MIC should be fine, and for the rest you need to know if they are supported.
I've made a class for this:
enum class AudioSource(val audioSourceValue: Int, val minApi: Int) {
    VOICE_CALL(MediaRecorder.AudioSource.VOICE_CALL, 4),
    DEFAULT(MediaRecorder.AudioSource.DEFAULT, 1),
    MIC(MediaRecorder.AudioSource.MIC, 1),
    VOICE_COMMUNICATION(MediaRecorder.AudioSource.VOICE_COMMUNICATION, 11),
    CAMCORDER(MediaRecorder.AudioSource.CAMCORDER, 7),
    VOICE_RECOGNITION(MediaRecorder.AudioSource.VOICE_RECOGNITION, 7),
    VOICE_UPLINK(MediaRecorder.AudioSource.VOICE_UPLINK, 4),
    VOICE_DOWNLINK(MediaRecorder.AudioSource.VOICE_DOWNLINK, 4),
    @TargetApi(Build.VERSION_CODES.KITKAT)
    REMOTE_SUBMIX(MediaRecorder.AudioSource.REMOTE_SUBMIX, 19),
    @TargetApi(Build.VERSION_CODES.N)
    UNPROCESSED(MediaRecorder.AudioSource.UNPROCESSED, 24);

    fun isSupported(context: Context): Boolean =
        when {
            Build.VERSION.SDK_INT < minApi -> false
            this != UNPROCESSED -> true
            else -> {
                val audioManager: AudioManager = context.getSystemService(Context.AUDIO_SERVICE) as AudioManager
                Build.VERSION.SDK_INT >= Build.VERSION_CODES.N &&
                        "true" == audioManager.getProperty(AudioManager.PROPERTY_SUPPORT_AUDIO_SOURCE_UNPROCESSED)
            }
        }

    companion object {
        fun getAllSupportedValues(context: Context): ArrayList<AudioSource> {
            val values = AudioSource.values()
            val result = ArrayList<AudioSource>(values.size)
            for (value in values)
                if (value.isSupported(context))
                    result.add(value)
            return result
        }
    }
}
