Get video resolution in ExoPlayer - Android

How can I get the video resolution from a video URL according to the network speed, the way YouTube automatically switches between qualities like 128p and 360p, using ExoPlayer?

The core pieces you'll want to look into are track selection (via the TrackSelector) and the TrackSelectionHelper. I'll include the important code samples below, which will hopefully be enough to get you going, but ultimately just following something similar in the demo app will get you where you need to be.
You'll hold onto the track selector you initialize the player with and use that for just about everything.
Below is a block of code that should cover the gist of what you're trying to do, since the demo does appear to over-complicate things a hair. Also, I haven't run the code, but it's close enough.
// These two could be fields OR passed around
int videoRendererIndex;
TrackGroupArray trackGroups;

// This is the body of the logic to see if there are even any video tracks.
// It also does some field setting (assume it lives in a helper returning boolean).
MappedTrackInfo mappedTrackInfo = trackSelector.getCurrentMappedTrackInfo();
for (int i = 0; i < mappedTrackInfo.length; i++) {
    trackGroups = mappedTrackInfo.getTrackGroups(i);
    if (trackGroups.length != 0) {
        switch (player.getRendererType(i)) {
            case C.TRACK_TYPE_VIDEO:
                videoRendererIndex = i;
                return true;
        }
    }
}
// This next part is actually about getting the list. It doesn't include
// some additional logic they put in for adaptive tracks (DASH/HLS/SS),
// but you can look at the sample for that (TrackSelectionHelper#buildView()).
// Below you'd be building up items in a list. This does views directly,
// but you could just keep a list of track names (with indexes).
for (int groupIndex = 0; groupIndex < trackGroups.length; groupIndex++) {
    TrackGroup group = trackGroups.get(groupIndex);
    for (int trackIndex = 0; trackIndex < group.length; trackIndex++) {
        if (trackIndex == 0) {
            // Beginning of a new set; the demo app adds a divider here
        }
        CheckedTextView trackView = ...; // The TextView to show in the list
        // The below points to a util which extracts the quality from the TrackGroup
        trackView.setText(DemoUtil.buildTrackName(group.getFormat(trackIndex)));
    }
}
// Assuming you tagged the view with the groupIndex and trackIndex, you
// can build your override with that info.
Pair<Integer, Integer> tag = (Pair<Integer, Integer>) view.getTag();
int groupIndex = tag.first;
int trackIndex = tag.second;

// This is the override you'd use for something that isn't adaptive.
override = new SelectionOverride(FIXED_FACTORY, groupIndex, trackIndex);

// Otherwise they call their helper for adaptive tracks, which roughly does:
int[] tracks = getTracksAdding(override, trackIndex);
TrackSelection.Factory factory = tracks.length == 1 ? FIXED_FACTORY : adaptiveTrackSelectionFactory;
override = new SelectionOverride(factory, groupIndex, tracks);

// Then we actually set our override on the selector to switch the quality/track
selector.setSelectionOverride(videoRendererIndex, trackGroups, override);
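For the original question (automatic quality based on network speed, like YouTube), you mostly don't need overrides at all: give the selector an adaptive factory backed by a bandwidth meter and it switches renditions itself. A minimal sketch, assuming an ExoPlayer 2.x-era API (these class and factory names have shifted between ExoPlayer versions, so treat this as illustrative):

DefaultBandwidthMeter bandwidthMeter = new DefaultBandwidthMeter();
TrackSelection.Factory adaptiveFactory =
        new AdaptiveTrackSelection.Factory(bandwidthMeter);
DefaultTrackSelector trackSelector = new DefaultTrackSelector(adaptiveFactory);
SimpleExoPlayer player =
        ExoPlayerFactory.newSimpleInstance(context, trackSelector);
// With an adaptive media source (DASH/HLS/SmoothStreaming), the selector
// now picks the video quality automatically from the measured bandwidth.

The manual override path above is only needed when you want the user to pin a specific quality.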

Related

Cursor is looping but repeats only the first value

I created a loop to get the data from my cursor; however, I noticed that even though it is looping (with the correct count), it only shows the first value.
int vv = 0;
if ((CR3.moveToFirst()) || CR3.getCount() != 0) {
    while (CR3.isAfterLast() == false) {
        vendoName[vv] = CR3.getString(0);
        vendoEsch[vv] = CR3.getString(1);
        vendoAsch[vv] = CR3.getString(2);
        vendoTag[vv] = CR3.getString(3);
        vv++;
        CR3.moveToNext();
    }
}
And when I fetch all my data (I only need the first three records):
ArrayList<SearchResults2> results2 = new ArrayList<SearchResults2>();
SearchResults2 sr2 = new SearchResults2();
for (int j = 0; j < 3; j++) {
    sr2.setName(vendoName[j]);
    sr2.setEsch(vendoEsch[j]);
    sr2.setAsch(vendoAsch[j]);
    sr2.setTag(vendoTag[j]);
    results2.add(sr2);
}
I am putting this inside a ListView, and when I check, it always shows the first piece of data.
This is an example I used as a reference for my code (it's almost the same, except that I used an array to hold my data):
http://geekswithblogs.net/bosuch/archive/2011/01/31/android---create-a-custom-multi-line-listview-bound-to-an.aspx
Am I doing something wrong that makes it only get the first piece of data?
Isn't it easier to do something like this (if you don't need more than 3 results)?
ArrayList<SearchResults2> results2 = new ArrayList<SearchResults2>();
CR3.moveToFirst();
for (int i = 0; i < 3; i++) {
    SearchResults2 sr2 = new SearchResults2();
    sr2.setName(CR3.getString(0));
    sr2.setEsch(CR3.getString(1));
    sr2.setAsch(CR3.getString(2));
    sr2.setTag(CR3.getString(3));
    results2.add(sr2);
    CR3.moveToNext();
}
I think the real problem in your second snippet is that you create a single SearchResults2 instance outside the loop and add the same object to the list three times; each setter call overwrites the previous values, so all three rows of your ListView end up showing the same data. Creating a new instance inside the loop, as above, fixes that.
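If it helps, here is the same idea as a sketch using the common moveToFirst()/do-while cursor pattern (assuming the same CR3 cursor and SearchResults2 class as above), stopping after the three rows you need:

ArrayList<SearchResults2> results2 = new ArrayList<SearchResults2>();
if (CR3.moveToFirst()) {
    do {
        // A fresh object per row, so each list entry keeps its own values
        SearchResults2 sr2 = new SearchResults2();
        sr2.setName(CR3.getString(0));
        sr2.setEsch(CR3.getString(1));
        sr2.setAsch(CR3.getString(2));
        sr2.setTag(CR3.getString(3));
        results2.add(sr2);
    } while (CR3.moveToNext() && results2.size() < 3);
}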

Views not removing as expected

So this function is intended to remove a Unit object, and its corresponding view, from a list of views. It then checks the rest of the Units, and if the previously removed one was a prerequisite for another Unit, it recurses and removes that unit too, and so on.
The Units are removed correctly from storage and displayed back in the pending display, but not all of the removed units' views are removed from "temp".
Edit: The function works correctly when there is at most one unit in each semester, but leaves things behind otherwise.
Any insight you could offer into why this is occurring would be much appreciated.
void removeLinkedUnits(Unit inUnit)
{
    int sem;
    for (sem = semesters.size() - 1; sem >= 0; sem--) // iterate through each semester containing units
    {
        int unit;
        for (unit = semesters.get(sem).getUnits().size() - 1; unit >= 0; unit--) // iterate through each unit in a semester
        {
            String[] pres = semesters.get(sem).getUnit(unit).getPrerequisites();
            int i;
            boolean toRemove = false;
            for (i = 0; i < pres.length; i++) // compare list of prerequisites against removed unit
            {
                if (pres[i].contains(inUnit.getUnitID()))
                {
                    toRemove = true;
                }
            }
            if (semesters.get(sem).getUnits().get(unit).getCorequisites().contains(inUnit.getUnitID()))
            {
                toRemove = true;
            }
            if (toRemove) // unit relies on previously removed unit
            {
                Unit unitx = semesters.get(sem).getUnit(unit);
                semesters.get(sem).remove(unitx);
                LinearLayout temp = vertUnitLayouts.get(sem);
                temp.removeViewAt(unit);
                scheduledUnits.remove(unitx.getUnitID());
                removeLinkedUnits(unitx);
                redrawPendingSpinners();
                pendingUnits.add(unitx);
                LinearLayout pendingLinear = (LinearLayout) findViewById(R.id.pendingLinear);
                pendingLinear.addView(makePendingView(unitx));
            }
        }
    }
}
Figured out the problem.
temp (vertUnitLayouts) contained 2 items per Unit, not the 1 I was expecting; I had misunderstood how my partner was constructing his views.
I need to use the debugger more. :S
Replaced:
temp.removeViewAt(unit);
With:
temp.removeViewAt(unit*2);
temp.removeViewAt(unit*2);
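As a general pattern, if each Unit contributes a fixed number of consecutive child views, a small helper keeps the index math in one place. A minimal sketch (removeUnitViews and viewsPerUnit are hypothetical names, not from the original code):

void removeUnitViews(LinearLayout layout, int unitIndex, int viewsPerUnit) {
    int start = unitIndex * viewsPerUnit;
    // Removing at the same index repeatedly works because the remaining
    // children shift left after each removal.
    for (int k = 0; k < viewsPerUnit; k++) {
        layout.removeViewAt(start);
    }
}

The fix above (calling removeViewAt(unit*2) twice) is just the viewsPerUnit == 2 case of this.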

Implementing a multiple input filter graph with the Libavfilter library in Android NDK

I am trying to use the overlay filter with multiple input sources, for an Android app. Basically, I want to overlay multiple video sources on top of a static image.
I have looked at the sample that comes with ffmpeg and implemented my code based on that, but things don't seem to be working as expected.
In the ffmpeg filtering sample there seems to be a single video input. I have to handle multiple video inputs, and I am not sure that my solution is correct. I have tried to find other examples, but it looks like this is the only one.
Here is my code:
AVFilterContext **inputContexts;
AVFilterContext *outputContext;
AVFilterGraph *graph;

int initFilters(AVFrame *bgFrame, int inputCount, AVCodecContext **codecContexts, char *filters)
{
    int i;
    int returnCode;
    char args[512];
    char name[9];
    AVFilterInOut **graphInputs = NULL;
    AVFilterInOut *graphOutput = NULL;
    AVFilter *bufferSrc = avfilter_get_by_name("buffer");
    AVFilter *bufferSink = avfilter_get_by_name("buffersink");

    graph = avfilter_graph_alloc();
    if(graph == NULL)
        return -1;

    //allocate inputs
    graphInputs = av_calloc(inputCount + 1, sizeof(AVFilterInOut *));
    for(i = 0; i <= inputCount; i++)
    {
        graphInputs[i] = avfilter_inout_alloc();
        if(graphInputs[i] == NULL)
            return -1;
    }

    //allocate input contexts
    inputContexts = av_calloc(inputCount + 1, sizeof(AVFilterContext *));

    //first is the background
    snprintf(args, sizeof(args), "video_size=%dx%d:pix_fmt=%d:time_base=1/1:pixel_aspect=0", bgFrame->width, bgFrame->height, bgFrame->format);
    returnCode = avfilter_graph_create_filter(&inputContexts[0], bufferSrc, "background", args, NULL, graph);
    if(returnCode < 0)
        return returnCode;

    graphInputs[0]->filter_ctx = inputContexts[0];
    graphInputs[0]->name = av_strdup("background");
    graphInputs[0]->next = graphInputs[1];

    //allocate the rest
    for(i = 1; i <= inputCount; i++)
    {
        AVCodecContext *codecCtx = codecContexts[i - 1];
        snprintf(args, sizeof(args), "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
                 codecCtx->width, codecCtx->height, codecCtx->pix_fmt,
                 codecCtx->time_base.num, codecCtx->time_base.den,
                 codecCtx->sample_aspect_ratio.num, codecCtx->sample_aspect_ratio.den);
        snprintf(name, sizeof(name), "video_%d", i);

        returnCode = avfilter_graph_create_filter(&inputContexts[i], bufferSrc, name, args, NULL, graph);
        if(returnCode < 0)
            return returnCode;

        graphInputs[i]->filter_ctx = inputContexts[i];
        graphInputs[i]->name = av_strdup(name);
        graphInputs[i]->pad_idx = 0;
        if(i < inputCount)
        {
            graphInputs[i]->next = graphInputs[i + 1];
        }
        else
        {
            graphInputs[i]->next = NULL;
        }
    }

    //allocate outputs
    graphOutput = avfilter_inout_alloc();

    returnCode = avfilter_graph_create_filter(&outputContext, bufferSink, "out", NULL, NULL, graph);
    if(returnCode < 0)
        return returnCode;

    graphOutput->filter_ctx = outputContext;
    graphOutput->name = av_strdup("out");
    graphOutput->next = NULL;
    graphOutput->pad_idx = 0;

    returnCode = avfilter_graph_parse_ptr(graph, filters, graphInputs, &graphOutput, NULL);
    if(returnCode < 0)
        return returnCode;

    returnCode = avfilter_graph_config(graph, NULL);
    return returnCode;
}
The filters argument of the function is passed on to avfilter_graph_parse_ptr, and it can look like this: [background] scale=512x512 [base]; [video_1] scale=256x256 [tmp_1]; [base][tmp_1] overlay=0:0 [out]
The call breaks at avfilter_graph_config with the warning Output pad "default" with type video of the filter instance "background" of buffer not connected to any destination, and the error Invalid argument.
What is it that I am not doing correctly?
EDIT: There are two issues that I have discovered:
The description of avfilter_graph_parse_ptr seems a bit vague. The outputs parameter represents a list of the current outputs of the graph; in my case that is the graphInputs variable, because these are the outputs from the buffer filters. The inputs parameter represents a list of the current inputs of the graph; in this case it is the graphOutput variable, because it represents the input to the buffersink filter.
I did some testing with a scale filter and a single input. It seems that the name of the AVFilterInOut structure required by avfilter_graph_parse_ptr needs to be "in". I have tried different variants: in_1, in_link_1. None of them work, and I have not been able to find any documentation about this.
So the issue still remains. How do I implement a filter graph with multiple inputs?
I have found a simple solution to the problem.
It involves replacing avfilter_graph_parse_ptr with avfilter_graph_parse2 and adding the buffer and buffersink filters to the filters parameter of avfilter_graph_parse2.
So, in the simple case where you have one background image and one input video the value of the filters parameter should look like this:
buffer=video_size=1024x768:pix_fmt=2:time_base=1/25:pixel_aspect=3937/3937 [in_1]; buffer=video_size=1920x1080:pix_fmt=0:time_base=1/180000:pixel_aspect=0/1 [in_2]; [in_1] [in_2] overlay=0:0 [result]; [result] buffersink
The avfilter_graph_parse2 will make all the graph connections and initialize all the filters. The filter contexts for the input buffers and for the output buffer can be retrieved from the graph itself at the end. These are used to add/get frames from the filter graph.
A simplified version of the code looks like this:
AVFilterContext **inputContexts;
AVFilterContext *outputContext;
AVFilterGraph *graph;

int initFilters(AVFrame *bgFrame, int inputCount, AVCodecContext **codecContexts)
{
    int i;
    int returnCode;
    char filters[1024];
    AVFilterInOut *gis = NULL;
    AVFilterInOut *gos = NULL;

    graph = avfilter_graph_alloc();
    if(graph == NULL)
    {
        printf("Cannot allocate filter graph.");
        return -1;
    }

    //build the filters string here
    // ...

    returnCode = avfilter_graph_parse2(graph, filters, &gis, &gos);
    if(returnCode < 0)
    {
        cs_printAVError("Cannot parse graph.", returnCode);
        return returnCode;
    }

    returnCode = avfilter_graph_config(graph, NULL);
    if(returnCode < 0)
    {
        cs_printAVError("Cannot configure graph.", returnCode);
        return returnCode;
    }

    //get the filter contexts from the graph here

    return 0;
}
I can't add a comment, so I would just like to add that you can fix Output pad "default" with type video of the filter instance "background" of buffer not connected to any destination by not having a sink at all; the filter graph will automatically create the sink for you. So you are adding too many pads.
For my case I had a transformation like this:
[0:v]pad=1008:734:144:0:black[pad];[pad][1:v]overlay=0:576[out]
If you try ffmpeg from command line, it will work:
ffmpeg -i first.mp4 -i second.mp4 -filter_complex "[0:v]pad=1008:734:144:0:black[pad];[pad][1:v]overlay=0:576[out]" -map "[out]" -map 0:a output.mp4
Basically, it increases the overall size of the first video, then overlays the second one on top. After a long struggle with the same problems as in this thread, I got it working. The video filtering example from the FFmpeg documentation (https://ffmpeg.org/doxygen/2.1/doc_2examples_2filtering_video_8c-example.html) works fine, and after digging into it, this went fine:
filterGraph = avfilter_graph_alloc();
NULLC(filterGraph);

bufferSink = avfilter_get_by_name("buffersink");
NULLC(bufferSink);
filterInput = avfilter_inout_alloc();
AVBufferSinkParams* buffersinkParams = av_buffersink_params_alloc();
buffersinkParams->pixel_fmts = pixelFormats;
FFMPEGHRC(avfilter_graph_create_filter(&bufferSinkContext, bufferSink, "out", NULL, buffersinkParams, filterGraph));
av_free(buffersinkParams);
filterInput->name = av_strdup("out");
filterInput->filter_ctx = bufferSinkContext;
filterInput->pad_idx = 0;
filterInput->next = NULL;

filterOutputs = new AVFilterInOut*[inputFiles.size()];
ZeroMemory(filterOutputs, sizeof(AVFilterInOut*) * inputFiles.size());
bufferSourceContext = new AVFilterContext*[inputFiles.size()];
ZeroMemory(bufferSourceContext, sizeof(AVFilterContext*) * inputFiles.size());

for (i = inputFiles.size() - 1; i >= 0; i--)
{
    snprintf(args, sizeof(args), "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
             videoCodecContext[i]->width, videoCodecContext[i]->height, videoCodecContext[i]->pix_fmt,
             videoCodecContext[i]->time_base.num, videoCodecContext[i]->time_base.den,
             videoCodecContext[i]->sample_aspect_ratio.num, videoCodecContext[i]->sample_aspect_ratio.den);

    filterOutputs[i] = avfilter_inout_alloc();
    NULLC(filterOutputs[i]);
    bufferSource = avfilter_get_by_name("buffer");
    NULLC(bufferSource);
    sprintf(args2, outputTemplate, i);
    FFMPEGHRC(avfilter_graph_create_filter(&bufferSourceContext[i], bufferSource, "in", args, NULL, filterGraph));
    filterOutputs[i]->name = av_strdup(args2);
    filterOutputs[i]->filter_ctx = bufferSourceContext[i];
    filterOutputs[i]->pad_idx = 0;
    filterOutputs[i]->next = i < inputFiles.size() - 1 ? filterOutputs[i + 1] : NULL;
}

FFMPEGHRC(avfilter_graph_parse_ptr(filterGraph, description, &filterInput, filterOutputs, NULL));
FFMPEGHRC(avfilter_graph_config(filterGraph, NULL));
The variable types are the same as in the example above; args and args2 are char[512], and outputTemplate is "%d:v", basically the input video IDs from the filtering expression. A couple of things to watch out for:
The video information in args needs to be correct; time_base and sample_aspect_ratio are copied from the video stream of the format context.
Indeed, the inputs parameter is, from our perspective, the outputs, and the other way around.
The name of the filter is "in" for all of our input filters (filterOutputs).

Android: Sorting mp3 files by title in a clickable listview

I am attempting to create a music player app for a Nexus 7 tablet. I am able to retrieve music files from a specific directory, as well as the information associated with them such as title, artist, etc. I have loaded the files' titles into a clickable ListView. When the user clicks on a title, it takes them to an activity that plays the associated song. I was attempting to sort the titles alphabetically and ran into a snag. When I sort just the titles, they no longer match the correct songs. This is to be expected, since I only ordered the titles and not the actual files. I attempted to modify the sorting algorithm by doing this:
//will probably only work for Nexus 7
private final static File fileList = new File("/storage/emulated/0/Music/");
private final static File fileNames[] = fileList.listFiles(); //get list of files

public static void sortFiles()
{
    int j;
    boolean flag = true;
    File temp;
    MediaMetadataRetriever titleMMR = new MediaMetadataRetriever();
    MediaMetadataRetriever titleMMR2 = new MediaMetadataRetriever();
    while(flag)
    {
        flag = false;
        for(j = 0; j < fileNames.length - 1; j++)
        {
            titleMMR.setDataSource(fileNames[j].toString());
            titleMMR2.setDataSource(fileNames[j+1].toString());
            if(titleMMR.extractMetadata(MediaMetadataRetriever.METADATA_KEY_TITLE).compareToIgnoreCase(titleMMR2.extractMetadata(MediaMetadataRetriever.METADATA_KEY_TITLE)) > 0)
            {
                temp = fileNames[j];
                fileNames[j] = fileNames[j+1]; // swapping
                fileNames[j+1] = temp;
                flag = true;
            } //end if
        } //end for
    } //end while
}
This is supposed to retrieve the song titles from two files, compare them, and swap the files in the File array if the first comes after the second alphabetically. For some reason, when I run the activity that calls this method, the app crashes. If I remove the + 1 from titleMMR2's data source
titleMMR2.setDataSource(fileNames[j].toString());
the app no longer crashes, but the list is not in order. Again, this is understandable, since it compares the song titles to themselves. I don't know why the + 1 would make the program crash; it is not an array-out-of-bounds error. There are a total of 6 .mp3 files in the directory, and they are the only files in that directory. I have also tried using Arrays.sort(fileNames), but that only orders them by their file names and not song titles. I also tried this:
Arrays.sort(fileNames, new Comparator<File>() {
    public int compare(File f1, File f2)
    {
        titleMMR.setDataSource(f1.toString());
        titleMMR2.setDataSource(f2.toString());
        return titleMMR.extractMetadata(MediaMetadataRetriever.METADATA_KEY_TITLE).compareToIgnoreCase(titleMMR2.extractMetadata(MediaMetadataRetriever.METADATA_KEY_TITLE));
    }
});
That snippet also resulted in a crash. There are no errors in the Java code, and all appropriate classes have been imported. I'm really at a loss as to what is wrong. Any help will be appreciated, and if any new info is needed I will gladly provide it. Thanks in advance.
FIXED
The correct code snippet is this:
public static void sortFiles()
{
    int j;
    boolean flag = true;
    File temp;
    MediaMetadataRetriever titleMMR = new MediaMetadataRetriever();
    MediaMetadataRetriever titleMMR2 = new MediaMetadataRetriever();
    while(flag)
    {
        flag = false;
        for(j = 0; j < fileNames.length - 1; j++)
        {
            titleMMR.setDataSource(fileNames[j].toString());
            titleMMR2.setDataSource(fileNames[j+1].toString());
            String title1;
            String title2;
            // Fall back to the file name when a file has no title metadata
            if(titleMMR.extractMetadata(MediaMetadataRetriever.METADATA_KEY_TITLE) == null)
                title1 = fileNames[j].getName();
            else
                title1 = titleMMR.extractMetadata(MediaMetadataRetriever.METADATA_KEY_TITLE);
            if(titleMMR2.extractMetadata(MediaMetadataRetriever.METADATA_KEY_TITLE) == null)
                title2 = fileNames[j+1].getName();
            else
                title2 = titleMMR2.extractMetadata(MediaMetadataRetriever.METADATA_KEY_TITLE);
            if(title1.compareToIgnoreCase(title2) > 0)
            {
                temp = fileNames[j];
                fileNames[j] = fileNames[j+1]; // swapping
                fileNames[j+1] = temp;
                flag = true;
            } //end if
        } //end for
    } //end while
}
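For what it's worth, the same null-safe comparison can be handed to Arrays.sort as a Comparator, which avoids hand-rolling a bubble sort. A minimal sketch (titleOf is a hypothetical helper; this assumes, as the fix above implies, that the earlier crash came from calling compareToIgnoreCase on a null title):

Arrays.sort(fileNames, new Comparator<File>() {
    public int compare(File f1, File f2)
    {
        return titleOf(f1).compareToIgnoreCase(titleOf(f2));
    }

    // Read the title tag, falling back to the file name when it is null
    private String titleOf(File f)
    {
        MediaMetadataRetriever mmr = new MediaMetadataRetriever();
        mmr.setDataSource(f.toString());
        String title = mmr.extractMetadata(MediaMetadataRetriever.METADATA_KEY_TITLE);
        mmr.release();
        return title != null ? title : f.getName();
    }
});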

How to load a 3d object in Android?

I have the .obj file and the .mtl file but don't know how to import them. I have tried the tutorials, but they aren't working.
One way to load an object from an .obj file is to first get an InputStream for the file, then go through it line by line, since the format is line-oriented. The part where you then use the data from the .obj file to actually draw it is something you can easily find a bunch of tutorials on online, for example: http://www.droidnova.com/android-3d-game-tutorial-part-i,312.html. If you want to get deeper into OpenGL and actually learn a lot, you can use NeHe the Almighty's online tutorials, which cover a great deal and go quite deep into OpenGL programming: http://nehe.gamedev.net/.
Anyway, the .obj file loading could start off with something like this:
while ((line = reader.readLine()) != null)
{
    if (line.startsWith("f"))
    {
        faces++;
        processFLine(line);
    }
    else if (line.startsWith("vn"))
    {
        normals++;
        processVNLine(line);
    }
    else if (line.startsWith("vt"))
    {
        UVCoords++;
        processVTLine(line);
    }
    else if (line.startsWith("v"))
    {
        vertices++;
        processVLine(line);
    }
}
For everything except the lines starting with 'f' it's quite straightforward: you just store those two or three values in something; I preferred a Vector:
private void processVNLine(String line)
{
    String[] tokens = line.split("[ ]+");
    int c = tokens.length;
    for (int i = 1; i < c; i++)
    { // add the normals to the normal vector
        _vn.add(Float.valueOf(tokens[i]));
    }
}
For the faces part (where the line starts with 'f'), you would preferably first check how many '/' each vertex consists of:
private void processFLine(String line)
{
    String[] tokens = line.split("[ ]+");
    int c = tokens.length;
    if (tokens[1].matches("[0-9]+"))
    {
        caseFEqOne(tokens, c);
    }
    if (tokens[1].matches("[0-9]+/[0-9]+"))
    {
        caseFEqTwo(tokens, c);
    }
    if (tokens[1].matches("[0-9]+//[0-9]+"))
    {
        caseFEqOneAndThree(tokens, c);
    }
    if (tokens[1].matches("[0-9]+/[0-9]+/[0-9]+"))
    {
        caseFEqThree(tokens, c);
    }
}
Each face is built up from three vertices, where each vertex's indices are v/vt/vn.
This is what could happen if you have v/vt; it stores the indices in separate vectors.
private void caseFEqTwo(String[] tokens, int c)
{
    for (int i = 1; i < c; i++)
    {
        Short s = Short.valueOf(tokens[i].split("/")[0]);
        s--;
        _vPointer.add(s);
        s = Short.valueOf(tokens[i].split("/")[1]);
        s--;
        _vtPointer.add(s);
    }
}
Now you have handled this by adding the indices into separate Vectors or arrays.
At this point you should have vectors of coordinates and vectors of indices for each type (v, vt or vn), as well as the number of faces.
Knowing the number of faces, you can now start a loop that goes from 0 to the number of faces.
If you create new vectors for the final result, you can in this loop copy the coordinates referenced by each index into the result vectors.
This may need some explanation, so here is a simple example:
private void reArrange()
{
    Iterator<Short> i;
    short s;

    i = _vPointer.iterator();
    while (i.hasNext())
    {
        s = (short) (i.next() * 3);
        for (int k = 0; k < 3; k++)
        {
            _vResult.add(_v.get(s + k));
        }
    }

    i = _vnPointer.iterator();
    while (i.hasNext())
    {
        s = (short) (i.next() * 3);
        for (int k = 0; k < 3; k++)
        {
            _vnResult.add(_vn.get(s + k));
        }
    }

    i = _vtPointer.iterator();
    while (i.hasNext())
    {
        s = (short) (i.next() * 2);
        for (int k = 0; k < 2; k++)
        {
            _vtResult.add(1f - _vt.get(s + k));
        }
    }

    _indices = new short[faces * 3];
    for (short k = 0; k < faces * 3; k++)
    {
        _indices[k] = k;
    }
}
Now you're actually done and have the information you need from the .obj file. So by converting those vectors to FloatBuffers, and by creating a ShortBuffer that is as simple as 0, 1, 2, 3, 4, 5 ... and so on, you can use those with OpenGL ES to draw your object.
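For reference, the FloatBuffer conversion mentioned above usually looks something like this. A minimal sketch (toFloatBuffer is a hypothetical helper name; it assumes a Vector<Float> like _vResult and the java.nio.ByteBuffer/ByteOrder/FloatBuffer and java.util.Vector imports):

private FloatBuffer toFloatBuffer(Vector<Float> data)
{
    // OpenGL ES wants a direct, native-ordered buffer; 4 bytes per float
    ByteBuffer bb = ByteBuffer.allocateDirect(data.size() * 4);
    bb.order(ByteOrder.nativeOrder());
    FloatBuffer fb = bb.asFloatBuffer();
    for (Float f : data)
    {
        fb.put(f);
    }
    fb.position(0);
    return fb;
}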
(This is not completely written by me; I used a basic concept I found somewhere online, which I can't find now, and then made it fit nicely to what I wanted and adjusted it a bit to be easier to understand.)
What this doesn't cover is the part where you also load an .mtl file, which you will often get as well when exporting an object from a program. If you want to implement that too, you could have a look at this: http://people.sc.fsu.edu/~jburkardt/data/mtl/mtl.html
This can be good to have if you're building an object in a program and want a more exact replica, since it gives you values for how light should be handled by the materials as well.
If you have any questions, feel free to ask them.
(I know that this might not be the optimal way of loading an .obj file, since we use up to 9 different vectors, but if you use serialization as well this might actually get quite fast.)
OpenGL is not a scene graph library. It provides you a set of sophisticated drawing tools, but nothing more. Loading 3D models, scene management, etc. are all left for you to implement, or use a library for.
It's far too late now... but still a helpful solution:
The min3D engine helps to load a 3D model in Android.
The steps are:
1) Download the min3D library.
2) Create a RendererActivity:
private Object3dContainer faceObject3D;
private Object3dContainer objModel;

@Override
public void initScene() {
    scene.lights().add(new Light());
    scene.lights().add(new Light());
    Light myLight = new Light();
    myLight.position.setZ(150);
    scene.lights().add(myLight);

    IParser parser = Parser.createParser(Parser.Type.OBJ,
            getResources(), "com.azoi.opengltutor:raw/monster_high", true);
    parser.parse();

    objModel = parser.getParsedObject();
    objModel.scale().x = objModel.scale().y = objModel.scale().z = .7f;
    scene.addChild(objModel);
}
Add your 3D model's .obj file in the res/raw directory; here it's monster_high.
That's it. Register the activity in the manifest and run. :)
If you want to move the 3D model, override updateScene(..):
public void updateScene() {
    super.updateScene();
    objModel.rotation().x++;
    objModel.rotation().y--;
    objModel.rotation().z++;
}
More on this here: http://code.google.com/p/min3d/wiki/HowToLoadObjFile. Enjoy. :)
