setBfree


OpenGL

Here we're going to put our OpenGL method in the context of characteristics that are turned on and off, which define how our base vector drawings and textures (bitmap images) look. In future drafts of this doc we'll go into more detail on what's going on with the matrices, and on the methodology of how these characteristics fit together, but for now we'll just describe what most of them do towards creating our final drawing.

The first emphasis to make is that these characteristics really do a number on these base drawing components. Think of the vectors and textures as the iron structure in a 20th century skyscraper, and the characteristics as everything else. In the Overview we talked about using mesh vector drawings, but in this elaboration we'll describe other vector techniques we use as well as textures. We'll weave the characteristics through our description of each drawing realm, then go into the traits used for combining them with the rest of the scene, particularly the characteristics for lighting and for the material the meshes and textures are on. We'll start with textures.

Each texture is a structure in its own .c file in the textures directory. We'll follow the journey of the drawbar texture until it's ready for our program to use. #include "textures/drawbar.c" brings this structure into ui.c:

static const struct {
  unsigned int width;
  unsigned int height;
  unsigned int bytes_per_pixel; /* 3:RGB, 4:RGBA */
  unsigned char pixel_data[60 * 650 * 4 + 1];
} drawbar_image = {
  60, 650, 4,
  "\0\0\0\377\0\0\0 ..." };

So our drawbar texture has a width of 60, a height of 650, and the 4 puts it in the RGBA color space. RGBA adds an alpha (transparency) channel to the red, green, and blue channels of RGB. The lengthy run of escaped values is the pixel_data array, covering the width-by-height area with one byte per RGBA channel for each pixel.

How these textures are unpacked is set with glPixelStorei(GL_UNPACK_ALIGNMENT, 1), and they are brought into the program by the function CIMAGE, called from initTextures. Here, glGenTextures(1, &ui->texID[ID]) gives the texture a name that can be queried from the ID. To make it current for rendering or for assigning characteristics, it must be bound: glBindTexture(GL_TEXTURE_2D, ui->texID[ID]). Here we see our first characteristic, as GL_TEXTURE_2D sets the texture as two-dimensional. The characteristics the texture is given here will be discussed later, but the function call glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, VARNAME.width, VARNAME.height, 0, (VARNAME.bytes_per_pixel == 3 ? GL_RGB : GL_RGBA), GL_UNSIGNED_BYTE, VARNAME.pixel_data) gets the pixel data and info from its structure into our program. Now the texture can be used anywhere in the program by binding it again, and it will have its data and base characteristics ready for action.

Now we'll start getting into the meshes. OpenGL is basically about drawing triangles in a three dimensional space. There is more than one method for storing and retrieving the vertices that make up these triangles for meshes.

We use a technique that utilizes indexing, where all of our vertices (coordinates) are stored in one big array, and the vertices needed for a given object are located by indexing them from that larger array. We'll explain this by showing how they are used in our program.

Our vertex, index, and offset data are in ui_model.h. The vertices array holds one vertex_struct structure for each set of eight floats that make up a vertex. Another array, indexes, holds the integer locations in the vertices array for each object's faces, which are the visible triangles. Again, all the indexes for every object's vertices are in one array. Two arrays of integers, indices_offset_table and vertex_offset_table, hold the starting positions of each object's indices and vertices. Finally, there are arrays for how many indices each object has, faces_count, and for how many vertices, vertex_count.

Putting this all together, all the vertices are in one array. The indices say which vertices to use for a specific object. The offset tables say where to begin taking the indices and vertices from, and the count arrays say how many indices and vertices to take starting from where the offset table places us.

This seems like a lot of work, but it allows one set of vertex data to be shared where common vertices make up a shape, so the overall amount of data is less than having discrete sets of data for each object. It also allows switching and mutating objects more conveniently than loading, deleting, and re-loading their data into memory. Now we can get to how our program in ui.c handles this method.

The number of objects is declared in ui_model.h, #define OBJECTS_COUNT 17. In our friend the ui structure, we declare two arrays for vertices and indices, GLuint * vbo; and GLuint * vinx; respectively. Note that GLuint is OpenGL's unsigned integer type. Beyond the reason for its existence, it makes it easier to figure out which ints are for your OpenGL functionality.

In the function initMesh two sets of buffers are created for each object. The vertices buffers are created with glGenBuffers(OBJECTS_COUNT, ui->vbo). For each of the OBJECTS_COUNT objects we have a glBindBuffer(GL_ARRAY_BUFFER, ui->vbo[i]), where from then on that i element in the vbo array will locate the buffer with its data. This data is assigned with glBufferData(GL_ARRAY_BUFFER, sizeof (struct vertex_struct) * vertex_count[i], &vertices[vertex_offset_table[i]], GL_STATIC_DRAW). Here the amount of data is determined by the vertex_count[i] integer, and the beginning of the data is the vertices element from the offset array, &vertices[vertex_offset_table[i]].

The same process is performed again for the indices array, except using its matching faces_count and indices_offset_table arrays. Also, in the glBufferData declaration of the data, GL_ELEMENT_ARRAY_BUFFER is substituted for the GL_ARRAY_BUFFER used for the vertices. This is important because it tells the program that this buffer is part of the indexing method for creating objects for drawing.

Now when we want to draw a mesh we only need to bind the elements of the two arrays, glBindBuffer(GL_ARRAY_BUFFER, ui->vbo[index]); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ui->vinx[index]). After jumping through a few hoops, the money ball is shot with glDrawElements(GL_TRIANGLES, faces_count[index] * 3, INX_TYPE, BUFFER_OFFSET(0)). The bound vertices, referenced by their indices, are drawn as faces_count[index] * 3 indices, three per triangular face. Bang, and whew!

Now we're into the characteristics that are assigned to our texture and mesh drawings, starting with a very important feature nestled in drawMesh, one of the "jumping through a few hoops" functions above: glEnableClientState(GL_NORMAL_ARRAY); glNormalPointer(GL_FLOAT, sizeof (struct vertex_struct), BUFFER_OFFSET(3 * sizeof (float))). This supplies a surface normal for each vertex, and the normals are what let the lighting shade smoothly over the jaggedness of lumping a bunch of triangles together to create a picture. This is the first of a plethora of characteristics we're about to spew out. Fasten your seat-belts, as condensing all this into a few paragraphs really lays out the ratio of total control to the quantity of factors you need to deal with in OpenGL:

We'll start with a funky technique for textures, declared with glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR) ... glGenerateMipmapEXT(GL_TEXTURE_2D). We'll see glTexParameteri a few times, so just know it sets texture parameters. Here mipmap functionality is turned on: for each texture there are multiple versions in different sizes, and as you zoom in or out, instead of the pixels getting messed up as the size changes, a larger or smaller version is substituted.

Now we'll go through several more characteristics that we'll lump into general categories. These are very loose groupings designed to give them some context. We'll go with characteristics that affect the quality and performance of the rendering, how pixels are dealt with before being rendered, and finally how different textures, meshes, and the like are dealt with when combined before rendering.

An example of a performance helper is glEnable(GL_CULL_FACE) in setupOpenGL. This can cut the number of triangles drawn roughly in half, as faces pointing away from the viewer are culled. In the same function we have a number of glHint, GL_NICEST combinations for the features GL_PERSPECTIVE_CORRECTION_HINT, GL_POLYGON_SMOOTH_HINT, GL_LINE_SMOOTH_HINT, GL_POINT_SMOOTH_HINT, GL_GENERATE_MIPMAP_HINT, and GL_FOG_HINT. GL_NICEST says to use the highest-quality option when the implementation has a choice for that feature.

These quality and performance helpers are far more general than most. Again in OpenGL we get specific on what we adjust. Our ride accelerates as we get into characteristics for how pixels are drawn. Here we go:

In initTextures we have glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR). GL_TEXTURE_MAG_FILTER deals with how a texture is sampled when it is magnified, and GL_LINEAR averages the four texels closest to the sample point, which helps reduce the sharpness between different texture elements.

In setupOpenGL we have glEnable(GL_DITHER) for color dithering, glEnable(GL_MULTISAMPLE) for antialiasing, glEnable(GL_RESCALE_NORMAL) to rescale our normals after scaling transforms, glEnable(GL_POLYGON_SMOOTH) and glEnable(GL_LINE_SMOOTH) for smoothing polygons and lines, glShadeModel(GL_SMOOTH) to interpolate colors between vertices, and glEnable(GL_MULTISAMPLE_ARB), the extension spelling of the same multisample antialiasing.

For how pixels are combined, in setupOpenGL we have glDisable(GL_BLEND) and glBlendFunc(GL_SRC_ALPHA, GL_SRC_ALPHA_SATURATE). Blending starts out disabled here, but GL_BLEND is enabled pretty much every time something is rendered. This is how incoming colors mix with those already in the buffer ready to draw. GL_SRC_ALPHA helps with transparency, while GL_SRC_ALPHA_SATURATE is more specifically geared for nearest-to-farthest ordering with alpha imaging.

We also have glEnable(GL_DEPTH_TEST); glDepthFunc(GL_LEQUAL). This declares that as new triangles come in, a fragment is only drawn if its depth value is less than or equal to the one already in the depth buffer, so nearer surfaces end up rendered on top of farther ones.

In initTextures we have glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT) for some textures and glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE) for others. First we'll comment on the level of specifics: the S in GL_TEXTURE_WRAP_S refers to the texture's x coordinate, while T and R are the y and z coordinates, so we can set different parameters for each axis. In our program these parameters appear in triplicate, for S, T, and R, with the same value for each. GL_REPEAT tiles the texture when coordinates run outside of it, and GL_CLAMP_TO_EDGE keeps a border from being drawn where the edge of the texture meets what it is drawn on.

All right we've got some sense of the ocean of characteristics that need to be rowed through, so we'll move onto elaborating on the lighting discussion in the intro before combining these different components to create our final drawings.

Colors are stored in arrays with values for the red, green, blue, and alpha channels, const GLfloat light0_ambient[] = { 0.2, 0.15, 0.1, 1.0 };, and these are passed to the two main functions for lighting characteristics.

Some characteristics are passed into glLightfv with their respective colors. GL_AMBIENT is for light that has been scattered so much it seems to come from everywhere rather than from its source, GL_DIFFUSE is for light from a specific direction that scatters evenly off the surface it hits, and GL_SPECULAR is for light that reflects in a mirror-like way, creating highlights that depend on the direction it came from.

Others aren't about color but about what happens to the light after the modelview position is set; we'll talk about them when we briefly discuss the process of projection.

Not only do you control how light is cast, you set the properties of the surface it lands on as well. The cousin to glLightfv for surfaces is glMaterialfv. The settings for these are the same, GL_DIFFUSE, GL_AMBIENT, GL_EMISSION, and so on, for example glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, color). GL_FRONT_AND_BACK makes sure the setting applies to the whole surface; using just the front or back faces would be better for performance, but some of the area might not be covered, depending on other characteristics.

We'll begin our discussion of how characteristics are combined with how an active key on an organ keyboard is displayed. Here is how a white key on the keyboard is drawn: drawMesh(view, OBJ_WHITE_KEY0, 1). If the key is pressed, the surface is given a red glow, glMaterialfv(GL_FRONT, GL_EMISSION, glow_red), where glow_red is a color array for red, glow_red[] = { 1.0, 0.0, 0.00, 1.0 }.

Now we'll focus on how different vector and texture drawing techniques are combined and rendered. We'll start with how textures are drawn over a simple vector box. The unity_box function creates one of these vector boxes, used as a framework to write text over. Here is the core of the vector drawing from it, where GL_QUADS is a convenience for creating a square/rectangle instead of building it from two triangles:

glBegin(GL_QUADS);
glVertex3f(x0, y0 * invaspect, .1);
glVertex3f(x0, y1 * invaspect, .1);
glVertex3f(x1, y1 * invaspect, .1);
glVertex3f(x1, y0 * invaspect, .1);
glEnd();

This is often used in tandem with how we render text. We use the FTGL library for drawing text with OpenGL. Here are two core calls used to render text over one of the vector squares above; first a bounding box is computed, and then the text is drawn in it: ftglGetFontBBox(font, text, -1, bb); ftglRenderFont(font, text, FTGL_RENDER_ALL).

Getting back to how these vector drawings are used with textures, we combine texture coordinates with one of these vector boxes. After enabling 2D textures, blending, and the alpha channel, we have an important blending setting, glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE). The important factor is GL_MODULATE, which multiplies the texture's pixel values with the incoming color values. Basically this allows all of the color settings to combine with those in the texture.

Here is a snippet using the same GL_QUADS we used for our vector box, combined with glTexCoord2f coordinates for the texture:

glBegin(GL_QUADS);
glTexCoord2f(0.0, 0.0); glVertex3f(x0, y0 * invaspect, 0);
...
glTexCoord2f(1.0, 0.0); glVertex3f(x1, y0 * invaspect, 0);
glEnd();

In our FTGL text box example, GL_BLEND is substituted for GL_MODULATE in the glTexEnvf call. For the help screen, GL_DECAL is used instead, which just places the texture unchanged onto the vector space, in this case because the texture is a background for the text on top. GL_DECAL is also sometimes used for a texture over a mesh, as in some movable switches and dials. Usually, though, when a texture is combined with a mesh, GL_ADD is used, which adds the two pixel values instead of multiplying them as GL_MODULATE does.

All of this is to show how fundamental these settings are for getting our different drawing techniques to the screen. The above factors are the main ones you'll customize for your own OpenGL GUIs utilizing this framework. As this document expands, we'll show how to create the meshes and textures used.

Usually when explaining OpenGL, there's an elaborate explanation of the pipeline of how all these fragments combine, and the different workflows for matrices and such, but for our docs, we'll emphasize that the core of this is handled for you in a few functions. If you're changing them fundamentally, you're way out of our league; we're just happy we got the damned thing working. However, we'll touch on some basics and where they're handled in ui.c.

As mentioned before, OpenGL is all about matrices. There are several matrices in use, and each has its own stack. Here we'll only brush over a few of them, which we'll expand on in later versions of this doc. To begin, we'll discuss the model, view, and projection matrices.

The model matrix keeps track of all the translations, rotations, and scaling applied to an object. The view matrix is how the model is looked at. The projection matrix is a perspective matrix that helps with clipping and the 3D vantage point on the objects. You can find a lot of elaborate descriptions comparing the scene to a camera if you want to understand this better.

There is a current matrix stack, and we use a combination of handy dandy functions to adjust the matrix on top of it. Often this begins with glPushMatrix(), which copies the top matrix on the stack and pushes the copy on top. Next we often call glLoadIdentity(), which makes the identity matrix current. We mangle the matrix until it does its job and then call glPopMatrix(), which discards our working copy and returns the matrix we pushed down to the top of the stack.

While in our method we usually start from a base identity matrix, taking all the manipulations back to square one, other workflows do a lot more mangling of matrices popped off the stack. For directly multiplying two matrices we can use glMultMatrixf, which multiplies the current matrix by the matrix passed in, but usually we use other handy dandy functions that handle these manglings for us. In the Overview we mentioned glTranslatef and glRotatef. The other .400 hitter is glScalef, and these three handle our matrix multiplications for objects and even our bigger-picture scene shifters. Now we can get into top-level handling of matrices.

Before getting to the home run hitters, the project_mouse function is a kind of middleman used to figure out where in the full scene an object lies in 2D space. The x and y positions, scaled by the height and width of the scene, are passed in as the pointers fx and fy and are projected using the matrix[16] from our old B3ui structure. The adjusted fx and fy can now be used to figure out which object the mouse is over. This is a key piece of our user-interface puzzle.

Now we can get to the heart of the big-picture matrices. This is mostly handled in the onReshape function, which manages the main projection, model, and view matrices. The glMatrixMode function sets the current matrix stack. With GL_PROJECTION current, the function glOrtho performs the classic parallel projection. Our rotation, translation, and scale functions are called again, but this time for global, full-scene manipulation, with the rot and off arrays and scale factors, also in B3ui. These global rotation and scale variables can create some very funky skewing of our organ with the keystrokes they are matched to.

The invertMatrix function is called, which has fun with our B3ui matrix again. Inverting is commonly used to map the x, y, z coordinates of OpenGL's universe back to the reality of the screen. Having worked through our projection issues, we set the screen area with glViewport(0, 0, width, height).

Now we change matrix mode with glMatrixMode(GL_MODELVIEW), the mode that groups our model and view matrices together. We do a last glLoadIdentity() to get our matrix to a starting point. Bang, the view is up to date and a current matrix is ready for action.

To wrap up this OpenGL discussion we'll skim over how the functions are called. As usual, all roads lead to onDisplay. The first time it is called, the following functions are called to get all our OpenGL up and popping: setupOpenGL(), initMesh(ui->view), setupLight(), and initTextures(ui->view). Then an initialized variable is set to 1. On all subsequent calls to onDisplay the OpenGL setup is of course skipped; the current screen is set, and if needed onReshape is called to retool our view.

So there we have it. This attempt at creating a map for sailing our Hobie Cat through our OpenGL methodology without too much tacking is done!