setBfree

Overview

Here are the key components of our plugin's functionality:

[Diagram: the LV2 setBfree plugin, the LV2 UI, OpenGL, PUGL, and X11, and how they connect.]

The heart of the UI is ui.c in the source code directory b_synth. Before diving into the user interface, however, we'll touch lightly on the LV2 setBfree plugin and X11 functionality.

The GUI is an LV2 setBfree UI plugin, an extension of the base LV2 setBfree plugin in lv2.c. These extensions are part of the LV2 plugin spec. The base LV2 plugin in lv2.c is the gateway to setBfree's functionality.

LV2 plugins have a central structure (as in a C struct). In lv2.c this structure is B3S. One member of this structure, struct b_instance *inst; from global_inst.h, is the interface to setBfree's core functionality, including struct b_reverb, struct b_whirl (the Leslie), struct b_tonegen (the tonewheels), struct b_programme (presets), void *midicfg, and void *preamp (overdrive).
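
To make those members concrete, here's a sketch of that instance structure, reconstructed from the components listed above (the field names are illustrative; check global_inst.h for the exact declaration):

typedef struct b_instance {
  struct b_reverb*    reverb;  /* reverb effect */
  struct b_whirl*     whirl;   /* the Leslie rotary speaker */
  struct b_tonegen*   synth;   /* the tonewheel generator */
  struct b_programme* progs;   /* presets */
  void*               midicfg; /* MIDI configuration */
  void*               preamp;  /* overdrive */
} b_instance;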

On the other end of the UI plugin is the user input. PUGL is the middleman between keyboard or mouse input and this UI plugin. There is more than one option for user interfaces, but we'll be focusing on Xorg. X11 (Xorg) handles graphic screen rendering and the user inputs. The code for the PUGL to and from Xorg functionality is in pugl/pugl_x11.c.

PUGL exists because our actual graphics drawing in OpenGL does nothing but draw 3D images. It has no other functionality, including such basic things as views or windows. That is, OpenGL is an API that contains the functionality to draw, but doesn't even know what it's rendering its graphics to. PUGL creates a view for Xorg from the OpenGL drawings, and passes user actions from Xorg to the UI. Now we can get into the heart of the setBfree LV2 UI.

The core structure of the UI plugin is B3ui in ui.c. Given its importance, here are the elements we'll be referencing throughout this document:

typedef struct {
  LV2_Atom_Forge forge;        // forge for building atom messages
  LV2_URID_Map* map;           // host feature that maps URIs to URIDs
  setBfreeURIs uris;           // the mapped URIDs we use
  LV2UI_Write_Function write;  // callback for sending data to the host
  LV2UI_Controller controller; // opaque host handle passed to write
  PuglView* view;              // our PUGL view
  ...
  /* OpenGL */
  GLuint * vbo;                // vertex buffer objects
  GLuint * vinx;               // vertex index buffers
  GLuint texID[24];            // textures
  GLdouble matrix[16];         // used for mouse mapping
  double rot[3], off[3], scale; // global projection
  ...
  int displaymode;             // which screen is currently shown
  ...
  /* interactive control objects */
  b3widget ctrls[TOTAL_OBJ];
  ...
  b3config cfgvar[MAXCFG];
  int upper_key;
  int lower_key;
  int pedal_key;
  unsigned int active_keys [5]; // MAX_KEYS/32;
  bool highlight_keys;
  ...
} B3ui;

Another LV2 standard is the instantiate function, which the host calls to get the ball rolling; it'll be our point of reference for getting the B3ui structure components in action. A key function called from instantiate that we'll also be alluding to often is sb3_gui_setup. Now we get into the action:

First, in instantiate we get a pointer to an instance of B3ui:

B3ui* ui = (B3ui*)calloc(1, sizeof(B3ui));

This ui is the madman in our program that keeps track of the state and data in our UI; its pointer is passed around to make changes, update the view, and pass data to the LV2 setBfree plugin.

Getting PUGL popping: the view of the app in ui is PuglView* view. In sb3_gui_setup an instance of PUGL is created and assigned to this view by calling puglCreate: ui->view = puglCreate(...).

Then the sb3_gui_setup call puglSetHandle(ui->view, ui) registers our ui instance with PUGL, such that whenever a pointer to the PuglView instance is passed into a function, the ui instance can be retrieved with B3ui* ui = (B3ui*)puglGetHandle(view).
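
A minimal sketch of that round trip inside a callback (the prototype follows the older PUGL callback style this codebase uses; the body is illustrative, not the literal ui.c code):

static void
onKeyboard (PuglView* view, bool press, uint32_t key)
{
  B3ui* ui = (B3ui*)puglGetHandle (view); /* recover our state object */
  if (!press) {
    return; /* only react to key-down events */
  }
  /* ... dispatch on ui->displaymode and key ... */
}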

If this seems confusing, you should have seen the first several attempts at explaining how this is all connected. What's important is that we have a PUGL view that is connected to our B3ui instance ui. Now we get into the real goodies of PUGL getting it on with our LV2 UI plugin.

After the handle is set, we have a set of assignments that connect PUGL actions to functions in ui.c:

puglSetDisplayFunc(ui->view, onDisplay);
puglSetReshapeFunc(ui->view, onReshape);
puglSetKeyboardFunc(ui->view, onKeyboard);
puglSetMotionFunc(ui->view, onMotion);
puglSetMouseFunc(ui->view, onMouse);
puglSetScrollFunc(ui->view, onScroll);

Now, anytime PUGL is asked to display the app or to reshape it, onDisplay or onReshape is called. For user actions we now have onKeyboard for key presses, onMotion for mouse movements, onMouse for mouse clicks, and onScroll for scrolling. For the mouse and scrolling functions, PUGL passes in the x and y position it receives from X11.

We have a PUGL view, and when a user action occurs it's passed to a function in our UI plugin code with a reference to our ui structure. It's OpenGL time! Before getting into the graphics workflow, we have to figure out what to draw.

The currently active window in the UI is held in the state variable int displaymode. When a user action or view function is called, ui->displaymode is queried to find out whether the current window is the main organ (displaymode = 0), a help screen (1), the program presets screen (2), and so on up to 9 for the different view options.
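
A hedged sketch of how a view or action function branches on it (the modes named above are real; the case bodies are illustrative):

switch (ui->displaymode) {
  case 0: /* main organ view */
    /* ... draw the organ, keyboards and controls ... */
    break;
  case 1: /* help screen */
    /* ... draw the help overlay ... */
    break;
  case 2: /* program presets screen */
    /* ... draw the preset list ... */
    break;
  default: /* the remaining views, up to mode 9 */
    break;
}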

Which OpenGL to draw and which actions/functions to execute are determined by this displaymode, the current screen. This is the juncture where we'll begin on how the data passed from PUGL to our app interacts with the LV2 setBfree plugin, but first we need to go over how OpenGL renders our graphics for these assorted views.

First, and VERY important, is to note that the Gareus OpenGL GUI Method is much different from a standard GUI toolkit like GTK, Qt, etc. The graphics themselves aren't standard widgets that are assigned some kind of callback function for each user action. It borrows from how graphics are handled in video games: a user or LV2 base plugin action occurs; the position is projected to the screen from its 3D place; its x,y location is compared to rendered objects that know their coordinates; and finally, based on the type of object and its current state, appropriate actions or functions are executed.

This workflow creates a very different kind of widget than those in more standard GUIs. These widgets live in an array in our B3ui structure, b3widget ctrls[TOTAL_OBJ], one for every drawing that keeps track of a value. Each widget includes, among other things, its type, x and y position, a current value (cur) and/or a minimum (min) and maximum (max), and sometimes a bitmap image associated with it (texID).
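
Reconstructed from those fields, a b3widget looks roughly like this (the real declaration in ui.c has more members; this sketch only names the ones above):

typedef struct {
  int   type;     // what kind of control this is (drawbar, dial, switch, ...)
  float x, y;     // position of the widget in the scene
  float min, max; // value range
  float cur;      // current value
  int   texID;    // associated texture, if any
} b3widget;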

So when a user or setBfree base LV2 action occurs, the program first figures out the current screen, kept in the displaymode variable. Then the widget at the x,y location is found and, depending on the widget type, its current value is either compared against the screen value or, for on/off (min/max) buttons, toggled to the opposite. Finally, changed values are passed to the base plugin and/or the screen is redisplayed if the change needs to be represented in the view.
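
Here's that flow as a minimal sketch, where hit_test, OBJ_SWITCH, and notify_plugin are hypothetical stand-ins for the real helpers in ui.c:

static void
onMouse (PuglView* view, int button, bool press, int x, int y)
{
  B3ui* ui = (B3ui*)puglGetHandle (view);
  if (!press || ui->displaymode != 0) return; /* main organ view only */
  for (int i = 0; i < TOTAL_OBJ; i++) {
    b3widget* c = &ui->ctrls[i];
    if (!hit_test (ui, c, x, y)) continue; /* project via ui->matrix */
    if (c->type == OBJ_SWITCH) /* an on/off control */
      c->cur = (c->cur == c->min) ? c->max : c->min; /* toggle */
    notify_plugin (ui, i);    /* pass the new value to the base plugin */
    puglPostRedisplay (view); /* ask PUGL to redraw the view */
    break;
  }
}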

While difficult to figure out and get used to, since you do EVERYTHING with this method, it becomes powerful and flexible because it has no constraints. OK, we have widgets that keep track of where they are on the drawing to cue the program. Now we can get to how they're drawn by OpenGL.

The widgets are made up of one or a combination of meshes and textures. Meshes are vector drawings in the file ui_model.h. While there are 33 total movable objects (TOTAL_OBJ) that have meshes, many meshes are used several times; for example, the 18 upper and lower drawbars use the same mesh. There are also several meshes used in drawing the organ and the different display panels that aren't involved in the widgets, as they have no values or moving parts to keep track of.

The textures are bitmap images stored in the textures directory that can be made in any drawing program, for us methinks the GIMP. Details of mesh and texture handling are in our OpenGL section, but the broad-strokes description is: they are loaded into buffers when OpenGL is initialized and then rendered from those buffers as needed. We can continue this very distilled explanation to understand how OpenGL rolls in our program.
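
For a taste of the loading half, here's a minimal sketch of one texture upload, assuming the bitmap has already been decoded to raw RGBA pixels (the loader in ui.c differs in its details):

static void
upload_texture (B3ui* ui, int t, int w, int h, const unsigned char* pixels)
{
  glGenTextures (1, &ui->texID[t]);            /* reserve a texture name */
  glBindTexture (GL_TEXTURE_2D, ui->texID[t]); /* make it the current texture */
  glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
  glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
  glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels); /* copy pixels into GL */
}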

To make sure you don't go through the medieval torture I went through figuring this out, an important heads up is that OpenGL is a HUGE spec, and a moving target. That is, its already-complex workflow has seen significant shifts across generations, and most programs borrow from both older and newer methods. The good news is that here we have one functional workflow of OpenGL, with everything out front in our source code, that can be easily modified for an insane level of customizability without dealing with the basic scene setup. If these docs do their job, the learning curve to get up and running will shrink, without hindering you from taking this base anywhere you wanna go.

We'll start our general OpenGL overview by pointing out that with 3D drawing you have to make a plethora of decisions about which faces in the multi-dimensional space can be seen, both for logistical issues with how it's viewed and for rendering optimization reasons. For example, opacity in 2D is taken to the extent that you decide whether you can even see the faces of a multi-dimensional object, and if so, with what blending characteristics, which go way beyond how much the front object obscures the one behind it.

The way OpenGL handles all these decisions is that you are constantly turning state variables on and off, usually with glEnable to turn them on and glDisable to turn them off. Everything OpenGL does between those two statements will have that characteristic. In most apps, including ours, there is a global initializing function or set of functions, and then throughout the program, if you want OpenGL to behave differently, you enable and disable features locally; after disabling, the initialized state is restored.
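
As a small sketch of the pattern (alpha blending chosen purely as an example feature):

glEnable (GL_BLEND); /* turn blending on */
glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); /* how colors mix */
/* ... everything drawn here is blended with what's behind it ... */
glDisable (GL_BLEND); /* back to the state set at initialization */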

As an example of how freaky this can get quickly, in our program not only do we have to jump through a lot of hoops to get our meshes drawn (these ain't just SVG vector drawings you make in Inkscape and import), but we also enable lighting. With lighting enabled you have factors dictating how and where light reflects on the mesh, until you know more about the characteristics of 3D lighting than a union Off-Broadway stage hand. Then you pick the color of the light, until the final mesh is a combination of its vector drawing and its associated lighting color, with all its angle and absorption traits.

Putting this together for our OpenGL rendering: we initialize our OpenGL characteristics in the functions setupOpenGL and setupLight, while initMesh and initTextures set the state flags and load buffers with meshes and textures for later use.

If getting things ready to draw isn't enough fun, positioning them on the screen is yet another adventure. OpenGL doesn't mess around here; it uses a base matrix and does everything with matrix multiplications and other manipulations. While I'm being coy here, the 3D-ness of OpenGL necessitates this off-the-charts flexibility.

A heads up here is knowing which matrix manglings are matrices being combined or manipulated directly, versus the set of handy-dandy functions that perform commonly occurring matrix-changing actions. For example, if you want to change the position of an object, you can multiply the current matrix until you have another matrix that translates the object, or you can use glTranslatef, which has that matrix multiplication built in. The point here is that if you want to change the workflow we've used, be careful to figure out exactly what each handy-dandy function does before mixing and matching other functions or rolling your own matrix multiplications.

The above description is a little misleading because it implies that you move an object and then it is drawn. It's more accurate to say that you move the current position of the OpenGL environment and then draw there, with whatever features are currently activated or deactivated. We often begin creating a scene with glLoadIdentity(), which loads a base matrix that gets you to a starting point to move around from, skewing, rotating, etc., before drawing the object.
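
A minimal sketch of that move-then-draw pattern, with arbitrary example numbers and a push/pop pair keeping the transform local to one object:

glMatrixMode (GL_MODELVIEW); /* operate on the model-view matrix */
glPushMatrix ();             /* save the current matrix */
glLoadIdentity ();           /* start from a known base */
glTranslatef (0.5f, -0.25f, 0.0f);   /* move the drawing origin */
glRotatef (45.0f, 0.0f, 0.0f, 1.0f); /* rotate about the z axis */
/* ... draw the object's mesh here ... */
glPopMatrix ();              /* restore the saved matrix */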

We'll close the OpenGL intro with a couple of examples of graphics placement. Here's a snippet of code for drawing an object at a specific point, where i is the selected object: glTranslatef(ui->ctrls[i].x, y, 0.0f). And here is an example of changing the value on an object, where a dial is rotated depending on its current value, with its minimum and maximum as factors in the calculation: glRotatef(240.0 - (360.0 * rint(ui->ctrls[i].cur - ui->ctrls[i].min) / (1.0 + ui->ctrls[i].max - ui->ctrls[i].min)), 0, 0, 1).
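
To see that dial formula in action, take a hypothetical dial with min = 0, max = 127, and cur = 64: the argument works out to 240.0 - (360.0 * 64 / 128) = 240 - 180 = 60, so the dial's mesh is rotated 60 degrees about the z axis; at cur = min the same formula yields the full 240 degrees.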

Much of the above overview may seem obvious or subtle, but this is how I wish I had approached figuring out our OpenGL before realizing I shouldn't even be in the same mindset I was in using 2D drawing apps and GUI libraries. The OpenGL section goes through the entire puppet show of all the functions and the entire workflow used, but for this overview we have enough of a sense of what OpenGL is doing to get back to our UI plugin getting it on with its base LV2 setBfree plugin.

In the LV2 sections of this doc we're really going to go into extreme detail on its workings, because we are involved directly in that project and these docs will follow and expand on their core docs ad nauseam. Here is the absolute skeleton of what's going on:

Starting from the B3ui structure's LV2_Atom_Forge forge: an atom is a generic object for data, while forging creates an atom in a buffer. The LV2 setBfree plugin transfers data to the UI extension through buffers both have access to. Buffer functionality is handled through ports. The data for our two plugins is both MIDI data used for control and the actual audio data.

When data is passed between the two plugins, the URI/URID feature of LV2 is used to signal what kind of data action is occurring. A URI in LV2 is an address and a URID is an integer mapped to a URI. For our purposes here, we'll simplify this to saying an integer is mapped to a type of functionality. The setBfree LV2 and UI plugins pass these URIs to each other when a data action occurs to tell each other what kind of data and action is going on. In our UI plugin these URI mappings live in B3ui as a structure of URIs, setBfreeURIs uris;.
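
The mapping itself uses the host's LV2_URID_Map feature. A minimal sketch, assuming the standard LV2 urid and atom headers (atom_eventTransfer is the field used by the write call below; the full setBfreeURIs holds more mappings):

#include <lv2/lv2plug.in/ns/ext/atom/atom.h>
#include <lv2/lv2plug.in/ns/ext/urid/urid.h>

static void
map_uris (LV2_URID_Map* map, setBfreeURIs* uris)
{
  /* ask the host for the integer ID of each URI we care about */
  uris->atom_eventTransfer = map->map (map->handle, LV2_ATOM__eventTransfer);
  /* ... map the setBfree-specific URIs the same way ... */
}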

The LV2UI_Write_Function write, a function pointer, works with the LV2UI_Controller controller to coordinate data passed between the UI and the host setBfree plugin. Again, these are both passed into instantiate and assigned to our ui. Now we have the needed pieces for examples of data communication between our two plugins:

Going from the UI plugin to its LV2 setBfree parent we have ui->write(ui->controller, 0, lv2_atom_total_size(msg), ui->uris.atom_eventTransfer, msg). So write is essentially our callback, told to use our controller for an atom_eventTransfer type of data transfer.
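
Where does msg come from? Here's a hedged sketch of building one, modeled on the standard LV2 atom-forge pattern (sb3_control is a hypothetical URID field; the real message-building helpers in ui.c differ):

static void
send_a_message (B3ui* ui)
{
  uint8_t obj_buf[1024];
  lv2_atom_forge_set_buffer (&ui->forge, obj_buf, sizeof (obj_buf));

  LV2_Atom_Forge_Frame frame;
  LV2_Atom* msg = (LV2_Atom*)lv2_atom_forge_object (
    &ui->forge, &frame, 0, ui->uris.sb3_control); /* start an atom object */
  /* ... forge key/value properties into the object here ... */
  lv2_atom_forge_pop (&ui->forge, &frame); /* close the object */

  ui->write (ui->controller, 0, lv2_atom_total_size (msg),
             ui->uris.atom_eventTransfer, msg); /* hand it to the host */
}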

Going the other way, from the LV2 setBfree plugin to our UI, is the port_event function in ui.c, where several different atom types are tested against ui->uris and, depending on which uris it matches, different variables are assigned values and functions are called to notify the GUI of the changes.
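
The shape of that receiving side, as a hedged sketch (the prototype is the standard LV2 UI port_event; the dispatch body is illustrative):

static void
port_event (LV2UI_Handle handle, uint32_t port_index,
            uint32_t buffer_size, uint32_t format, const void* buffer)
{
  B3ui* ui = (B3ui*)handle;
  if (format != ui->uris.atom_eventTransfer) {
    return; /* not an atom message for us */
  }
  const LV2_Atom* atom = (const LV2_Atom*)buffer;
  /* ... compare atom->type against ui->uris.*, update B3ui state,
     and queue a redisplay so the view reflects the change ... */
}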

So that's that! We've gotten from X11 user input, through PUGL, to OpenGL, to the UI interface for its LV2 parent plugin, which gets us to setBfree. There are a few leaps of faith in this over-distillation, but we've now got a point of reference for how these pieces fit together for the more detailed explanation sections, whew.