
Extended Initialisation

So, following on from the minimal start-up last time, it's useful to get some statistics on what the computer's video driver can handle, and specify some additional parameters for start-up. Our GL programme will dump a lot of information, which can be a little infuriating to pick through when there's a problem, so we'll start by setting up a log file for capturing GL-related output.

Starting a Log File

Debugging graphical programmes is a huge pain. There are no built-in functions for printing text to the screen, and having a console open on the side printing things out can quickly get overwhelming. I strongly suggest starting a "GL log" straight away, so you can load it up to check what specifications a user's system has, and also debug any problems after the programme has finished.

The actual structure of your log functions will depend on your preferences. I prefer C's fprintf() to C++ output streams, so I'm going to make something that takes a variable number of arguments like printf() does. You might prefer to stream variables out, cout style. To write functions that take a variable number of arguments, #include <stdarg.h>.

This first function just opens the log file and prints the date and time at the top - always handy. It might make sense to print the version of your code here too; the compiler's built-in __DATE__ and __TIME__ macros give the build date and time, which work well as a version stamp. Note that after printing to the log file we close it again rather than keeping it open.
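Something along these lines works - a minimal sketch, where GL_LOG_FILE and the function name restart_gl_log() are just my own choices:

#include <stdio.h> /* for fopen, fprintf */
#include <time.h>  /* for time, ctime */

#define GL_LOG_FILE "gl.log" /* assumed log file name - pick whatever you like */

/* open the log file for writing, print the date and time at the top, then close it */
bool restart_gl_log() {
  FILE* file = fopen(GL_LOG_FILE, "w");
  if (!file) {
    fprintf(stderr, "ERROR: could not open log file %s for writing\n", GL_LOG_FILE);
    return false;
  }
  time_t now = time(NULL);
  char* date = ctime(&now); /* ctime() already appends a newline */
  fprintf(file, "%s log. local time %s", GL_LOG_FILE, date);
  fclose(file);
  return true;
}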

This function is the main log print-out. The "..." parameter is part of C's variable-arguments format, and lets us give it any number of parameters, which are mapped to the corresponding format specifiers in the message string, just like printf(). We open the file in "a[ppend]" mode, which adds to the existing end of the file - exactly what we want, since we closed the file after writing the date and time at the top. C has a rather funny-looking pair of start and end functions for processing the variable arguments. After writing to the file, we close it again. Why? Because if the programme crashes we don't lose our log - the last appended message can be very enlightening.
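As a sketch, again with my own function name gl_log() and the GL_LOG_FILE define from above (this needs the stdarg.h include mentioned earlier):

/* append a formatted message to the log file, printf-style */
bool gl_log(const char* message, ...) {
  va_list argptr;
  FILE* file = fopen(GL_LOG_FILE, "a");
  if (!file) {
    fprintf(stderr, "ERROR: could not open log file %s for appending\n", GL_LOG_FILE);
    return false;
  }
  va_start(argptr, message);
  vfprintf(file, message, argptr);
  va_end(argptr);
  fclose(file);
  return true;
}

A call then looks just like printf(); for example gl_log("starting GLFW\n%s\n", glfwGetVersionString());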

I wrote a slight variation of the log function, specifically for error messages. It's the same, but also prints to the stderr terminal. I usually run my OpenGL programmes with a terminal open; if I print my error messages to stderr they should pop up as soon as they occur, which makes it obvious when something has gone wrong.
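The sketch is almost identical. Note that the variable-argument list has to be processed once per output, so va_start()/va_end() are called again for the stderr pass:

/* same as gl_log(), but also print the message to stderr */
bool gl_log_err(const char* message, ...) {
  va_list argptr;
  FILE* file = fopen(GL_LOG_FILE, "a");
  if (!file) {
    fprintf(stderr, "ERROR: could not open log file %s for appending\n", GL_LOG_FILE);
    return false;
  }
  va_start(argptr, message);
  vfprintf(file, message, argptr);
  va_end(argptr);
  va_start(argptr, message);
  vfprintf(stderr, message, argptr);
  va_end(argptr);
  fclose(file);
  return true;
}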

Start GLFW Again

Error Checks

We can start GLFW in the same way as before, but add some extra checks. This will tell us if we've made a mistake such as calling a GLFW function with the wrong parameters. Before initialising GLFW, we can set up an error callback, which we can use to spit out some error information, then exit the programme. We create a little function for the callback:
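GLFW gives the callback an error code and a human-readable description, so we can just forward both to the error log. A sketch, using the gl_log_err() helper from above:

/* GLFW will call this function whenever an error occurs inside the library */
void glfw_error_callback(int error, const char* description) {
  gl_log_err("GLFW ERROR: code %i msg: %s\n", error, description);
}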

This will tell us if there was a problem initialising GLFW. I also put in an assert() to make sure that the log file could be opened. If you want to use this then you'll need to include assert.h.
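Putting the pieces together, the start of main() might look something like this - a sketch using the hypothetical helper names from the earlier listings:

assert(restart_gl_log()); /* note: assert() compiles away if NDEBUG is defined */
gl_log("starting GLFW\n%s\n", glfwGetVersionString());
/* register the error callback before initialising GLFW */
glfwSetErrorCallback(glfw_error_callback);
if (!glfwInit()) {
  fprintf(stderr, "ERROR: could not start GLFW3\n");
  return 1;
}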

Setting a Minimum OpenGL Version to Use

Before creating a window with GLFW, we can give it a number of "hints" to set specific window and GL settings. Our primary reason for doing this is to force OpenGL to use at least the minimum version of OpenGL that we are writing our code to support. For example, if we're using tessellation shaders, then we should probably stop the programme from running if the drivers can't support OpenGL 4. If you're using a Mac then this step is necessary - only a limited set of OpenGL implementations are available: 4.1 and 3.3 on Mavericks, and 3.2 on pre-Mavericks. These are also limited to a "forward-compatible, core profile" context - the most conservative set of features, with no backwards-compatibility support for features that have been made obsolete. To request the newest of these versions, we "hint" to the window-creation process that we want an OpenGL 3.2 forward-compatible core profile. Yes, we can put "3.2" even if we are on Mavericks:
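Something along these lines, placed before the glfwCreateWindow() call:

glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);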

This should make us Mac-compatible. In the previous version of GLFW I had to manually hint the number of bits to use for the depth and colour channels on Mac, but it looks like GLFW3 has sensible defaults instead. The "forward compatible" profile disables all of the functionality from previous versions of OpenGL that has been flagged for removal in the future. If you get the OpenGL 3.2 or 4.x Quick Reference Card, this means that all of the functions in blue font are not available. This future-proofs our code, and there's no other option on Mac. The "compatibility" profile doesn't flag any functions as deprecated, so to get a forward-compatible context we must also request the "core" profile, which does. Check your GL version printout. Mine now says:

OpenGL version supported 3.2.11903 Core Profile Forward-Compatible Context

You can also use the hinting system to enable things like stereoscopic rendering, if supported by your hardware.

Anti-Aliasing

Whilst we're adding window hints, it's good to know that we can put anti-aliasing hints here too. Even if our textures and colours are nicely filtered, the edges of our meshes and triangles are going to look harsh when drawn diagonally on the screen (we'll see pixels along the edges). OpenGL has a built-in "smoothing" ability called multi-sample anti-aliasing that blurs over these parts. The more "samples" it takes per pixel, the smoother it will look, but the more expensive it gets. Set it to "16" before taking screen shots!
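The hint to set, before creating the window, is GLFW_SAMPLES; 4 is a reasonable everyday value:

glfwWindowHint(GLFW_SAMPLES, 4); /* 4x MSAA; bump to 16 for screen shots */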

Window Resolution and Full-Screen

To change the resolution, or start in a full-screen window, we can set the parameters of the glfwCreateWindow function. To use full-screen mode we need to tell it which monitor to use, which is a new feature of GLFW 3.0. You can get quite precise control over what renders on the different monitors, which you can read about in the GLFW documentation. We can just assume that we will use the primary monitor for full-screen mode.

You can ask GLFW to give you a list of supported resolutions and video modes with glfwGetVideoModes() which will be useful for supporting a range of machines. For full-screen we can just use the current resolution, and change our glfwCreateWindow call:
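A sketch of a full-screen set-up at the desktop's current resolution (the window title here is arbitrary):

GLFWmonitor* mon = glfwGetPrimaryMonitor();
const GLFWvidmode* vmode = glfwGetVideoMode(mon);
GLFWwindow* window = glfwCreateWindow(
  vmode->width, vmode->height, "Extended GL Init", mon, NULL);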

Now we can run in full-screen mode! It's a little bit tricky to close the window though - you might want to look at implementing GLFW's keyboard handling to allow an escape key to close the window. Put this at the end of the rendering loop:
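Something like this, assuming your window pointer is called window:

/* close the window when the escape key is pressed */
if (GLFW_PRESS == glfwGetKey(window, GLFW_KEY_ESCAPE)) {
  glfwSetWindowShouldClose(window, 1);
}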

Remember that our loop ends when the window is told to close. You'll find a list of all the key codes and other input handling commands at http://www.glfw.org/docs/latest/group__input.html.

You'll notice that GLFW 3.0 also has functions for getting and setting the gamma ramp of the monitor itself, which gives you much more control over the range of colours that are output - something that was kind of a pain to do before. This is more of an advanced rendering topic, so don't worry about it if you're just starting.

If you're running in a window then you'll want to know when the user resizes the window, or if the system does (for example if the window is too big and needs to be squished to fit the menu bars). You can then adjust all your variables to suit the new size.
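We can write a callback matching GLFW's window-size callback signature. As a sketch, I keep the dimensions in a pair of globals - g_gl_width and g_gl_height are just my own names - so the rest of the programme can see them:

/* keep track of window size for viewport and projection updates */
int g_gl_width = 640;
int g_gl_height = 480;

/* GLFW calls this whenever the window is resized */
void glfw_window_size_callback(GLFWwindow* window, int width, int height) {
  g_gl_width = width;
  g_gl_height = height;
  /* update any perspective matrices used here */
}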

Then we can call: glfwSetWindowSizeCallback(window, glfw_window_size_callback);

You'll notice that if you resize your window, the OpenGL part doesn't scale to fit. We need to update the viewport size. Put this in the rendering loop, just after the glClear() function:
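Assuming the g_gl_width and g_gl_height globals from the resize-callback sketch above:

glViewport(0, 0, g_gl_width, g_gl_height);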

Printing Parameters from the GL Context

After initialising GLEW we can start to use the GL interface; glGet lets us print out some more parameters. Most of the information available is left over from previous incarnations of OpenGL and is no longer useful, but some of it is really handy for determining the capabilities of the graphics hardware - how big textures can be, how many textures each shader can use, and so on. We can log that here. I called this function right after where I log the GL version being used.
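A sketch of such a function - the parameters queried here are just a selection I find useful, and gl_log() is the logging helper from earlier:

/* log a selection of context parameters describing hardware capabilities */
void log_gl_params() {
  GLenum params[] = {
    GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS,
    GL_MAX_DRAW_BUFFERS,
    GL_MAX_FRAGMENT_UNIFORM_COMPONENTS,
    GL_MAX_TEXTURE_IMAGE_UNITS,
    GL_MAX_TEXTURE_SIZE,
    GL_MAX_VARYING_FLOATS,
    GL_MAX_VERTEX_ATTRIBS,
    GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS,
    GL_MAX_VERTEX_UNIFORM_COMPONENTS
  };
  const char* names[] = {
    "GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS",
    "GL_MAX_DRAW_BUFFERS",
    "GL_MAX_FRAGMENT_UNIFORM_COMPONENTS",
    "GL_MAX_TEXTURE_IMAGE_UNITS",
    "GL_MAX_TEXTURE_SIZE",
    "GL_MAX_VARYING_FLOATS",
    "GL_MAX_VERTEX_ATTRIBS",
    "GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS",
    "GL_MAX_VERTEX_UNIFORM_COMPONENTS"
  };
  gl_log("GL Context Params:\n");
  for (int i = 0; i < 9; i++) {
    GLint v = 0;
    glGetIntegerv(params[i], &v);
    gl_log("%s %i\n", names[i], v);
  }
  /* GL_STEREO is a boolean rather than an integer */
  GLboolean s = 0;
  glGetBooleanv(GL_STEREO, &s);
  gl_log("GL_STEREO %i\n", (int)s);
}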

Now we can have a look at the log file after running this (call the function after window creation). My log says:

This tells me that my shader programmes can use 32 different textures each - lots of multi-texturing options with this graphics card. I can access 16 different textures in the vertex shader, and 16 more in the fragment shader. My laptop can support only 8 textures, so if I want to write programmes that run nicely on both, I should make sure they don't use more than 8 textures. In theory my texture resolution can be as big as 16384x16384, but the available video memory won't let me throw many textures that big around - a single 16384x16384 texture at 4 bytes per texel is 1GB on its own.

The max uniform components figure means that I can send tonnes and tonnes of floats to each shader. Each matrix uses 16 floats, and if we're doing hardware skinning we might want to send a complex skeleton of 256 joints - 4096 floats to the vertex shader - so we can say that we have plenty of space there. Varying floats are those sent from the vertex shader to the fragment shader. Usually these are 3d vectors, so we can send around 40 vectors between shaders. Vertex attributes are variables loaded from a mesh, e.g. vertex points, texture coordinates, normals, per-vertex colours, etc.; GL counts these as 4d vectors here. I would struggle to come up with more than about 6 useful per-vertex attributes, so no problem there. Draw buffers are useful for more advanced effects where we want to split the output from our rendering into different images - we can split ours into 8 parts. And, sadly, my video card doesn't support stereo rendering.

Monitoring the GL State Machine

If you look at the list for glGet you will see plenty of state queries; the "currently bound buffer", the "currently active texture slot", etc. OpenGL works on the principle of a state machine. This means that once we set a state (like transparency, for example), it is then globally enabled for all future drawing operations, until we change it again. In GL parlance, setting a state is referred to as "binding" (for buffers of data), "enabling" (for rendering modes), or "using" (for shader programmes).

The state machine can be very confusing. Lots of errors in OpenGL programmes come from setting a state by accident, forgetting to unset a state, or mixing up the numbering of different OpenGL indices. Some of the most useful state machine variables can be fetched during run-time. You probably don't need to write a function to log all of these states, but keep in mind that, if it all gets a bit confusing, you can check individual states.
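For example, to check which shader programme and array buffer are currently in use, and whether depth testing is on:

GLint current_program = 0;
glGetIntegerv(GL_CURRENT_PROGRAM, &current_program);
GLint bound_array_buffer = 0;
glGetIntegerv(GL_ARRAY_BUFFER_BINDING, &bound_array_buffer);
GLboolean depth_test_on = glIsEnabled(GL_DEPTH_TEST);
gl_log("program %i, array buffer %i, depth test %i\n",
  current_program, bound_array_buffer, (int)depth_test_on);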

Frame Rate Counter

We can add our familiar drawing (clearing) loop, but at the top call a _update_fps_counter() function which will update the title bar of our window with the number of times this loop draws per second.

So, GLFW has a function, glfwGetTime, which gives us a double-precision floating point number containing the number of seconds since GLFW started. A lot of OpenGL tutorials use the (notoriously inaccurate) GLUT microseconds timer, but this one seems to do the trick. I didn't want global variables floating around just to keep track of the previous time, so I made it a static double inside the function. Working out the frame rate every single frame is more precision than we need, and incurs a small cost as well, so I average the rate accrued over 0.25 seconds (or thereabouts - I assume it never gets slower than 4 frames per second). This gives me a readable result that is still responsive to changes in the scene. I use the glfwSetWindowTitle function to put the rate in the title bar. You may prefer to render this as text on the screen... but we don't have the functionality to do that just yet.
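A sketch of the whole counter, then - the 0.25 second interval and the title text are just my preferences:

/* update the window title with a frame rate, averaged over ~0.25 seconds */
void _update_fps_counter(GLFWwindow* window) {
  static double previous_seconds = glfwGetTime();
  static int frame_count;
  double current_seconds = glfwGetTime();
  double elapsed_seconds = current_seconds - previous_seconds;
  if (elapsed_seconds > 0.25) {
    previous_seconds = current_seconds;
    double fps = (double)frame_count / elapsed_seconds;
    char tmp[128];
    sprintf(tmp, "opengl @ fps: %.2f", fps);
    glfwSetWindowTitle(window, tmp);
    frame_count = 0;
  }
  frame_count++;
}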

Keep in mind that the frame rate is not a linear, reliable measure of how fast your code is. With vsync on you can't draw faster than the refresh rate of the monitor (typically around 60-70Hz, i.e. 60-70fps). Rendering at 100Hz is therefore not beneficial in the same way as it would be for game logic, which can compute more time steps per second (more detailed movement paths and the like). Fast GPU clocks will give you huge numbers when drawing nothing, which doesn't really mean anything you can compare until you start drawing a more involved scene. Measuring frame rate is useful when optimising more complex scenes. Drawing at 30fps in a game with fast-moving animations will be noticeably bad, but it might be okay in a slightly slower-paced game; you can use the fps counter to improve rendering techniques until you get back to whatever your programme's reasonable level is. Remember that frame rate is dependent on your particular hardware configuration - you want to look at frame rate on your "minimum spec" machine. That said, on any machine it can give you a good idea of which techniques are relatively more GPU-hungry than others.

Remember the maxim: never optimise early. That's why I put my optimisation articles at the very end of the list. You can waste all of your programming time trying to get your fps counter to go as high as possible; after this discussion we should appreciate that this is dumb, because we won't even notice the difference. Only put work into improving your frame rate when it's going way too slowly; otherwise don't bother.

Extending Further

When we look at shaders next, we will log a lot more information. Lots of bugs will come from mixing up uniforms and attributes sent to shaders, so we will dump all of those identifiers to a log as well.

Problem?

Remember to link GL, GLFW, and GLEW. My link line looks like this:

g++ -o demo main.cpp -lglfw -lGLEW -lGL

Remember to initialise GLFW first, then do any parameter setting, then create the window, then start GLEW, then start the drawing loop. Parameter fetching code can go just about anywhere.

Include the GLEW header file before GLFW.
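In other words, the includes at the top of main.cpp should look something like this:

#include <GL/glew.h>    /* include GLEW before any other GL headers */
#include <GLFW/glfw3.h>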