Following on from the minimal start-up last time, it's useful to get some statistics on what the computer's video driver can handle, and specify some additional parameters for start-up.
We can start GLFW in the same way as before, but add some extra checks. This will tell us if we've made a mistake such as calling a GLFW function with the wrong parameters. Before initialising GLFW, we can set up an error callback, which we can use to spit out some error information, then exit the program. We create a little function for the callback:
void error_callback_glfw( int error, const char* description ) {
  fprintf( stderr, "GLFW ERROR: code %i msg: %s.\n", error, description );
}
int main() {
  printf( "Starting GLFW %s.\n", glfwGetVersionString() );

  // Register the error callback function that we wrote earlier.
  glfwSetErrorCallback( error_callback_glfw );

  // Start GLFW.
  if ( !glfwInit() ) {
    fprintf( stderr, "ERROR: could not start GLFW3.\n" );
    return 1;
  }
  ...
Before creating a window with GLFW, we can give it a number of hints to set specific window and OpenGL settings. Our main reason for doing this is to force OpenGL to use at least the minimum version that we are writing our code to support. For example, if we're using tessellation shaders, a feature from OpenGL 4, then we should probably stop the program from running if the drivers can't support OpenGL 4.
glfwWindowHint( GLFW_CONTEXT_VERSION_MAJOR, 4 );
glfwWindowHint( GLFW_CONTEXT_VERSION_MINOR, 1 );
glfwWindowHint( GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE );
glfwWindowHint( GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE );
You can also use the hinting system to enable things like stereoscopic rendering, if supported by your hardware. See the GLFW Window Guide for more information.
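For instance, a minimal sketch - GLFW_STEREO is the hint for requesting a stereoscopic framebuffer, and window creation will simply fail if the hardware can't provide one:

```c
// Request a stereoscopic framebuffer. Must be set before glfwCreateWindow().
glfwWindowHint( GLFW_STEREO, GLFW_TRUE );
```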
Whilst we're adding window hints, it's good to know that we can put anti-aliasing hints here too. Even if our textures and colours are nicely filtered, the edges of our meshes and triangles are going to look jagged when drawn diagonally on the screen - you'll see the rectangular pixels along the diagonal edges of your triangle. OpenGL has a built-in "smoothing" algorithm, Multisample Anti-Aliasing (MSAA), that blurs over these parts. The more colour samples it takes around polygon edges, the smoother the resulting edge pixels will look, but it also gets more expensive. Set it to "16" before taking screen captures! You should see a difference in your triangle edges if you change this value - the default is 0, or off.
glfwWindowHint( GLFW_SAMPLES, 8 );
To change the resolution, or start in a full-screen window, we can set the parameters of the glfwCreateWindow function. To use full-screen mode we need to tell it which monitor to use, which is a new feature of GLFW 3.0. You can get quite precise control over what renders on the different monitors, which you can read about in GLFW's online documentation. For now, we will use your primary monitor for full-screen mode.
You can ask GLFW to give you a list of supported resolutions and video modes with glfwGetVideoModes(), which will be useful for supporting a range of machines. For full-screen mode we can just use the current desktop resolution, and change our glfwCreateWindow call:
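As a sketch of that query - this must run after glfwInit(), and the printing is only for illustration:

```c
// Ask GLFW how many video modes the primary monitor supports, and list them.
int mode_count = 0;
const GLFWvidmode* modes = glfwGetVideoModes( glfwGetPrimaryMonitor(), &mode_count );
for ( int i = 0; i < mode_count; i++ ) {
  printf( "mode %i: %ix%i @ %iHz\n", i, modes[i].width, modes[i].height, modes[i].refreshRate );
}
```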
// Set this to false to go back to windowed mode.
bool full_screen = true; // NB. include stdbool.h for bool in C.

GLFWmonitor *mon = NULL;
int win_w = 800, win_h = 600; // Our window dimensions, in pixels.

if ( full_screen ) {
  mon = glfwGetPrimaryMonitor();
  const GLFWvidmode* mode = glfwGetVideoMode( mon );

  // Hinting these properties lets us use "borderless full screen" mode.
  glfwWindowHint( GLFW_RED_BITS, mode->redBits );
  glfwWindowHint( GLFW_GREEN_BITS, mode->greenBits );
  glfwWindowHint( GLFW_BLUE_BITS, mode->blueBits );
  glfwWindowHint( GLFW_REFRESH_RATE, mode->refreshRate );

  win_w = mode->width;  // Use our 'desktop' resolution for window size
  win_h = mode->height; // to get a 'full screen borderless' window.
}

GLFWwindow *window = glfwCreateWindow( win_w, win_h, "Extended OpenGL Init", mon, NULL );
Now we can run in full-screen mode! It's a little bit tricky to close the window though (ALT+F4) - you might want to look at implementing GLFW's keyboard handling to allow an escape key to close the window. Put this inside your rendering while loop, after the glfwPollEvents() call:
if ( GLFW_PRESS == glfwGetKey( window, GLFW_KEY_ESCAPE ) ) {
  glfwSetWindowShouldClose( window, 1 );
}
Remember that our loop ends when the window is told to close. You'll find a list of all the key codes, and other input-handling commands in the documentation on the GLFW website. You can also detect if the mouse was clicked, get the mouse coordinates inside the window, and even use gamepads and joysticks.
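As a small sketch of the mouse functions mentioned - polled inside the rendering loop, in the same way as the escape-key check:

```c
// Cursor position is given relative to the top-left corner of the window.
double cursor_x = 0.0, cursor_y = 0.0;
glfwGetCursorPos( window, &cursor_x, &cursor_y );
if ( GLFW_PRESS == glfwGetMouseButton( window, GLFW_MOUSE_BUTTON_LEFT ) ) {
  printf( "left mouse button down at %.0f,%.0f\n", cursor_x, cursor_y );
}
```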
If you're running in a window, then you'll want to know when the user resizes the window, or when the system does - for example, if the window is too big and needs to be squished to fit the menu bars. You can then adjust all your variables to suit the new size. GLFW has callbacks available for this, but I find it handier to retrieve the window and rendering area (framebuffer) dimensions every time I draw a frame - on every iteration of the rendering while loop. Let's disable full-screen mode and give this a try. We will call glfwGetWindowSize() inside our rendering loop to update our window width and height variables.
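Note that GLFW reports the window size in screen coordinates, and the framebuffer size in pixels - on high-DPI displays these can differ, and it's the framebuffer size that matches what OpenGL actually draws to. A sketch of retrieving both:

```c
int fb_w = 0, fb_h = 0;
glfwGetWindowSize( window, &win_w, &win_h );    // Size in screen coordinates.
glfwGetFramebufferSize( window, &fb_w, &fb_h ); // Size in pixels - can be larger on high-DPI displays.
```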
You'll notice that, if you resize your window, the cleared background colour fills the window, but the triangle is drawing at the same size - it's not scaling up to fill a larger window. We need to update the OpenGL viewport size to the new window size. Let's update our rendering loop:
while ( !glfwWindowShouldClose( window ) ) {
  glfwPollEvents();
  if ( GLFW_PRESS == glfwGetKey( window, GLFW_KEY_ESCAPE ) ) {
    glfwSetWindowShouldClose( window, 1 );
  }

  // Check if the window resized.
  glfwGetWindowSize( window, &win_w, &win_h );
  // Update the viewport (drawing area) to fill the window dimensions.
  glViewport( 0, 0, win_w, win_h );

  // Wipe the drawing surface clear.
  glClearColor( 0.6f, 0.6f, 0.8f, 1.0f );
  glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

  glUseProgram( shader_program );
  glBindVertexArray( vao );
  glDrawArrays( GL_TRIANGLES, 0, 3 );

  // Put the stuff we've been drawing onto the visible area.
  glfwSwapBuffers( window );
}
If you want to know how fast your program is running, you can count how many times per second your while loop iterates. This is often referred to as frames per second or FPS. You may find that the inverse, milliseconds per frame, is more intuitive for measuring the performance of your program. We can work these numbers out and write the result into the title bar of the window.
double prev_s = glfwGetTime();  // Set the initial 'previous time'.
double title_countdown_s = 0.1;

while ( !glfwWindowShouldClose( window ) ) {
  double curr_s    = glfwGetTime();   // Get the current time.
  double elapsed_s = curr_s - prev_s; // Work out the time elapsed over the last frame.
  prev_s = curr_s;                    // Set the 'previous time' for the next frame to use.

  // Print the FPS, but not every frame, so it doesn't flicker too much.
  title_countdown_s -= elapsed_s;
  if ( title_countdown_s <= 0.0 && elapsed_s > 0.0 ) {
    double fps = 1.0 / elapsed_s;

    // Create a string and put the FPS as the window title.
    char tmp[256];
    sprintf( tmp, "FPS %.2lf", fps );
    glfwSetWindowTitle( window, tmp );
    title_countdown_s = 0.1;
  }
  ...
GLFW has a function, glfwGetTime(), which gives us a double-precision floating-point number containing the number of seconds since GLFW started. The basic process is this: record the time at the start of each frame, subtract the previous frame's time to get the seconds elapsed over the frame, then divide 1.0 by the elapsed seconds to get frames per second.
You may prefer to render this as text on the screen, but we don't have the functionality to do that just yet.
Graphics programmers tend to measure time in milliseconds for performance comparisons, so you might like to put the frame time in milliseconds next to, or instead of, the frames per second count.
You might notice that your frame rate is more or less exactly tied to your monitor's refresh rate e.g. 60 FPS or 120 FPS. What's happening here is that, at the end of your drawing loop, the glfwSwapBuffers() call is waiting until your monitor is ready to refresh before swapping its back buffer, the hidden 2D image that we draw our triangle to, with the visible front buffer, which we see displayed in our window. The delay is to prevent any visible tearing of the display while the swap is happening. When you're analysing the performance of your program, or want to minimise latency at the cost of visual quality, then you can disable this by calling glfwSwapInterval( int interval ), where the parameter is the number of screen updates to wait for before swapping the buffers.
glfwSwapInterval( 0 ); // The value of 0 means "swap immediately".
glfwSwapInterval( 1 ); // Lock to normal refresh rate for your monitor.
Keep in mind that the frame rate is not an objective measure of how fast your code is. You can run, but can't display, faster than the refresh rate of your monitor (e.g. 60 Hz or 60 FPS). Rendering at 200 Hz is therefore not beneficial in the way it would be for, say, game logic, which could make use of more time steps per second - more detailed movement paths and so on. Fast GPU clocks will give you huge numbers when drawing nothing. This doesn't really mean anything that you can compare against until you start drawing a more involved scene.
Measuring frame rate or time is useful when optimising more complex scenes. If you are drawing at 30 FPS in a game with fast-moving animations it will be noticeably jerky looking, but it might be okay in an animated chess game. You can use the FPS counter to improve rendering techniques to get it back to whatever your program's reasonable level is. Remember that frame rate is dependent on your particular hardware configuration - you want to look at frame rate on your minimum spec machine. That said, on any machine, it can give you a good idea of which techniques are relatively more GPU hungry than others.
It's easy to make mistakes when writing shader programs, so a good first step is to collect and print any errors when they occur. We skipped this in Hello Triangle to keep the code shorter, but let's add it now. After each call to glCompileShader() - we have two of them - we can check for compilation errors using glGetShaderiv(). If we find there are errors, we can print the compilation log with glGetShaderInfoLog(), which should tell us what the error is, and which line of the shader it was on.
// This is our existing vertex shader compilation code.
GLuint vs = glCreateShader( GL_VERTEX_SHADER );
glShaderSource( vs, 1, &vertex_shader, NULL );
glCompileShader( vs );

// After glCompileShader check for errors.
int params = -1;
glGetShaderiv( vs, GL_COMPILE_STATUS, &params );

// On error, capture the log and print it.
if ( GL_TRUE != params ) {
  int max_length = 2048, actual_length = 0;
  char slog[2048];
  glGetShaderInfoLog( vs, max_length, &actual_length, slog );
  fprintf( stderr, "ERROR: Shader index %u did not compile.\n%s\n", vs, slog );
  return 1;
}

// Repeat the above check for the fragment shader next.
After adding that check for both your shaders, try deliberately making a typo in your shader code. Run your program, and you should get a much more useful compiler printout of what your bug is, and which line of the shader it's on.
Errors can also occur in the linking stage. This won't affect us until we have more complex shaders, but we will see in the next article that it's possible to output a variable from your vertex shader to your fragment shader. If the output/input data types don't match we'll get a linker error that this check will catch. We can put this check in now too. It goes after our call to glLinkProgram().
// Our existing shader program linking code:
GLuint shader_program = glCreateProgram();
glAttachShader( shader_program, fs );
glAttachShader( shader_program, vs );
glLinkProgram( shader_program );

// Check for linking errors:
glGetProgramiv( shader_program, GL_LINK_STATUS, &params );

// Print the linking log:
if ( GL_TRUE != params ) {
  int max_length = 2048, actual_length = 0;
  char plog[2048];
  glGetProgramInfoLog( shader_program, max_length, &actual_length, plog );
  fprintf( stderr, "ERROR: Could not link shader program GL index %u.\n%s\n", shader_program, plog );
  return 1;
}
You can see that it would make sense here to write a couple of helper functions for loading and checking your shaders. My advice is to try to keep this very simple - really just 1 or 2 small functions.
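A sketch of what one such helper might look like - a hypothetical create_shader() function wrapping the compile-and-check steps above (the name and shape are just a suggestion):

```c
// Compile a shader from a source string and check it; returns 0 on failure.
GLuint create_shader( const char* src, GLenum type ) {
  GLuint shader = glCreateShader( type );
  glShaderSource( shader, 1, &src, NULL );
  glCompileShader( shader );
  int params = -1;
  glGetShaderiv( shader, GL_COMPILE_STATUS, &params );
  if ( GL_TRUE != params ) {
    char slog[2048];
    glGetShaderInfoLog( shader, 2048, NULL, slog );
    fprintf( stderr, "ERROR: shader index %u did not compile.\n%s\n", shader, slog );
    return 0;
  }
  return shader;
}
```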
When we called glfwTerminate() it cleaned up all the resources we created in our Hello Triangle demo. But, what if we are creating a larger program and want to clean up as we go to make space for other resources?
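OpenGL has glDelete* calls for this. As a sketch, assuming the variable names from the Hello Triangle demo (vs, fs, shader_program, vao, and a buffer vbo - adjust to your own names):

```c
glDeleteProgram( shader_program ); // Delete the linked shader program.
glDeleteShader( fs );              // Shader objects aren't needed once the program is linked.
glDeleteShader( vs );
glDeleteVertexArrays( 1, &vao );   // Delete the vertex array object,
glDeleteBuffers( 1, &vbo );        // and the buffer holding the triangle's points.
```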
We will look at shaders in more detail in the next article, so our extra error logs will come in really handy. We've started to get some insight into how OpenGL works, and how to debug some issues.
For a really insightful look at what the state machine is doing, the RenderDoc tool is a fantastic debugging companion to application development. It might not be clear what it's doing just yet, but when we look at the different hardware and shader pipeline stages it will become very useful.