Hey! raylib forum is closing!

After a year with not much movement in the forum, I have decided to close it.

The hosting cost is a bit high for the usage the platform gets, updating and managing the forum is also somewhat cumbersome, and the truth is that I'm already maintaining several other raylib networks that are considerably more active than this forum.

I recommend you move to the other raylib networks:

- For a forum style, use Reddit: https://www.reddit.com/r/raylib/
- For direct messaging and questions, use Discord: https://discord.gg/VkzNHUE
- To report issues, use GitHub: https://github.com/raysan5/raylib

- Also, remember you can contact me personally on Twitter: https://twitter.com/raysan5 or directly by mail at ray[at]raylib.com

If you feel generous, you can also contribute to the project on my Patreon: https://www.patreon.com/raylib

Thanks to you all for joining the project and helping to improve it with your questions. Keep it up! :)

Best Regards,


PS: Closing will be effective by October 31st.

Performance quirk with BeginTextureMode()

On a Raspberry Pi 3 I have been using BeginTextureMode() to draw to a framebuffer, running inside a game loop with 2D and 3D graphics in it. However, I noticed that as soon as I did anything to the framebuffer (even something as simple as writing a single character of text), it slowed down so much that it could never keep 60 fps.

I toggled a GPIO pin in the game loop so that I could time how long things take using an oscilloscope. However, I soon found out that OpenGL does everything at its own pace at the end of the frame, including waiting for frame sync, so it didn't tell me much. To fix that, I added a call to glFinish() right after, and that let me measure the frame render time.

I tried drawing the framebuffer both before everything else and between BeginDrawing() and EndDrawing(), and it had no effect. I also found that calling glFinish() between those causes graphical glitches. However, when I called glFinish() after I was done drawing my framebuffer and before BeginDrawing(), the performance came back to a solid 60 fps. In fact, looping my framebuffer drawing function 10 times still resulted in a higher FPS than calling it just once without the glFinish() trick. Even if the 10 passes draw lots of textures while the single pass draws a single text character, it is still faster to do those 10 passes.

Later on I found out that glFlush() works just as well, while not halting the program, so the CPU can be used to do useful work in that time.
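For reference, a minimal sketch of the loop structure described above (names like `target` and `DrawFrameWithFlush` are placeholders, and the GLES2 header used for glFlush() on the Raspberry Pi is an assumption about the build setup):

```c
// Sketch of the workaround described above, not the official raylib way:
// render into the texture, flush the GL queue, then draw to the screen.
// Assumes a RenderTexture2D named target created with LoadRenderTexture().
#include "raylib.h"
#include <GLES2/gl2.h>      // glFlush(); header name may vary per platform

void DrawFrameWithFlush(RenderTexture2D target)
{
    BeginTextureMode(target);       // draw into the framebuffer texture
        DrawText("framebuffer", 10, 10, 20, RAYWHITE);
    EndTextureMode();

    glFlush();      // hand the queued commands to the GPU right away

    BeginDrawing(); // now draw the frame to the screen
        ClearBackground(BLACK);
        DrawTextureRec(target.texture,
                       (Rectangle){ 0, 0, target.texture.width, -target.texture.height },
                       (Vector2){ 0, 0 }, WHITE);
    EndDrawing();   // raylib swaps buffers (and syncs) here
}
```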

Does anyone have an explanation of why performance would drop so badly when using a framebuffer, and why glFlush() somehow fixes it? The hack of calling glFlush() myself works fine for me, but I'm putting it out there in case someone else runs into the same issue.


  • edited May 2017
    Wow, that sounds weird... are you using the following drawing structure?

    BeginTextureMode(target); // Enable drawing to texture
    DrawModel(dwarf, position, 2.0f, WHITE); // Draw 3d model with texture
    DrawText("2D DRAWING", 70, 190, 50, RED);
    EndTextureMode(); // End drawing to texture

    // Draw texture from target
    DrawTextureRec(target.texture, (Rectangle){ 0, 0, target.texture.width, -target.texture.height }, (Vector2){ 0, 0 }, WHITE);
    DrawFPS(10, 10);

    Please, could you share some sample to check this issue? Maybe it's an RPi-specific issue...
  • I have tried relocating my BeginTextureMode() to different locations. The best luck was calling BeginTextureMode(), then glFlush(), and then BeginDrawing(), where I draw to the screen.

    This was tested on a Raspberry Pi 3 running the latest kernel with the PIXEL GUI.

    I cut down my code to only the graphics stuff and zipped it up:

    In my testing, leaving glFlush() in gives 60 fps, while commenting it out makes the fps bounce between 42 and 55. Though I have no clue how fast it actually runs when it reads 60, since it appears the RPi version always forces VSync.

    It's not like it's a huge issue or anything, I just found it weird that glFlush() could boost performance like that. But I'm probably giving the GPU a ton of work drawing this many things in a scene.
  • Hi Berni, I just reviewed your code a bit; note that any drawing should always be done between BeginDrawing() and EndDrawing().

    glFlush() just forces the execution of queued drawing commands; usually that's controlled by the GPU driver in the most appropriate way... it's better to just let the OS swap buffers internally through the graphics system in use (and do the glFlush()/glFinish() at that point). In raylib this happens in EndDrawing(), together with input polling and time waiting (if required).
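The structure recommended here keeps every draw call, including the render-texture pass, inside BeginDrawing()/EndDrawing() (a sketch only; `target` and `DrawFrameRecommended` are placeholder names, not the actual reviewed code):

```c
// Recommended structure: all drawing between BeginDrawing()/EndDrawing(),
// with the render-texture pass nested inside. EndDrawing() then handles
// the buffer swap (and any flush) itself.
#include "raylib.h"

void DrawFrameRecommended(RenderTexture2D target)
{
    BeginDrawing();
        ClearBackground(RAYWHITE);

        BeginTextureMode(target);       // redirect drawing into the texture
            DrawText("2D DRAWING", 70, 190, 50, RED);
        EndTextureMode();               // restore the screen framebuffer

        // Render-texture contents are stored bottom-up, hence the
        // negative source-rectangle height to flip vertically.
        DrawTextureRec(target.texture,
                       (Rectangle){ 0, 0, target.texture.width, -target.texture.height },
                       (Vector2){ 0, 0 }, WHITE);
    EndDrawing();                       // swap buffers, poll input, wait
}
```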

    Here is your code reviewed: https://github.com/raysan5/raylib/issues/276

    I just opened a GitHub issue for discussion on the topic.
  • edited May 2017
    Yes, I noticed afterwards that the examples show it being done inside BeginDrawing() and EndDrawing(). At first I assumed it would mess up the canvas, since it has to set everything up for drawing into the texture. So, to avoid touching the screen's canvas, I did it before BeginDrawing(). After looking at the raylib source code, I realised that EndTextureMode() does actually restore the screen canvas, so drawing can continue on it like nothing ever happened. I haven't noticed any difference in operation when putting it inside or outside, so I just assumed that raylib does not care where it is done. Maybe I was just too lazy to read the documentation properly and missed where this is pointed out.

    Oh, and about the glBlendFuncSeparate() call I did: that was to fix a quirk I found with the default alpha blending mode used in raylib. It does alpha blending as it should on the RGB channels, but it also alpha-blends the A channel the same way.

    This means that if you draw a texture with a pixel that has an alpha of 128 on top of a completely opaque canvas with alpha 255, the two mix together and result in a pixel with reduced alpha drawn to the canvas. This usually does not matter, since the screen canvas ignores the alpha channel anyway. However, when drawing to a framebuffer, this alpha value is then used when drawing the framebuffer to the screen canvas. As an example: you create a framebuffer and draw an opaque texture with alpha 255 across the entire framebuffer, then you draw another transparent texture on top of it so it blends in nicely with the image in the background. Then you take the framebuffer and draw it onto the screen canvas at a location that already has something in the background. What happens is that you can see some of the canvas background through where the original transparent image was, since the framebuffer had a non-255 alpha there.

    The plane demo can reproduce this if you draw the framebuffer with a tint of WHITE to keep it from being transparent. The artificial horizon indicator in the corner should now be fully opaque, so you shouldn't be able to see the plane behind it. However, if you look closely at the edges around the white lines and text, you can see the plane's wing through them. This is because the edges are softened using alpha transparency and are drawn on top as a separate texture (it gets worse if antialiased); check the images in the resources folder for reference. This can also be shown by setting the background to bright red, as you will then see red around all the white text.

    The way my fix works is by calling glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE); this still mixes the RGB channels by the classic alpha blending rules, but the alpha channel uses additive blending. This means a transparent texture can never make an already opaque pixel transparent again, as the alpha value can only go up, never down.

    I think this is actually a quirk with OpenGL rather than raylib.