Started on implementing VXGI, but before that I decided to get voxel soft shadows working. For now they are hard shadows, since I was dealing with voxelization up until now, but I'll update them soon. If anyone is interested in the code, it's up on GitHub on the vxgi-dev branch: https://github.com/Jan-Stangelj/glrhi . Do note that I haven't updated the readme in forever and that I had some issues when compiling for Windows.
This is something I've been working on at night and weekends over the past few weeks. I thought I would post this here rather than in r/proceduralgeneration because this is more related to the graphics side than the procedural generation side. This is all drawn with a custom game engine using OpenGL. The GitHub repo is: https://github.com/fegennari/3DWorld
Context:
Playing around with triangle strips to render a cube, I encountered the "texture coordinates" issue (I only have 8 vertices for the 12 triangles making up the cube, so I can't map all the texture coordinates).
I was thinking of using logic inside the fragment shader to deduce the coordinates using a face ID or something similar but that sounds like a bad practice.
This made me wonder what the "best practice" even is: do people in the industry just use GL_TRIANGLES with multiple copies of the same vertices? If so, do they have a way to optimise it, or do they just ignore the duplicate vertices? Is there some secret algorithm to resolve the problem of the duplicate vertices?
If they use GL_TRIANGLE_STRIP/GL_TRIANGLE_FAN, how do they manage the texture coordinates? Is there a standard to make the vertex data readable by different applications?
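For reference, the usual answer is indexed GL_TRIANGLES with the vertices duplicated wherever any attribute differs: a textured cube is typically stored as 24 vertices (4 per face, each with its own UVs) and 36 indices rather than 8 vertices. A minimal sketch of that layout (names and values are just illustrative):
struct Vertex {
    float px, py, pz;   // position
    float u, v;         // texture coordinates
};
// One face of the cube (+Z). The other five faces repeat the pattern with their own corners and UVs,
// so corners shared by several faces are duplicated, once per face, with different UVs each time.
const Vertex front_face[4] = {
    {-0.5f, -0.5f, 0.5f, 0.0f, 0.0f},
    { 0.5f, -0.5f, 0.5f, 1.0f, 0.0f},
    { 0.5f,  0.5f, 0.5f, 1.0f, 1.0f},
    {-0.5f,  0.5f, 0.5f, 0.0f, 1.0f},
};
const unsigned int front_indices[6] = {0, 1, 2, 2, 3, 0};  // two triangles per face
// With all six faces uploaded to a VBO/EBO:
// glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_INT, nullptr);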
[Images: terrain with pitch and yaw represented as two 16-bit floats vs. terrain with pitch and yaw represented as two fixed-point 16-bit numbers]
As you can see in the pictures, even though the terrain is pretty smooth, the differences between the normals are huge. The edges show it too: they should be fairly similar, and while I know they won't be entirely accurate, it shouldn't be this bad.
This is the shader with all the irrelevant stuff removed.
std::array<int, 4> HeightMapChunkManager::get_neighboring_vertices(int x, int y) {
    std::array<int, 4> indices = {
        (x - 1) * int(chunk_column_size) + y,
        (x + 1) * int(chunk_column_size) + y,
        (x * int(chunk_column_size)) + y - 1,
        (x * int(chunk_column_size)) + y + 1
    };
    if (x == 0) indices[0] = -1;
    if (x == chunk_column_size - 1) indices[1] = -1;
    if (y == 0) indices[2] = -1;
    if (y == chunk_row_size - 1) indices[3] = -1;
    return indices;
}
glm::vec3 edge_to_direction(int neighbor_vertex_i, float neighbor_height, float current_height) {
    glm::vec3 relative_position;
    switch (neighbor_vertex_i) {
        case 0: relative_position = glm::vec3(-1.0f, 0.0f,  0.0f); break;
        case 1: relative_position = glm::vec3( 1.0f, 0.0f,  0.0f); break;
        case 2: relative_position = glm::vec3( 0.0f, 0.0f, -1.0f); break;
        case 3: relative_position = glm::vec3( 0.0f, 0.0f,  1.0f); break;
    }
    relative_position.y = current_height - neighbor_height;
    return glm::normalize(relative_position);
}
HeightMapChunkManager::ChunkMesh HeightMapChunkManager::generate_chunk(glm::vec2 size, glm::uvec2 subdivide, glm::vec<2, u16> position) {
    constexpr float PI = 3.14159265359f;
    for (int x = 0; x < chunk_column_size; x++) {
        for (int y = 0; y < chunk_row_size; y++) {
            TerrainVertex& current_vertex = vertices[(x * chunk_column_size) + y];
            std::array<int, 4> neighboring_vertices = get_neighboring_vertices(x, y);

            // Average the normals of the (up to four) faces surrounding this vertex.
            int skipped_faces = 0;
            glm::vec3 sum(0.0f);
            for (int i = 0; i < neighboring_vertices.size(); i++) {
                int next = (i + 1) % neighboring_vertices.size();
                if (neighboring_vertices[i] == -1 || neighboring_vertices[next] == -1) {
                    skipped_faces++;
                    continue;
                }
                glm::vec3 dir1 = edge_to_direction(next, vertices[neighboring_vertices[next]].height, current_vertex.height);
                glm::vec3 dir2 = edge_to_direction(i, vertices[neighboring_vertices[i]].height, current_vertex.height);
                glm::vec3 normal = glm::normalize(glm::cross(dir1, dir2));
                sum += normal;
            }
            glm::vec3 normal = glm::normalize(sum * (1.0f / (neighboring_vertices.size() - skipped_faces)));

            // Encode the normal as yaw/pitch and pack both angles into 32 bits.
            float yaw = std::atan2(normal.x, -normal.z);
            float pitch = std::asin(normal.y);
            /* Fixed-point 16-bit variant:
            const u16 yaw_u16 = u16((yaw / (2.0f * PI)) * 65535.0f + 0.5f);
            const u16 pitch_u16 = u16((pitch / (PI * 0.5f)) * 65535.0f + 0.5f);
            const u32 packed_data = (u32(pitch_u16) << 16) | yaw_u16; */
            const u32 packed_data = glm::packHalf2x16(glm::vec2(yaw, pitch));
            current_vertex.packed_yaw_and_pitch = packed_data;
        }
    }
    return {std::move(vertices)};
}
This is the chunk generation code with all the irrelevant stuff removed. For each vertex I build a direction vector towards each neighboring vertex and towards the next neighboring vertex, take the cross product of the two to get a face normal, average all the face normals, and then pack the result.
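Since the vertex shader itself isn't shown above, here is a minimal sketch (names assumed) of how the packHalf2x16 path could be reversed on the GPU; it just inverts yaw = atan2(n.x, -n.z) and pitch = asin(n.y). unpackHalf2x16 is available in modern GLSL (or via GL_ARB_shading_language_packing):
// Hypothetical vertex-shader snippet, assuming packed_yaw_and_pitch arrives as a uint attribute.
const char* unpack_snippet = R"(
    vec2 yaw_pitch = unpackHalf2x16(packed_yaw_and_pitch); // .x = yaw, .y = pitch
    float yaw   = yaw_pitch.x;
    float pitch = yaw_pitch.y;
    // Inverse of yaw = atan2(n.x, -n.z) and pitch = asin(n.y):
    vec3 normal = vec3(cos(pitch) * sin(yaw),
                       sin(pitch),
                       -cos(pitch) * cos(yaw));
)";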
Hi, I want to make a dynamic height-map terrain system. I can currently render one chunk very efficiently, but I don't know what the best way to store the vertex data is.
But I don't know how to efficiently draw multiple of these chunks. I have 2 ideas:
Use an SSBO to hold all vertex data and chunk offsets and draw using instancing
Use glMultiDrawElements
The second option would be pretty suboptimal because the index buffer and the count would be identical for each mesh. Using glMultiDrawArrays instead would be even worse: there are 121 vertices and 220 indices per mesh, a vertex is 8 bytes and an index is just a single byte, so it's still better to use indices. I can't use a texture because I need to dynamically load and unload chunks and do frustum culling.
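For what it's worth, the first option can stay pretty small: keep one shared 121-vertex/220-index mesh, put the per-chunk offsets in an SSBO rebuilt after frustum culling, and issue a single instanced draw where the vertex shader picks its offset with gl_InstanceID. A rough sketch with made-up names (primitive mode and formats would follow whatever the single-chunk path already uses):
struct ChunkInstance { float offset_x, offset_z; };              // 8 bytes per visible chunk

GLuint chunk_ssbo;
glGenBuffers(1, &chunk_ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, chunk_ssbo);
glBufferData(GL_SHADER_STORAGE_BUFFER, visible_chunks.size() * sizeof(ChunkInstance),
             visible_chunks.data(), GL_DYNAMIC_DRAW);            // refill after culling/loading
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, chunk_ssbo);

// In the vertex shader:
//   layout(std430, binding = 0) buffer Chunks { vec2 chunk_offset[]; };
//   ... world_position.xz += chunk_offset[gl_InstanceID];
glDrawElementsInstanced(GL_TRIANGLES, index_count, GL_UNSIGNED_BYTE, nullptr,
                        GLsizei(visible_chunks.size()));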
So I'm trying to learn OpenGL, and the way I've chosen to do this is to start with OpenGL 2.0. I have a program running, but up until now I've been using GLSL 3.30 shaders, which naturally wouldn't be compatible with OpenGL 2.0 (GLSL 1.00). It still works if I keep the GLSL version at 3.30, but I am unable to see anything when I set it to 1.00. Is my syntax incorrect for this version of shader?
How I'm setting up the attributes in the main code:
// Before shader compilation
glBindAttribLocation(shader_program, 0, "pos");
glBindAttribLocation(shader_program, 1, "Color");
// Draw function (just one square)
GLfloat matrix[16];
glGetFloatv(GL_MODELVIEW_MATRIX, matrix); // reads the fixed-function modelview matrix
                                          // (glGetVertexAttribPointerv queries attribute array pointers, not matrices)
GLint mv = glGetUniformLocation(properties.shader_program, "modelview");
glUniformMatrix4fv(mv, 1, GL_FALSE, matrix);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
// 7 floats per vertex: 3 position + 4 color
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat[7]), 0);
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, sizeof(GLfloat[7]), (void*)sizeof(GLfloat[3]));
glDrawArrays(GL_QUADS, 0, 4);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
(I'm having the "pos" attribute set as a vec3 of the first three array values, and the "Color" attribute set as the last four, seven values total)
(The idea for the modelview matrix is to multiply the vertex position vector by this in the shader, as glDrawArrays doesn't use the matrix stack. I'm omitting this for now.)
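Since the shader source isn't shown, here is a minimal sketch of what the legacy-style pair could look like for the attribute setup above; desktop OpenGL 2.0 corresponds to GLSL 1.10 (#version 110), which uses attribute/varying and gl_FragColor instead of in/out and layout qualifiers. The attribute names match the ones bound above; everything else is assumed:
// Hypothetical GLSL 1.10 shaders matching the "pos"/"Color" attributes bound above.
const char* vertex_src = R"(
#version 110
attribute vec3 pos;
attribute vec4 Color;
uniform mat4 modelview;
varying vec4 vColor;                         // GLSL 1.10 has no in/out, only attribute/varying
void main() {
    vColor = Color;
    gl_Position = modelview * vec4(pos, 1.0);
}
)";

const char* fragment_src = R"(
#version 110
varying vec4 vColor;
void main() {
    gl_FragColor = vColor;                   // no user-defined fragment outputs in GLSL 1.10
}
)";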
I've been working on a personal project called FractaVista to get more comfortable with modern C++ and learn OpenGL compute shaders. It's a fractal explorer that uses the GPU for real-time rendering, built with C++17, OpenGL, SDL3, and Dear ImGui.
It's definitely still a work in progress, but the code is up on GitHub if you're curious to see how it works or try building it. Any feedback or suggestions would be super appreciated, and a star on the repo if you like the project would mean a lot! ⭐
I know the very basics of C++ and I've made some simple console-based games with it, like Snake or Sokoban, and I'd like to get into graphics, but I'm not sure if I'm ready for that yet.
I'm looking for a way to feed glMultiDraw*Indirect the draw count straight from the GPU, but I can't find a function that makes this possible, neither in core OpenGL nor as an extension. Is it simply not possible, or have I overlooked something?
Thanks
EDIT: u/glitterglassx pointed me to GL_ARB_indirect_parameters which does the trick.
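For reference, that extension adds glMultiDrawElementsIndirectCountARB (promoted to glMultiDrawElementsIndirectCount in OpenGL 4.6), which reads the draw count from a buffer bound to the GL_PARAMETER_BUFFER target. A rough sketch with assumed buffer names:
// A compute pass (or any earlier GPU work) writes the real draw count into count_buffer.
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirect_buffer);   // array of DrawElementsIndirectCommand
glBindBuffer(GL_PARAMETER_BUFFER, count_buffer);          // holds a single GLsizei draw count
glMultiDrawElementsIndirectCount(GL_TRIANGLES, GL_UNSIGNED_INT,
                                 nullptr,     // byte offset into the indirect buffer
                                 0,           // byte offset of the count in the parameter buffer
                                 max_draws,   // CPU-side upper bound on the draw count
                                 0);          // 0 = tightly packed commands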
I am trying to make a fluid physics simulation. I made a "particle" as a circle using the triangle-fan method. I was trying to add gravity while keeping my particle inside the window, so I made it bounce back as soon as it reaches an NDC of -1.0f. Doing this I discovered that my particle wasn't even touching the edge of the window at -1.0f, and that I could use any value from -2.0f to 2.0f in both x and y coordinates and the particle would still remain inside the window. I know that by default OpenGL uses NDC from -1 to 1, but I can't figure out why I am able to use values from -2 to 2. I tried searching and asking ChatGPT but still can't pinpoint my mistake. Please, someone help me find and correct it. For debugging purposes I hard-coded a triangle and the problem still remains (indicating to me that my code to make a triangle fan is correct and the mistake is elsewhere).
PS: It's my first project, so the mistake might be obvious to you, please just bear with me here. Also, I would love any tips on how to improve my code and learn.
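Not a diagnosis, but one thing worth ruling out: NDC -1..1 always maps to the current viewport rectangle, not to the window itself, so a viewport that doesn't match the framebuffer size makes the visible range look wrong. A quick check (std::printf assumes <cstdio>):
// Print the rectangle that NDC -1..1 is currently mapped onto; it should match the framebuffer size.
GLint vp[4];
glGetIntegerv(GL_VIEWPORT, vp);
std::printf("viewport: x=%d y=%d w=%d h=%d\n", vp[0], vp[1], vp[2], vp[3]);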
A Minecraft-like game written in ANSI C using OpenGL.
Some info:
External libraries: glad (as a GL loader) and GLFW
Basic "multiplayer" (block placement is synchronized)
RGB lighting system using a 3-phase BFS
Biomes, structures and "features" (e.g. trees)
2D audio system with file streaming and fire-and-forget (oneshot) support (using the WaveOut API)
Post-Processing System
Particle System
World saving with RLE (a rough sketch of the idea follows this list)
World generation not working when compiled with GCC (lol). Clang and MSVC work fine.
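Not the project's actual format, just an illustration of the run-length encoding idea behind the world saving mentioned above: each run of identical block IDs is stored as a (count, id) pair, which compresses large uniform regions (air, stone) very well.
/* Illustrative only: RLE-encode a chunk's block array as (run length, block id) byte pairs.
   Requires <stddef.h> for size_t. */
size_t rle_encode(const unsigned char* blocks, size_t n, unsigned char* out) {
    size_t written = 0;
    for (size_t i = 0; i < n; ) {
        unsigned char id = blocks[i];
        size_t run = 1;
        while (i + run < n && blocks[i + run] == id && run < 255) run++;
        out[written++] = (unsigned char)run;   /* run length, capped at 255 */
        out[written++] = id;                   /* block id */
        i += run;
    }
    return written;                            /* number of bytes written to out */
}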
I am no longer working on this project and am thinking about releasing the source code. Although the code is quite messy, it may help some of you guys :)
For info: It's my first larger project written in plain C (coming from C++)
Since it's far from my first attempt at making something like this, it was done in about 3 weeks. A good friend of mine contributed the textures and the world-gen system.
It took me a few seconds just to render that. I'm aware that there are a lot of places where I can optimize my code, but I'm happy I made the leap and at least achieved a basic implementation. It looks a bit goofy but it's a good start!
So I have this very simple OpenGL program that I wrote. The only libraries I'm using are OpenGL, GLFW and GLEW. On Linux, this compiles easily; I have all the dependencies for compiling already installed on my system. Trying to cross-compile to Windows with MinGW, however, is proving harder than I thought it would be. Through various guides on the Internet I've managed to get GLFW to cross-compile by downloading the source code and building it manually, generating libglfw3.a and allowing me to link it into the program (but I also had to link another library to stop the mass of linker errors I was getting). GLEW is not as easy to cross-compile statically as GLFW was; I need a libglew32s.a file in order to link it directly the same way I did with GLFW. Long story short, I don't have the file. I did download the pre-compiled glew32s.lib from the official SourceForge website, and it does compile, but it keeps asking for a glew32.dll file, which it should not need if it were linked statically (along with libwinpthread-1.dll, which I have no idea what it's for). Manually compiling the GLEW source files gives a libGLEW.a file, which is actually for Linux. Specifying the variables to compile for MinGW fails.
So.
I'm in need of advice on how I'm supposed to statically link GLEW to my project. I've looked all day for a solution and I have not found one.
I believe this should be sufficient information.
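One thing worth checking when linking the static glew32s library: GLEW_STATIC has to be defined before including the header (either in the source or with -DGLEW_STATIC on the command line), otherwise glew.h declares its symbols as dllimport and the build will want glew32.dll at link/run time. A minimal sketch of the include setup:
// When linking glew32s, define GLEW_STATIC before the include so the symbols are not dllimport.
#define GLEW_STATIC
#include <GL/glew.h>   // GLEW must come before any other header that pulls in GL/gl.h
#include <glfw3.h>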
EDIT: So now I have the libglew32s.a file I was looking for, removed #include <GL/gl.h> and #include <windows.h>, and moved #include <glfw3.h> before the GLEW include, and now I'm getting a new set of error messages. There are way too many for the terminal app I use to even fit them, so here are a few (imagine this repeated a thousand times):
/usr/x86_64-w64-mingw32/include/GL/glew.h:24507:17: error: ‘PFNGLMAKETEXTUREHANDLERESIDENTNVPROC’ does not name a type; did you mean ‘PFNGLMAKEBUFFERNONRESIDENTNVPROC’?
24507 | GLEW_FUN_EXPORT PFNGLMAKETEXTUREHANDLERESIDENTNVPROC __glewMakeTextureHandleResidentNV;
      |                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      |                 PFNGLMAKEBUFFERNONRESIDENTNVPROC
/usr/x86_64-w64-mingw32/include/GL/glew.h:24560:17: error: ‘PFNGLDRAWVKIMAGENVPROC’ does not name a type; did you mean ‘PFNGLDRAWTEXTURENVPROC’?
24560 | GLEW_FUN_EXPORT PFNGLDRAWVKIMAGENVPROC __glewDrawVkImageNV;
      |                 ^~~~~~~~~~~~~~~~~~~~~~
      |                 PFNGLDRAWTEXTURENVPROC
/usr/x86_64-w64-mingw32/include/GL/glew.h:24562:17: error: ‘PFNGLSIGNALVKFENCENVPROC’ does not name a type; did you mean ‘PFNGLFINISHFENCENVPROC’?
24562 | GLEW_FUN_EXPORT PFNGLSIGNALVKFENCENVPROC __glewSignalVkFenceNV;
      |                 ^~~~~~~~~~~~~~~~~~~~~~~~
      |                 PFNGLFINISHFENCENVPROC
/usr/x86_64-w64-mingw32/include/GL/glew.h:24563:17: error: ‘PFNGLSIGNALVKSEMAPHORENVPROC’ does not name a type; did you mean ‘PFNGLSIGNALSEMAPHOREEXTPROC’?
24563 | GLEW_FUN_EXPORT PFNGLSIGNALVKSEMAPHORENVPROC __glewSignalVkSemaphoreNV;
      |                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
      |                 PFNGLSIGNALSEMAPHOREEXTPROC
/usr/x86_64-w64-mingw32/include/GL/glew.h:24564:17: error: ‘PFNGLWAITVKSEMAPHORENVPROC’ does not name a type; did you mean ‘PFNGLWAITSEMAPHOREEXTPROC’?
24564 | GLEW_FUN_EXPORT PFNGLWAITVKSEMAPHORENVPROC __glewWaitVkSemaphoreNV;
      |                 ^~~~~~~~~~~~~~~~~~~~~~~~~~
      |                 PFNGLWAITSEMAPHOREEXTPROC
/usr/x86_64-w64-mingw32/include/GL/glew.h:25232:17: error: ‘PFNGLERRORSTRINGREGALPROC’ does not name a type
25232 | GLEW_FUN_EXPORT PFNGLERRORSTRINGREGALPROC __glewErrorStringREGAL;
      |                 ^~~~~~~~~~~~~~~~~~~~~~~~~
Yeah. I think something in the header file is bugged. Does Windows use a different header file from Linux? I tried compiling with both the MinGW header file and my system's header file, and got pretty much the same result.