Vertex Attribute Abstraction Design

6 comments, last by Charles117 1 year, 5 months ago

Hi everyone,

I'm building a 3D rendering engine for learning purposes (ideally it will grow into a portfolio project for landing a rendering engineer job), but I can't figure out how to properly abstract vertex attributes. From what I've seen, vertex attributes have the following characteristics:

  • There are many types of vertex attributes (e.g. positions, normals, texture_coords, etc.).
  • A mesh may not have all of the attributes (e.g. it may not have a color attribute for instance).
  • There are many ways of representing them (e.g. array of float, array of struct, etc).
  • They can have static, stream, or dynamic usage.
  • They map to an attribute in a shader.
  • Being able to retrieve position data is desirable for building AABBs.

So far I've come up with the following designs:

Block-Based Vertex Attribute Storage

In this design, a single class manages all the possible attributes a mesh can have. It also produces a list of AttributeDescriptors that can be used to load the attribute data into OpenGL easily (i.e. allocate a buffer big enough to hold all the data, then load each block with buffer sub-data functions).

struct AttributeDescriptor
{
	// E.g. 3 if it's a vec3.
	uint8_t num_cmp;
	// How many attributes of this type there are.
	size_t size;
	// E.g. AttributeType::FLOAT
	AttributeType type;
	// E.g. AttributeName::POSITION
	AttributeName name;
	// E.g. AttributeUsage::STATIC
	AttributeUsage usage;
	void* data;
};

class VertexBuffer
{
public:
	...

	void addPosition(const glm::vec3& position);
	void addTextureCoord(const glm::vec2& coord);	
	...

	size_t getBufferSize() const noexcept;
	std::vector<AttributeDescriptor> getAttributeDescriptors() const noexcept;

private:
	std::vector<glm::vec3> m_positions;
	std::vector<glm::vec2> m_texture_coords;
	...
};

Pros:

  • If a mesh doesn't have some attribute, its underlying vector in the class is simply empty.
  • Attribute descriptors are easy to build and mostly static.
  • Fetching position data is easy.
  • Building a VertexBufferObject in OpenGL is straightforward.
  • Attributes can be updated quickly when they are dynamic (e.g. setPosition(…)).
  • Can be stored directly in a Mesh instance (i.e. no need to use a pointer).

Cons:

  • The set of attributes is fixed: if the class only supports positions, normals, and texture coords, then adding a new attribute (e.g. bone weights) is painful and needs refactoring (arguably a violation of the Open-Closed Principle).
  • Forces the AttributeDescriptor to contain a pointer to the data.
  • Everything is coupled to a single implementation.

Interface-Based Implementation

The idea of this design is to abstract common buffer operations into an interface. Each implementer manages one possible buffer representation (e.g. an array of floats), and an AttributeDescriptor allows OpenGL to properly set up the VBO.

struct AttributeDescriptor
{
   uint8_t num_cmp;
   // Like the OpenGL offset
   size_t offset;
   AttributeType type;
   std::string name;
};

struct VertexBuffer
{
   virtual ~VertexBuffer() noexcept = default;

   virtual size_t getBufferSize() const noexcept = 0;
   virtual size_t getStrideSize() const noexcept = 0;
   virtual void* getDataPtr() noexcept = 0;
   virtual std::vector<glm::vec3> getPositions() const noexcept = 0;
   virtual std::vector<AttributeDescriptor> getAttributeDescriptors() const noexcept = 0;
};

/*
Concrete implementation of a vertex buffer type based on a std::vector<float>.

In this case a user can supply a std::vector<float> with the attribute information:

    std::vector<float> attributes = {
      // Positions         // Texture Coords
    1.0f, 1.0f, 1.0f,      1.0f, 1.0f,
    ...
    };
*/
class SimpleVertexBuffer: public VertexBuffer
{
   ...
};

Pros:

  • Doesn't constrain what attributes can be supplied.
  • No space wasted when a mesh doesn't have a set of attributes.
  • More flexible implementation (E.g. use structs, single float vector, etc.).

Cons:

  • Depending on the implementer of the interface, getting position attributes can be easy or painful.
  • The buffer has to be stored in a Mesh through a pointer, hurting cache locality in a cache-friendly architecture (e.g. ECS).
  • Users now have to carefully define attribute descriptors and supply them (which is painful to validate in the implementers).
  • Forces a single pointer for loading the data (everything is packed into a single vector).

Can anybody give me feedback on this? Design suggestions are welcome too.

Thanks in advance.


I do things very similar to your first example, except that my VertexBuffer is called VertexBufferSet, and instead of a fixed set of attributes, it is a set of arbitrary buffers that each have an AttributeName. This avoids the main “con” in your post. The VertexBufferSet is managed and owned by the Mesh class. A mesh can have one or more VertexBufferSets, which are referenced by index from each MeshGroup (one for each material).

For each material (i.e. shader program), I determine all of the attributes it has as inputs when it is compiled, and store those in a ShaderBindingSet. The information stored is very similar to your AttributeDescriptor. This ShaderBindingSet also contains a description of the texture and constant uniforms for the shader program. The ShaderBindingSet is initialized with default values for constants when the program I/O bindings are defined, and typically nullptr for buffers and textures.

Then, when rendering a mesh, the renderer determines the final set of shader inputs. It iterates over the bindings in the ShaderBindingSet, and then provides buffers from the Mesh's VertexBufferSet to the shader, using the AttributeName to match mesh buffers to shader inputs. It also provides to the shader any textures and uniform constants that are part of the mesh. Mesh also has a TextureSet and ConstantSet, which are basically lists of TextureName→Texture* and ConstantName→value pairs, similar to the VertexBufferSet. These allow a Mesh to override the material.

The advantage of doing it this way is that mesh data is completely decoupled from materials, shaders, and rendering, and it allows you to arbitrarily swap buffers/textures on a mesh due to the late-binding. It also supports any number or type of attributes.

Also, for the love of god don't return vectors by value! It's better to stick to raw pointers in interfaces because they are the lowest common denominator (e.g. what if you decide later to switch from vector storage to another array class, you would be screwed).

I also see zero need for any sort of virtual functions for this stuff. The only place I use virtual functions is to wrap the buffers/textures/shaders in a common interface independent of graphics API, to keep graphics API from leaking in the rest of the code, and allow swapping API at runtime.

@Aressera Thanks for your reply! I have a question tho.

…it is a set of arbitrary buffers that each have an AttributeName

What are you using as an arbitrary buffer? Is it a list of floats, or are you using a different abstraction?

for the love of god don't return vectors by value!

Yeah, I'm not a huge fan of it either (it's also really expensive). About the pointer suggestion: I know it's really cheap to pass around and it doesn't change the ownership of the data, but isn't it confusing? For instance, if we use vec3*, are we talking about an array of vectors or a single vector? In other languages we might code against an interface/protocol (i.e. Iterator<vec3>), but I don't see that being used in C++, and I think it doesn't work in an interface context because of dynamic polymorphism, right?

Charles117 said:
What are you using as an arbitrary buffer?, is it a list of floats or are you using a different abstraction?

I have a Buffer class that stores arbitrary data (array of bytes), with a type (e.g. float, Vector3f, etc.), size (number of type), and capacity (number of bytes).

Charles117 said:
About the pointer suggestion, I know that its really cheap to pass it around and it doesn't change the ownership of the data but I think its confusing isn't it? For instance, if we use vec3* are we talking about an array of vectors or a single vector?

Hence why I wrap the data in a Buffer object, and return a pointer to that, so that the Buffer can carry that information.

@Aressera Last question (hopefully)

arbitrary data (array of bytes)

If I want to retrieve the underlying data from the buffer as a supplied type (e.g. convert an array of bytes to an array of vec3), what strategy do you recommend? (If you have external resources like articles or posts on working with bytes, that would be awesome.) I remember that working with bytes is hard (alignment) and sometimes not portable (because of endianness), so I wonder whether you made your own conversion functions, or casting works directly, etc.

Thanks for all the help!

Charles117 said:
what strategy do you recommend me to use?

I think you're overthinking it. Just static_cast<>() the buffer pointer to whatever type you need, write the data, then glBufferData() to upload the raw pointer. The buffer should already be aligned when the memory is allocated. The data is always in native endianness. Even better, you can memory-map the buffer (glMapBuffer) and write the data directly to the GPU buffer rather than needing an extra copy in CPU memory.

memory-map the buffer (glMapBuffer) …

Ooooh didn't know that one, will take a look at it!

@Aressera thanks a lot for answering, your explanations really helped me out!

This topic is closed to new replies.
