
Relation between descriptors and resources

Started August 07, 2019 10:15 PM
3 comments, last by zhangdoa 5 years, 5 months ago

Hi,

I have more of a broad abstraction design question that probably won't have one ultimate answer.
In many D3D12 samples (also in UE4), a buffer/texture is usually wrapped by a class that contains data about the width, height, stride, element count, .... and when the resource is created, a set of descriptors (SRV, UAV per mip, CBV, ...) are usually created depending on how the resource is going to be used (Rendertarget, Shader resource, Unordered Access, ...) and these descriptors are owned by that class. To give an example of the pattern that you usually come across (pseudo-code)


class Texture
{
	void CreateTexture(int width, int height, int mipLevels)
	{
		CreateResource(width, height, ...);
		m_Uav = AllocateDescriptors(mipLevels);
		for (int i = 0; i < mipLevels; ++i)
			CreateUAV(m_Uav.Offset(i));
		CreateSRV();
		CreateRTV();
		...
	}

	int Width, Height; // more properties ...
	D3D12_CPU_DESCRIPTOR_HANDLE m_Rtv = {};
	D3D12_CPU_DESCRIPTOR_HANDLE m_Uav = {};
	D3D12_CPU_DESCRIPTOR_HANDLE m_Srv = {};
};

However, I find myself in a situation where I need different types of views for the same resource, and there is no catch-all solution for it.
For example, say I create a TextureCube and an SRV for it.
You could create 

  • a TextureCube SRV
  • a Texture2DArray SRV
  • Several Texture2D SRVs

Which one you need depends entirely on usage, and more than one view may be needed for the same resource.
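To make the three options concrete, here is a minimal sketch. The `ViewDimension` enum and `SrvDesc` struct are stand-ins mirroring the shape of D3D12's `D3D12_SHADER_RESOURCE_VIEW_DESC`; real code would fill that struct from `<d3d12.h>` and call `ID3D12Device::CreateShaderResourceView` with it.

```cpp
#include <cassert>

// Stand-in types; in real D3D12 code these correspond to
// D3D12_SRV_DIMENSION and D3D12_SHADER_RESOURCE_VIEW_DESC.
enum class ViewDimension { TextureCube, Texture2DArray, Texture2D };

struct SrvDesc
{
    ViewDimension dimension;
    int firstArraySlice; // first face the view covers
    int arraySize;       // number of faces the view covers
};

// Three different interpretations of the same 6-face cubemap resource:
SrvDesc MakeCubeSrv()         { return { ViewDimension::TextureCube,    0,    6 }; }
SrvDesc MakeArraySrv()        { return { ViewDimension::Texture2DArray, 0,    6 }; }
SrvDesc MakeFaceSrv(int face) { return { ViewDimension::Texture2D,      face, 1 }; }
```

The resource itself is identical in all three cases; only the descriptor differs.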
The same goes for a depth buffer: if you want to use it both for writing and as read-only, you need two separate descriptors (one with the READ_ONLY flag set).
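A sketch of the depth-buffer case, again with stand-in types: in real D3D12 code the second view would be a `D3D12_DEPTH_STENCIL_VIEW_DESC` with `D3D12_DSV_FLAG_READ_ONLY_DEPTH` set.

```cpp
#include <cassert>

// Stand-ins mirroring D3D12's DSV flags.
enum DsvFlags { DsvFlagNone = 0, DsvFlagReadOnlyDepth = 1 };

struct DsvDesc
{
    int flags;
};

// Two descriptors over one depth resource: the writable one is bound
// while laying down depth; the read-only one allows the same resource
// to be bound as a DSV and an SRV simultaneously.
DsvDesc MakeWritableDsv() { return { DsvFlagNone }; }
DsvDesc MakeReadOnlyDsv() { return { DsvFlagReadOnlyDepth }; }
```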

I believe what makes views/descriptors so powerful is that they provide different ways to interpret the same data.
Having this "class wrapper" pretty much breaks that flexibility, because the descriptors are created the same way for every resource type you define in your abstraction, and it is impossible to cover all uses.
Obviously the solution would be to decouple the resource from the view, but I wonder: how is this usually done?
Is one solution to create these descriptors on the fly, possibly even every single frame?
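One common middle ground between "create everything up front" and "create every frame" is a view cache keyed by the view description, so a descriptor is written only the first time a particular view is requested. Everything below is a hypothetical sketch (the `ViewKey`, `ViewCache`, and the integer "slot" standing in for a real descriptor allocation are all made up for illustration):

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <unordered_map>

// Hypothetical key describing a view: which resource, what kind of
// view, and which subresource range it covers.
struct ViewKey
{
    std::uint64_t resourceId;
    int viewType;   // e.g. SRV / UAV / RTV / DSV
    int firstSlice;
    int sliceCount;

    bool operator==(const ViewKey& o) const
    {
        return resourceId == o.resourceId && viewType == o.viewType &&
               firstSlice == o.firstSlice && sliceCount == o.sliceCount;
    }
};

struct ViewKeyHash
{
    size_t operator()(const ViewKey& k) const
    {
        size_t h = std::hash<std::uint64_t>()(k.resourceId);
        h = h * 31 + static_cast<size_t>(k.viewType);
        h = h * 31 + static_cast<size_t>(k.firstSlice);
        h = h * 31 + static_cast<size_t>(k.sliceCount);
        return h;
    }
};

// On first request a descriptor slot is allocated and written; after
// that the cached slot is returned, so requesting views "on the fly"
// costs a hash lookup per call rather than a descriptor write per frame.
class ViewCache
{
public:
    int GetOrCreate(const ViewKey& key)
    {
        auto it = m_Cache.find(key);
        if (it != m_Cache.end())
            return it->second;
        int slot = m_NextSlot++; // stand-in for a real descriptor allocation
        m_Cache.emplace(key, slot);
        return slot;
    }

private:
    std::unordered_map<ViewKey, int, ViewKeyHash> m_Cache;
    int m_NextSlot = 0;
};
```

With this, code can ask for whatever view it needs at bind time without the resource wrapper having to anticipate every combination.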

I suppose this is not specific to DirectX 12 and pretty much applies to any graphics API.

Thanks,

Simon

In my engine, we deal with this situation by allowing the user to create more than one "Texture" wrapper object that refers to the same resource. We do this by allowing you to create a texture object and pass the one that you're "aliasing" as a parameter e.g.


Texture* Device::CreateTexture( ..... );
Texture* Device::CreateTextureAlias( Texture& original, .... );

The "Texture" objects that are created by the second function have an internal flag set, which identifies them as an alias and not the actual owner of the resource. When they're destroyed, they do not release the resource. Also, it's up to the user of my API to ensure that they do not continue to use any of the "alias" texture objects after the "original" texture object has been destroyed (as this does release the resource!). You could relax that restriction by using reference-counting, etc, internally.

So, to render a cubemap, I first create a cubemap texture. Then I'd create 6 more texture objects using the "CreateAlias" version, pass the cubemap as the "original", and specify an array slice offset of 0/1/2/3/4/5 and a slice count of 1.
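A minimal sketch of the ownership scheme described above; the `GpuResource` struct and the constructor signatures are made up for illustration (the real API presumably goes through `Device::CreateTexture`/`CreateTextureAlias`):

```cpp
#include <cassert>

// Hypothetical resource with a manual release, standing in for e.g.
// an ID3D12Resource.
struct GpuResource { bool released = false; };

class Texture
{
public:
    // Owning constructor: this texture owns the resource.
    explicit Texture(GpuResource* res) : m_Resource(res), m_IsAlias(false) {}

    // Alias constructor: shares the original's resource but does not
    // own it, and can cover a different subresource range.
    Texture(const Texture& original, int firstSlice, int sliceCount)
        : m_Resource(original.m_Resource), m_IsAlias(true),
          m_FirstSlice(firstSlice), m_SliceCount(sliceCount) {}

    ~Texture()
    {
        if (!m_IsAlias) // only the owner releases the resource
            m_Resource->released = true;
    }

    GpuResource* m_Resource;
    bool m_IsAlias;
    int m_FirstSlice = 0;
    int m_SliceCount = 0;
};
```

For the cubemap case, the six per-face aliases would be constructed with `firstSlice` = 0..5 and `sliceCount` = 1.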

8 hours ago, Hodgman said:

In my engine, we deal with this situation by allowing the user to create more than one "Texture" wrapper object that refers to the same resource. We do this by allowing you to create a texture object and pass the one that you're "aliasing" as a parameter e.g.



Texture* Device::CreateTexture( ..... );
Texture* Device::CreateTextureAlias( Texture& original, .... );

The "Texture" objects that are created by the second function have an internal flag set, which identifies them as an alias and not the actual owner of the resource. When they're destroyed, they do not release the resource. Also, it's up to the user of my API to ensure that they do not continue to use any of the "alias" texture objects after the "original" texture object has been destroyed (as this does release the resource!). You could relax that restriction by using reference-counting, etc, internally.

So, to render a cubemap, I first create a cubemap texture. Then I'd create 6 more texture objects using the "CreateAlias" version, pass the cubemap as the "original", and specify an array slice offset of 0/1/2/3/4/5 and a slice count of 1.

Thanks for the clear explanation!
That sounds like a reasonable solution and I can see it working in the majority of cases.
However, I imagine there could be a lot of bookkeeping if the so-called texture alias can modify (e.g. resize) the texture resource itself, as it would have to update the properties on all Texture objects that reference that resource, not to mention the metadata duplication. Or is this alias a separate type with limited functionality?

Thanks :) 
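One way to sidestep the bookkeeping concern raised above (a sketch, not how either engine actually does it) is to keep the mutable metadata in a single shared block that every alias references, e.g. via `std::shared_ptr`; a resize then updates one place and the resource is released when the last reference dies:

```cpp
#include <cassert>
#include <memory>

// Shared state lives in exactly one place; the GPU resource handle
// would live here too in real code.
struct TextureData
{
    int width = 0;
    int height = 0;
};

class TextureView
{
public:
    explicit TextureView(std::shared_ptr<TextureData> data)
        : m_Data(std::move(data)) {}

    // A resize through any view is immediately visible through all
    // views, because they share the same TextureData.
    void Resize(int w, int h) { m_Data->width = w; m_Data->height = h; }
    int Width() const { return m_Data->width; }

private:
    std::shared_ptr<TextureData> m_Data;
};
```

This also relaxes the "don't use aliases after the original dies" restriction, at the cost of a reference-count per texture.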

In my engine's graphics module I designed a ResourceBinder class to abstract all the views/descriptors across the underlying APIs. The actual mesh, texture and buffer wrappers are just Mesh/Texture/GPUBufferDataComponent classes (which may be unified into one GPUMemoryComponent in the future) that only own an OpenGL/DirectX/Vulkan resource pointer or handle and a few ResourceBinder references. There is no ActivateTexture or PSSetSRV or BindSomethingElse interface; instead, user-level code only sees an ActivateResourceBinder call, and all the polymorphism/implementation details are resolved at runtime (or maybe at compile time in the future).

If you're familiar with DirectX (especially DX12) or Vulkan, I assume you've managed heap video memory explicitly. It's because of this trend that I question why we bother with blob classes like "Mesh"/"Texture"/"ConstantStructuredByteOffsetBlaBlaBuffer" at all: what we do every day is upload some bytes to GPU memory and issue computation tasks to the GPU, and what we ultimately need is just a raw GPU memory address and some different "views" of that memory for different usages. What we need to change is our OpenGL 2.1 mindset!
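The "same bytes, different views" idea can be shown on the CPU side with plain memory (a sketch only; GPU views additionally encode format, dimension, and subresource range): the same region of memory read back under a different type, using `memcpy` to avoid aliasing undefined behavior.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// One region of memory, two "views": the bytes of a float buffer
// reinterpreted as 32-bit unsigned integers.
std::vector<std::uint32_t> ViewAsUint32(const std::vector<float>& floats)
{
    std::vector<std::uint32_t> out(floats.size());
    std::memcpy(out.data(), floats.data(), floats.size() * sizeof(float));
    return out;
}
```

The data never moves; only the interpretation changes, which is exactly what a descriptor does for GPU memory.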

Rendering client code example:


//Start to record commands...
auto l_renderingServer = g_pModuleManager->getRenderingServer();

//m_SDC, SamplerDataComponent, a sampler wrapper
//m_RPDC, RenderPassDataComponent, an aggregation of render target textures, pipeline state object and shader object
//l_CameraGBDC, GPUBufferDataComponent, our friends whose names are UBO/SSBO/ConstantBuffer/StructuredBuffer...
l_renderingServer->CommandListBegin(m_RPDC, 0);
l_renderingServer->ActivateResourceBinder(m_RPDC, ShaderStage::Pixel, m_SDC->m_ResourceBinder, 17, 0);
l_renderingServer->ActivateResourceBinder(m_RPDC, ShaderStage::Pixel, l_CameraGBDC->m_ResourceBinder, 0, 0, Accessibility::ReadOnly);
l_renderingServer->ActivateResourceBinder(m_RPDC, ShaderStage::Pixel, SunShadowPass::GetRPDC()->m_RenderTargetsResourceBinders[0], 13, 7);
l_renderingServer->DispatchDrawCall(m_RPDC, a_mesh_from_nowhere);
// Deactivate ResourceBinder...
l_renderingServer->CommandListEnd(m_RPDC);
//Execute commands when it's a good day...

 

And the ResourceBinder classes:


class IResourceBinder
{
public:
	ResourceBinderType m_ResourceBinderType = ResourceBinderType::Sampler;
	Accessibility m_GPUAccessibility = Accessibility::ReadOnly;
	size_t m_ElementCount = 0;
	size_t m_ElementSize = 0;
	size_t m_TotalSize = 0;
};

class DX12ResourceBinder : public IResourceBinder
{
public:
	D3D12_CPU_DESCRIPTOR_HANDLE m_CPUHandle = {};
	D3D12_GPU_DESCRIPTOR_HANDLE m_GPUHandle = {};
};

// Using a union or a template class would be a little better
class DX11ResourceBinder : public IResourceBinder
{
public:
	ID3D11SamplerState* m_Sampler = nullptr;
	ID3D11ShaderResourceView* m_SRV = nullptr;
	ID3D11UnorderedAccessView* m_UAV = nullptr;
};

class GLResourceBinder : public IResourceBinder
{
public:
	GLuint m_Handle = 0;
};

 

The real resources classes:


class GPUBufferDataComponent // Or TextureDataComponent, etc.
{
public:
	// Sadly we can't change the heap type freely at runtime at present
	Accessibility m_GPUAccessibility = Accessibility::ReadOnly;
	IResourceBinder** m_ResourceBinders = nullptr;
	/* Trivial members here...
	*/
};

class DX12GPUBufferDataComponent : public GPUBufferDataComponent // Similar for DX11/OpenGL/Vulkan and even Metal; not an API problem at all
{
public:
	ID3D12Resource* m_ResourceHandle = nullptr;
	/* The Descs...
	*/
};

 

With this kind of design you can create any kind, and any number, of ResourceBinders over the same region of GPU memory: a "cubemap", a 6-slice 2D-array "texture", or a "VertexBuffer". The only limitation is the underlying API you're targeting (we are lucky, and we are unlucky :(). And of course there is the cost: every abstraction has a cost, as does every chase after a general and flexible solution, and every piece of naive code like what I showed above; these are the costs that haunt your mind at midnight before you release your Minecraft. Which made me wonder again: why bother? Why not just stick with one API, tightly and directly, without any daydream of a free wrapping lunch?

 

This topic is closed to new replies.
