
Failing to create a Heap for 3D Render Target

Started by March 29, 2018 09:19 PM
12 comments, last by NikiTo 6 years, 10 months ago

		D3D12_RESOURCE_DESC text_resource_desc = {
			D3D12_RESOURCE_DIMENSION_TEXTURE3D,	// Dimension
			0,	// Alignment (0 = default)
			width,	// Width, 256 aligned
			height,	// Height, 512 aligned
			32,	// DepthOrArraySize
			0,	// MipLevels (0 = full mip chain)
			DXGI_FORMAT_R32_FLOAT,	// Format
			{	// DXGI_SAMPLE_DESC
				1,	// Count
				0	// Quality
			},
			D3D12_TEXTURE_LAYOUT_UNKNOWN,	// Layout
			D3D12_RESOURCE_FLAG_ALLOW_RENDER_TARGET	// Flags
		};

		D3D12_RESOURCE_ALLOCATION_INFO resalocInfo = device->GetResourceAllocationInfo(
			0, // for single adapter(no sub-adapters)
			1, // num of resource descriptors
			&text_resource_desc
		);

		D3D12_HEAP_DESC heapDesc = {};
		heapDesc.SizeInBytes = resalocInfo.SizeInBytes;
		heapDesc.Properties = device->GetCustomHeapProperties(1, D3D12_HEAP_TYPE_DEFAULT);
		heapDesc.Alignment = D3D12_DEFAULT_RESOURCE_PLACEMENT_ALIGNMENT;
		heapDesc.Flags = D3D12_HEAP_FLAG_NONE;

		Microsoft::WRL::ComPtr<ID3D12Heap> heap;

		if (FAILED(device->CreateHeap( &heapDesc, IID_PPV_ARGS(&heap) ))) {
			std::cout << "Failed to create 3D heap" << std::endl;	// <--
		}
		else {
			std::cout << "Successfully created 3D heap" << std::endl;
		}

 

What about the debug layer? Any output hinting at the reason why?


I get this: HRESULT: 0x80070057 (E_INVALIDARG)

I mean using D3D12GetDebugInterface and calling EnableDebugLayer on the returned interface. There is a 99.9% chance that the log will display the real reason for the failure.

I don't have the debug layer enabled; I have never used it so far. I don't think the debug layer is explained in Frank Luna's book. I will try to find some code example where rendering to a 3D render target is done.

I think it is running out of memory, because when I reduce the height or width, it succeeds in creating the heap.

5 minutes ago, NikiTo said:

I think it is running out of memory, because when I reduce the height or width, it succeeds in creating the heap.

Out of memory is a different error code, unless you were asking for a size above the Direct3D limits. With D3D12, you should run with the debug layer most of the time and solve any issue as soon as possible; it is life saving!

It is as simple as installing the graphics tools (Start menu > optional features > Graphics Tools), then in your application, at the beginning, calling D3D12GetDebugInterface and enabling the layer from the returned interface.
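A minimal sketch of what that looks like in code (assuming C++ with the WRL ComPtr helper), placed before the device is created:

		#include <d3d12.h>
		#include <wrl/client.h>

		// Enable the D3D12 debug layer; this must happen before D3D12CreateDevice,
		// otherwise API usage on that device will not be validated.
		Microsoft::WRL::ComPtr<ID3D12Debug> debugController;
		if (SUCCEEDED(D3D12GetDebugInterface(IID_PPV_ARGS(&debugController))))
		{
			debugController->EnableDebugLayer();	// validation messages go to the debugger output
		}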

 

If you have never done it, even if you are at the hello-world-triangle stage, the chance that you have dozens of errors and warnings is pretty high. And consider that the layer is not a magical tool; many errors are not caught either!

 

I found that 3D textures have a limit of 2048 pixels per dimension. I was trying to give it a height greater than 2048. That was the problem.
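For reference, that limit is exposed by the D3D12 headers as D3D12_REQ_TEXTURE3D_U_V_OR_W_DIMENSION (2048), so a guard along these lines (just a sketch, reusing the width/height variables from the snippet above) makes the failure obvious before any heap call:

		// Texture3D dimensions are capped at 2048 texels per axis (U, V and W).
		if (width > D3D12_REQ_TEXTURE3D_U_V_OR_W_DIMENSION ||
			height > D3D12_REQ_TEXTURE3D_U_V_OR_W_DIMENSION)
		{
			std::cout << "Texture3D dimension exceeds the 2048 limit" << std::endl;
		}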

Sorry for asking, but googling DirectX 12 sucks, because it mostly shows me results about DirectX 11.

I wanted to ask: if I use a Texture2D array as the render target, would it do the same job? It sucks to divide the height into chunks of 2048 each; it would make my CPU-side code much more complex. I was intending to use the GS to distribute the data to the slices of the 3D texture (but it failed). Can I distribute the rendering of data across the slices of a Texture2D array in the same way, using the GS? (I need more than 8 slices to render to.)

(My shader does a lot of reads, and I don't want to call DrawInstanced n times for n RTVs, because it would read the same data each time, which is redundant. I want to read once and render to n textures in the same shader.)

Can you tell us the big picture of what you're trying to implement? 

You could bind your texture as a UAV instead of RTV if you want to be able to write into the whole thing arbitrarily. 
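For example, a UAV covering the whole Texture3D could look roughly like this (only a sketch; it assumes the resource was created with D3D12_RESOURCE_FLAG_ALLOW_UNORDERED_ACCESS, and texture / cpuDescriptorHandle are placeholder names):

		// Sketch: a UAV over every depth slice of the R32_FLOAT Texture3D,
		// letting a compute or pixel shader write to arbitrary (x, y, z) texels.
		D3D12_UNORDERED_ACCESS_VIEW_DESC uavDesc = {};
		uavDesc.Format = DXGI_FORMAT_R32_FLOAT;
		uavDesc.ViewDimension = D3D12_UAV_DIMENSION_TEXTURE3D;
		uavDesc.Texture3D.MipSlice = 0;
		uavDesc.Texture3D.FirstWSlice = 0;
		uavDesc.Texture3D.WSize = UINT(-1);	// -1 means all depth slices from FirstWSlice onwards
		device->CreateUnorderedAccessView(texture.Get(), nullptr, &uavDesc, cpuDescriptorHandle);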

 

Btw, as mentioned above, you really should get the debug / validation layer working. Invalid use of the D3D12 API is undefined behaviour and can do nasty things -- e.g. It might work fine on your PC but crash the GPU on someone else's PC. The Validation layer catches the vast majority of mistakes and provides extremely helpful human-readable descriptions of the errors. I would go as far as to say that it is mandatory for development. 

He's trying to implement mesh voxelization or a fluid sim, or I am not a graphics engineer :) Sadly for you, geometry-expansion shaders are usually a performance pain in the ass and usually not the best pick for anything.

 

I would second Hodgman here; a little more detail would help to design an efficient solution. Why did you try to create a Texture3D if a Texture2D array is enough? On some hardware, the render target array index (SV_RenderTargetArrayIndex) can be output by the vertex shader, and the bandwidth of reading a mesh is unlikely to matter when filling that much texture!
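To illustrate the Texture2D-array route (again only a sketch; textureArray, arraySize and rtvHandle are placeholder names): one RTV can cover every slice, and the vertex or geometry shader then routes each primitive with SV_RenderTargetArrayIndex:

		// Sketch: a single RTV spanning all slices of a Texture2D array render target.
		// The shader picks the destination slice per primitive via SV_RenderTargetArrayIndex.
		D3D12_RENDER_TARGET_VIEW_DESC rtvDesc = {};
		rtvDesc.Format = DXGI_FORMAT_R32_FLOAT;
		rtvDesc.ViewDimension = D3D12_RTV_DIMENSION_TEXTURE2DARRAY;
		rtvDesc.Texture2DArray.MipSlice = 0;
		rtvDesc.Texture2DArray.FirstArraySlice = 0;
		rtvDesc.Texture2DArray.ArraySize = arraySize;	// can be well over 8 slices
		rtvDesc.Texture2DArray.PlaneSlice = 0;
		device->CreateRenderTargetView(textureArray.Get(), &rtvDesc, rtvHandle);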

This topic is closed to new replies.
