So, my simple (not actually simple) goal is to create a junky pong game. I'm not really interested in a third dimension (3D) yet, but I was questioning whether or not to bother with an ortho view. Can't I just set my Z axis to 0 at all times for all my objects? Voilà, effectively 2D, yes?
I've followed some basic tutorials on DirectX at DirectXProgramming.com, and I've come up with a basic Renderer class that manages the 3D stuff.
#ifndef RENDERER_H
#define RENDERER_H
#include <d3d11.h>
#include <d3dx11.h>
#include <d3dx10.h>
#include "Vertex.h"
// include the Direct3D Library file
#pragma comment (lib, "d3d11.lib")
#pragma comment (lib, "d3dx11.lib")
#pragma comment (lib, "d3dx10.lib")
// define the screen resolution
#define SCREEN_WIDTH 1920
#define SCREEN_HEIGHT 1080
class Renderer
{
public:
    Renderer(HWND hWnd);
    ~Renderer();
    // function prototypes
    void Update();
private:
    // global declarations
    IDXGISwapChain *swapchain; // the pointer to the swap chain interface
    ID3D11Device *dev; // the pointer to our Direct3D device interface
    ID3D11DeviceContext *devcon; // the pointer to our Direct3D device context
    ID3D11RenderTargetView *backbuffer; // the pointer to our Direct3D backbuffer
    ID3D11VertexShader *pVS; // the vertex shader
    ID3D11PixelShader *pPS; // the pixel shader
    ID3D11Buffer *pVBuffer; // the vertex buffer
    ID3D11InputLayout *pLayout; // the input layout
    ID3D11DepthStencilState *m_depthDisabledStencilState;
    D3D11_DEPTH_STENCIL_DESC depthDisabledStencilDesc;
    D3DXMATRIX m_orthoMatrix;
    void InitD3D(HWND hWnd); // sets up and initializes Direct3D
    void InitPipeline(void);
    void InitGraphics(void);
};
#endif
I saw a tutorial here: http://rastertek.com/dx11tut11.html
But honestly, before I get too deep into that, I wanted to ask you all if...
1. Is there a better tutorial out there?
2. Is there something a little more basic? I mean, to help me understand what the heck ortho is in relation to the following things:
A. Device
B. Device Context
C. Other graphics lingo I'm not super familiar with
I began trying to rip some of the code from the tutorial mentioned, and I was wondering if you all can help me understand what these things are?

ID3D11DepthStencilState *m_depthDisabledStencilState;
D3D11_DEPTH_STENCIL_DESC depthDisabledStencilDesc;
D3DXMATRIX m_orthoMatrix;
Also, I have some added code to my InitD3D function, maybe you all can help me understand what's going on here?
//Set up Ortho Stuff
D3DXMatrixOrthoLH(&m_orthoMatrix, SCREEN_WIDTH, SCREEN_HEIGHT, 0, 1);
ZeroMemory(&depthDisabledStencilDesc, sizeof(depthDisabledStencilDesc));
// Create Depth Stencil State with No Z Axis (Depth = FALSE)
depthDisabledStencilDesc.DepthEnable = false;
depthDisabledStencilDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
depthDisabledStencilDesc.DepthFunc = D3D11_COMPARISON_LESS;
depthDisabledStencilDesc.StencilEnable = true;
depthDisabledStencilDesc.StencilReadMask = 0xFF;
depthDisabledStencilDesc.StencilWriteMask = 0xFF;
depthDisabledStencilDesc.FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
depthDisabledStencilDesc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_INCR;
depthDisabledStencilDesc.FrontFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
depthDisabledStencilDesc.FrontFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
depthDisabledStencilDesc.BackFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
depthDisabledStencilDesc.BackFace.StencilDepthFailOp = D3D11_STENCIL_OP_DECR;
depthDisabledStencilDesc.BackFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
depthDisabledStencilDesc.BackFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
// create the depth stencil state
dev->CreateDepthStencilState(&depthDisabledStencilDesc, &m_depthDisabledStencilState);
// set the depth stencil state
devcon->OMSetDepthStencilState(m_depthDisabledStencilState, 0);
Finally, I'm not sure what my next step is. Currently I have my program rendering a square:

void Renderer::InitGraphics()
{
    unsigned long* indices;
    // create a square using the VERTEX struct
    Vertex OurVertices[] =
    {
        { D3DXVECTOR3(-1.0f,-1.0f,0.0f), D3DXCOLOR(1.0f, 0.0f, 0.0f, 1.0f) },
        { D3DXVECTOR3(-1.0f,1.0f,0.0f), D3DXCOLOR(0.0f, 1.0f, 0.0f, 1.0f) },
        { D3DXVECTOR3(1.0f,1.0f,0.0f), D3DXCOLOR(0.0f, 0.0f, 1.0f, 1.0f) },
        { D3DXVECTOR3(-1.0f,-1.0f,0.0f), D3DXCOLOR(1.0f, 0.0f, 0.0f, 1.0f) },
        { D3DXVECTOR3(1.0f,1.0f,0.0f), D3DXCOLOR(0.0f, 1.0f, 0.0f, 1.0f) },
        { D3DXVECTOR3(1.0f,-1.0f,0.0f), D3DXCOLOR(0.0f, 0.0f, 1.0f, 1.0f) }
    };
    // create the vertex buffer
    D3D11_BUFFER_DESC bd;
    ZeroMemory(&bd, sizeof(bd));
    bd.Usage = D3D11_USAGE_DYNAMIC; // write access by CPU and GPU
    bd.ByteWidth = sizeof(Vertex) * 6; // size is the Vertex struct * 6
    bd.BindFlags = D3D11_BIND_VERTEX_BUFFER; // use as a vertex buffer
    bd.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE; // allow CPU to write in buffer
    dev->CreateBuffer(&bd, NULL, &pVBuffer); // create the buffer
    // copy the vertices into the buffer
    D3D11_MAPPED_SUBRESOURCE ms;
    devcon->Map(pVBuffer, NULL, D3D11_MAP_WRITE_DISCARD, NULL, &ms); // map the buffer
    memcpy(ms.pData, OurVertices, sizeof(OurVertices)); // copy the data
    devcon->Unmap(pVBuffer, NULL); // unmap the buffer
}

void Renderer::Update(void)
{
    // clear the back buffer to a deep blue
    devcon->ClearRenderTargetView(backbuffer, D3DXCOLOR(0.0f, 0.2f, 0.4f, 1.0f));
    // select which vertex buffer to display
    UINT stride = sizeof(Vertex);
    UINT offset = 0;
    devcon->IASetVertexBuffers(0, 1, &pVBuffer, &stride, &offset);
    // select which primitive type we are using
    devcon->IASetPrimitiveTopology(D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    // draw the vertex buffer to the back buffer
    devcon->Draw(6, 0);
    // switch the back buffer and the front buffer
    swapchain->Present(0, 0);
}
Is this drawing in ortho now?
I need a human to help me put it all together!
Last question, do I really need ortho? Can't I just do 3d in 2d? Like, just not use the Z coordinates?
Beginning DirectX, Ortho?
For 2D, you can use a perspective projection with a fixed z (typically 0) for all geometry (as you suggest), or you can use an orthographic projection. Either will work.
If the only reason you're shying away from orthographic is lack of familiarity, then I'd recommend pursuing orthographic, as it may offer some advantages over perspective in a purely 2D setting, and is arguably more conceptually clear and correct for that context.
The differences between the two approaches may be less than you're thinking. Ultimately the projection is just a transform. Although choice of projection can have implications elsewhere, it's entirely or mostly independent of the various other things you mentioned (devices, depth/stencil buffers, etc.).
A typical setup for pure 2D would be orthographic projection with the painter's algorithm and depth testing off. There are other ways to do it as well, such as perspective with the painter's algorithm, or orthographic with depth testing and varying z coordinates for sorting.
Also, keep in mind that if you don't need the z coordinates, you don't have to use them at all. You can just use 2D vectors for vertex positions.
28 minutes ago, Zakwayda said: For 2D, you can use a perspective projection with a fixed z (typically 0) for all geometry (as you suggest), or you can use an orthographic projection. Either will work.
Thanks very much for your reply!
I am trying to get ortho to work. I'll also try removing the Z's from my vertex class to see if everything still works.
Are you able to comment on the code at all? Am I on the right track? Am I rendering in Ortho now?
Is there another step I need to take?
I tried rendering without the Z's, and it works!
// create a square using the VERTEX struct
Vertex OurVertices[] =
{
{ D3DXVECTOR2(-1.0f,-1.0f), D3DXCOLOR(1.0f, 0.0f, 0.0f, 1.0f) },
{ D3DXVECTOR2(-1.0f,1.0f), D3DXCOLOR(0.0f, 1.0f, 0.0f, 1.0f) },
{ D3DXVECTOR2(1.0f,1.0f), D3DXCOLOR(0.0f, 0.0f, 1.0f, 1.0f) },
{ D3DXVECTOR2(-1.0f,-1.0f), D3DXCOLOR(1.0f, 0.0f, 0.0f, 1.0f) },
{ D3DXVECTOR2(1.0f,1.0f), D3DXCOLOR(0.0f, 1.0f, 0.0f, 1.0f) },
{ D3DXVECTOR2(1.0f,-1.0f), D3DXCOLOR(0.0f, 0.0f, 1.0f, 1.0f) }
};
Also, is this the function that actually sets the projection to ortho?
// set the depth stencil state
devcon->OMSetDepthStencilState(m_depthDisabledStencilState, 0);
Quote: Also, is this the function that actually sets the projection to ortho?
That function sets up the state for depth and stencil testing. In what you've posted so far I don't see code that actually establishes the projection transform, so I'm assuming it's in code you haven't posted.
Note that this:
D3DXMatrixOrthoLH(&m_orthoMatrix, SCREEN_WIDTH, SCREEN_HEIGHT, 0, 1);
Just sets up a matrix, and doesn't affect the pipeline in any direct way.
Two things you've asked about specifically are the projection transform and depth/stencil, so I'll just address those specifically. Although projection and depth/stencil can relate to each other in various ways, they perform different functions, reside at different stages in the pipeline, and are configured independently.
I don't know how much code you have, but if you're still not sure if you're doing things correctly, maybe you could post your code in its entirety, or if it's too much, host it externally somewhere. (Something that can be useful sometimes is to create a minimal working example contained in a single source file, as that can be easier for others to look over.)
9 hours ago, Zakwayda said: That function sets up the state for depth and stencil testing. In what you've posted so far I don't see code that actually establishes the projection transform, so I'm assuming it's in code you haven't posted.
I will post the code! Thank you for taking interest in my silly cause. I don't usually post all of it unless asked. I'm not sure I've done things correctly, or at least completely; I might have missed some steps.
Renderer.h:
#ifndef RENDERER_H
#define RENDERER_H
#include <d3d11.h>
#include <d3dx11.h>
#include <d3dx10.h>
#include "Vertex.h"
#include "Renderable_Object.h"
// include the Direct3D Library file
#pragma comment (lib, "d3d11.lib")
#pragma comment (lib, "d3dx11.lib")
#pragma comment (lib, "d3dx10.lib")
// define the screen resolution
#define SCREEN_WIDTH 1920
#define SCREEN_HEIGHT 1080
class Renderer
{
public:
Renderer(HWND hWnd);
~Renderer();
// function prototypes
void Update();
private:
// global declarations
IDXGISwapChain * swapchain; // the pointer to the swap chain interface
ID3D11Device *dev; // the pointer to our Direct3D device interface
ID3D11DeviceContext *devcon; // the pointer to our Direct3D device context
ID3D11RenderTargetView *backbuffer; // the pointer to our Direct3D backbuffer
ID3D11VertexShader *pVS; // the vertex shader
ID3D11PixelShader *pPS; // the pixel shader
ID3D11Buffer *pVBuffer; // the vertex buffer
ID3D11InputLayout *pLayout; // the input layout
ID3D11DepthStencilState *m_depthDisabledStencilState;
D3D11_DEPTH_STENCIL_DESC depthDisabledStencilDesc;
D3DXMATRIX m_orthoMatrix;
void InitD3D(HWND hWnd); // sets up and initializes Direct3D
void InitPipeline(void);
void InitGraphics(void);
};
#endif
Renderer.cpp:
#include "Renderer.h"
Renderer::Renderer(HWND hWnd)
{
InitD3D(hWnd);
InitPipeline();
InitGraphics();
}
Renderer::~Renderer(void)
{
swapchain->SetFullscreenState(FALSE, NULL); // switch to windowed mode
// close and release all existing COM objects
pVS->Release();
pPS->Release();
swapchain->Release();
backbuffer->Release();
m_depthDisabledStencilState->Release();
dev->Release();
devcon->Release();
}
// this function initializes and prepares Direct3D for use
void Renderer::InitD3D(HWND hWnd)
{
// create a struct to hold information about the swap chain
DXGI_SWAP_CHAIN_DESC scd;
// clear out the struct for use
ZeroMemory(&scd, sizeof(DXGI_SWAP_CHAIN_DESC));
// fill the swap chain description struct
scd.BufferCount = 1; // one back buffer
scd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM; // use 32-bit color
scd.BufferDesc.Width = SCREEN_WIDTH; // set the back buffer width
scd.BufferDesc.Height = SCREEN_HEIGHT; // set the back buffer height
scd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT; // how swap chain is to be used
scd.OutputWindow = hWnd; // the window to be used
scd.SampleDesc.Count = 4; // how many multisamples
scd.Windowed = FALSE; // windowed/full-screen mode
scd.Flags = DXGI_SWAP_CHAIN_FLAG_ALLOW_MODE_SWITCH; // allow full-screen switching
// create a device, device context and swap chain using the information in the scd struct
D3D11CreateDeviceAndSwapChain(NULL,
D3D_DRIVER_TYPE_HARDWARE,
NULL,
NULL,
NULL,
NULL,
D3D11_SDK_VERSION,
&scd,
&swapchain,
&dev,
NULL,
&devcon);
// get the address of the back buffer
ID3D11Texture2D *pBackBuffer;
swapchain->GetBuffer(0, __uuidof(ID3D11Texture2D), (LPVOID*)&pBackBuffer);
// use the back buffer address to create the render target
dev->CreateRenderTargetView(pBackBuffer, NULL, &backbuffer);
pBackBuffer->Release();
// set the render target as the back buffer
devcon->OMSetRenderTargets(1, &backbuffer, NULL);
// Set the viewport
D3D11_VIEWPORT viewport;
ZeroMemory(&viewport, sizeof(D3D11_VIEWPORT));
viewport.TopLeftX = 0;
viewport.TopLeftY = 0;
viewport.Width = SCREEN_WIDTH;
viewport.Height = SCREEN_HEIGHT;
devcon->RSSetViewports(1, &viewport);
//Set up Ortho Stuff
D3DXMatrixOrthoLH(&m_orthoMatrix, SCREEN_WIDTH, SCREEN_HEIGHT, 0, 1);
ZeroMemory(&depthDisabledStencilDesc, sizeof(depthDisabledStencilDesc));
// Create Depth Stencil State with No Z Axis (Depth = FALSE)
depthDisabledStencilDesc.DepthEnable = false;
depthDisabledStencilDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
depthDisabledStencilDesc.DepthFunc = D3D11_COMPARISON_LESS;
depthDisabledStencilDesc.StencilEnable = true;
depthDisabledStencilDesc.StencilReadMask = 0xFF;
depthDisabledStencilDesc.StencilWriteMask = 0xFF;
depthDisabledStencilDesc.FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
depthDisabledStencilDesc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_INCR;
depthDisabledStencilDesc.FrontFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
depthDisabledStencilDesc.FrontFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
depthDisabledStencilDesc.BackFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
depthDisabledStencilDesc.BackFace.StencilDepthFailOp = D3D11_STENCIL_OP_DECR;
depthDisabledStencilDesc.BackFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
depthDisabledStencilDesc.BackFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
// create the depth stencil state
dev->CreateDepthStencilState(&depthDisabledStencilDesc, &m_depthDisabledStencilState);
// set the depth stencil state
devcon->OMSetDepthStencilState(m_depthDisabledStencilState, 0);
}
void Renderer::InitPipeline()
{
// load and compile the two shaders
ID3D10Blob *VS, *PS;
D3DX11CompileFromFile("Shaders.shader", 0, 0, "VShader", "vs_4_0", 0, 0, 0, &VS, 0, 0);
D3DX11CompileFromFile("Shaders.shader", 0, 0, "PShader", "ps_4_0", 0, 0, 0, &PS, 0, 0);
// encapsulate both shaders into shader objects
dev->CreateVertexShader(VS->GetBufferPointer(), VS->GetBufferSize(), NULL, &pVS);
dev->CreatePixelShader(PS->GetBufferPointer(), PS->GetBufferSize(), NULL, &pPS);
// set the shader objects
devcon->VSSetShader(pVS, 0, 0);
devcon->PSSetShader(pPS, 0, 0);
// create the input layout object
D3D11_INPUT_ELEMENT_DESC ied[] =
{
{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
{ "COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
dev->CreateInputLayout(ied, 2, VS->GetBufferPointer(), VS->GetBufferSize(), &pLayout);
devcon->IASetInputLayout(pLayout);
}
void Renderer::InitGraphics()
{
unsigned long* indices;
// create a square using the VERTEX struct
Vertex OurVertices[] =
{
{ D3DXVECTOR2(-0.5f,-0.5f), D3DXCOLOR(0.5f, 0.0f, 0.0f, 0.5f) },
{ D3DXVECTOR2(-0.5f,0.5f), D3DXCOLOR(0.0f, 0.5f, 0.0f, 0.5f) },
{ D3DXVECTOR2(0.5f,0.5f), D3DXCOLOR(0.0f, 0.0f, 0.5f, 0.5f) },
{ D3DXVECTOR2(-0.5f,-0.5f), D3DXCOLOR(0.5f, 0.0f, 0.0f, 0.5f) },
{ D3DXVECTOR2(0.5f,0.5f), D3DXCOLOR(0.0f, 0.5f, 0.0f, 0.5f) },
{ D3DXVECTOR2(0.5f,-0.5f), D3DXCOLOR(0.0f, 0.0f, 0.5f, 0.5f) }
};
// create the vertex buffer
D3D11_BUFFER_DESC bd;
ZeroMemory(&bd, sizeof(bd));
bd.Usage = D3D11_USAGE_DYNAMIC; // write access by CPU and GPU
bd.ByteWidth = sizeof(Vertex) * 6; // size is the Vertex struct * 6
bd.BindFlags = D3D11_BIND_VERTEX_BUFFER; // use as a vertex buffer
bd.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE; // allow CPU to write in buffer
dev->CreateBuffer(&bd, NULL, &pVBuffer); // create the buffer
// copy the vertices into the buffer
D3D11_MAPPED_SUBRESOURCE ms;
devcon->Map(pVBuffer, NULL, D3D11_MAP_WRITE_DISCARD, NULL, &ms); // map the buffer
memcpy(ms.pData, OurVertices, sizeof(OurVertices)); // copy the data
devcon->Unmap(pVBuffer, NULL); // unmap the buffer
}
void Renderer::Update(void)
{
// clear the back buffer to a deep blue
devcon->ClearRenderTargetView(backbuffer, D3DXCOLOR(0.0f, 0.2f, 0.4f, 1.0f));
// select which vertex buffer to display
UINT stride = sizeof(Vertex);
UINT offset = 0;
devcon->IASetVertexBuffers(0, 1, &pVBuffer, &stride, &offset);
// select which primitive type we are using
devcon->IASetPrimitiveTopology(D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
// draw the vertex buffer to the back buffer
devcon->Draw(6, 0);
// switch the back buffer and the front buffer
swapchain->Present(0, 0);
}
That's all there really is, just let it be assumed that I am calling the Renderer::Update(void) function in my main loop in main.cpp.
This bit of code draws a pretty good square
But, is it proper ortho yet?
Could you post the shader code as well? (Presumably the contents of 'Shaders.shader'.) I have some suspicions based on a quick glance over the code, but it would help to see what the shaders are doing.
6 hours ago, Zakwayda said: Could you post the shader code as well? (Presumably the contents of 'Shaders.shader'.) I have some suspicions based on a quick glance over the code, but it would help to see what the shaders are doing.
struct VOut
{
float4 position : SV_POSITION;
float4 color : COLOR;
};
VOut VShader(float4 position : POSITION, float4 color : COLOR)
{
VOut output;
output.position = position;
output.color = color;
return output;
}
float4 PShader(float4 position : SV_POSITION, float4 color : COLOR) : SV_TARGET
{
return color;
}
Here is the Shader code, the only thing I understand about this is that the vertex shader is called every time we render a vertex, and the pixel shader is called every time we render a pixel.
Wait a minute, something just dawned on me. You mentioned transformation... Does the transformation occur at the shader level? On the vertex shader to be specific?
I've never worked with shaders before now using directx.
Close now, let me look over the tutorial again, and pay closer attention to the shaders.
17 minutes ago, JWColeman said: Wait a minute, something just dawned on me. You mentioned transformation... Does the transformation occur at the shader level? On the vertex shader to be specific?
I haven't looked at the code, but I'm pretty sure the transformation happens at the vertex shader level. To my (limited) understanding, the vertex shader is responsible for transforming vertices from object space to screen space.
http://9tawan.net/en/
48 minutes ago, JWColeman said: Wait a minute, something just dawned on me. You mentioned transformation... Does the transformation occur at the shader level? On the vertex shader to be specific?
Vertex transformation can happen in more than one place, but it's commonly done in a vertex shader function (VShader() in your code), as you suggest.
From what you've posted, it looks like you're not transforming your vertices. All your shader code does is pass the vertices through unchanged. Although you do have this:
D3DXMatrixOrthoLH(&m_orthoMatrix, SCREEN_WIDTH, SCREEN_HEIGHT, 0, 1);
All that does is fill out the elements of the D3DXMATRIX instance that you submit as the first argument (m_orthoMatrix in this case). By itself it has no effect on the render or pipeline state, and is basically a 'no-op'.
Also, with respect to this:
//Set up Ortho Stuff
D3DXMatrixOrthoLH(&m_orthoMatrix, SCREEN_WIDTH, SCREEN_HEIGHT, 0, 1);
ZeroMemory(&depthDisabledStencilDesc, sizeof(depthDisabledStencilDesc));
I'm not sure if the comment is meant to apply to just the first line, or to both lines, but just to be clear, the second line has to do with the depth/stencil state and is unrelated to the projection transform.
The reason your code is successfully rendering a quad is that the coordinates +/-0.5, in normalized device coordinates, yield a quad in the center of the screen. It's possible that when you were using a perspective transform, the transform was also not being applied, and you were getting the same results you're getting now. If that's the case, although it looked like both transforms yielded the same results, in actuality it's only because neither transform was actually being applied.
There's a lot to even a simple programmable pipeline, so I wouldn't expect to understand it all at once. The typical approach here would be to load the transform matrix into a shader variable, and then apply it to each vertex in the shader code (in VShader() in your case). I'm not sure what tutorials you're working from now, but many if not most tutorials on modern graphics APIs will cover how to do this.
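As a sketch of what that typically looks like in HLSL (the cbuffer name and register here are illustrative, not taken from your code), the matrix is declared in a constant buffer and applied with mul() in the vertex shader:

```hlsl
// Sketch only: this constant buffer would be filled from the CPU side
// (e.g. via a buffer created with D3D11_BIND_CONSTANT_BUFFER and bound
// with VSSetConstantBuffers); the names are made up for illustration.
cbuffer MatrixBuffer : register(b0)
{
    matrix projection;   // e.g. the ortho matrix built with D3DXMatrixOrthoLH
};

struct VOut
{
    float4 position : SV_POSITION;
    float4 color : COLOR;
};

VOut VShader(float4 position : POSITION, float4 color : COLOR)
{
    VOut output;
    output.position = mul(position, projection);  // apply the projection
    output.color = color;
    return output;
}
```

The key difference from your current shader is that the position no longer passes through unchanged; it gets multiplied by the matrix you uploaded.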
Edit: I'll add one more thing, in case it's helpful. Your call to D3DXMatrixOrthoLH() sets up an orthographic transform with dimensions 1920x1080. If that transform were actually being applied, your 1x1 quad would be at most a small dot onscreen. The fact that the quad (I'm guessing) takes up a good portion of the screen instead is a clue that the projection transform isn't being applied. As such, once you do get the projection transform applied, barring other changes, don't be surprised if your quad becomes very small. Also note that there's no requirement that the size of the projection match the size of the screen. The aspect ratio should match if you want to avoid distortion, but otherwise, the projection can be arbitrarily sized.