Hey! Welcome back to the 2nd day of my 100 days of VR!
We last left off going through the Unity Roll-a-Ball tutorial. I’ve learned a lot about how to code in Unity and how to use some of its existing APIs. Even more importantly, I learned how to navigate and use the Unity editor!
The editor is definitely different compared to web or mobile development, where you create everything via code, but it makes things a lot easier!
Today, in Day 2, I started to look at the second Unity lesson, the Unity Space Shooter. I’ve learned a lot about Unity from the previous tutorial, maybe even enough for me to start developing my own simple game (with a LOT of help from Stack Overflow), but instead I’ve decided to solidify my foundations by going through 1–2 more tutorials.
I didn’t get nearly as far as I did during Day 1 because of my job, but I did finish the first section:
Game setup, Player and Camera
Setting up the project
Unlike the previous tutorial where we just used simple materials and shapes, this time I had to use actual assets.
The first thing I had to do was download the assets provided for the tutorial.
I saw that when you create a new project in Unity, there’s a tab that lists the tutorial projects; you can download the assets from there (or from the Unity Asset Store).
Then, when you create the new project, just make sure to add the asset package as a resource.
After setting up the project, I found out that you can adjust the screen resolution of your game by clicking on the Game tab and selecting a resolution from the options below it.
So far so good, nothing too complex yet!
The Player GameObject
Now that I’ve set up the project, I moved on to creating the Player GameObject.
It’s nothing too different from what I learned in the previous tutorial, except this time, instead of creating a 3D sphere, I’m dragging in an existing model asset provided to me.
Cool ship!
Some interesting things to note from this video:
Mesh Collider, Mesh Filter, and Mesh Renderer
Looking at the components of the ship, we see that it has a Mesh Filter and a Mesh Renderer.
From a simple Google search:
Mesh Filter components take in a Mesh, which is a 3D object built out of triangles (triangles, because they’re the most efficient shape to process in graphics programming).
The Mesh Filter then creates the shape of your GameObject based on the mesh that you provide it.
Afterwards, if you want to actually see your model, you’ll need a Mesh Renderer. It takes your mesh and applies a material over it, the material being the skin of the ship you see above.
Finally, we add a Mesh Collider component, which also takes in a mesh. This allows us to create a collider that matches the exact shape of our ship.
It’s important to think about the gameplay vs. performance tradeoff that we have to make.
A more fine-grained collider means Unity does more work, which means slower performance. On the other hand, we might not want to just put a cube collider over our ship.
If we do, we might register collisions in front of the ship where there isn’t any mesh.
In a game that demands fine-grained control like this space shooter, that would be bad.
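As a rough sketch of the two options (in the tutorial this is done by adding the component in the inspector; the script here is just for illustration):

```csharp
using UnityEngine;

public class ShipColliderSetup : MonoBehaviour
{
    void Start()
    {
        // Precise but more expensive: the collider follows every triangle
        // of the ship's mesh, so Unity has more work to do per collision check.
        gameObject.AddComponent<MeshCollider>();

        // Cheap but coarse: a box that also covers the empty space around
        // the ship, so we'd register hits where there's no actual geometry.
        // gameObject.AddComponent<BoxCollider>();
    }
}
```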
Easy way to change your Game Transform values
Also, on another unrelated note, I found a nifty trick for the editor.
If you want to experiment with values, instead of typing them in manually, you can left-click and hold on a property’s label in the inspector (for example, the X, Y, and Z labels in the Transform component),
and then drag your mouse up and down to change the value. It’s much easier than typing the values in manually!
Camera and Lighting
In the next video, we played around a bit with the camera and lighting.
Camera
In the Camera component you can set the background. In our case, we just set a black background, making everything in the Game tab black except for our ship.
Lighting
In later Unity versions, the project already comes with a Directional Light in the hierarchy. These light sources, as you’d expect, light up your game!
A directional light is just light shining down from a certain direction.
You can adjust things like the color of the light, how intense it is, the direction it comes from, and many other things. In this case, we just moved it around a bit and played with the intensity to get our ship looking however we want it to look.
Adding a background
Creating a background is in a way very similar to adding a material to an existing shape. For this project, we created a Quad, which you can think of as a flat plane with no thickness.
We have an existing .tif file that’s provided for us to use, but our Quad object only accepts materials. How do we get this to work?
It turns out that if you drag the .tif file onto the Quad, Unity will automatically create a material for you.
Shaders
Another subject that was briefly touched on is shaders. From my understanding, shaders control how a material is rendered. They give us the ability to apply brightness, colors, and other special effects on top of an existing image.
Luckily we don’t have to learn how to do any of this (yet), and we can just use the existing shaders that Unity provides. Yay for frameworks!
In the video, we applied an Unlit/Texture shader to make our background image look brighter.
Moving the Ship
Now that we have our environment set up, we finally started to do some coding.
using UnityEngine;
using System.Collections;

[System.Serializable]
public class Boundary
{
    public float xMin, xMax, zMin, zMax;
}

public class PlayerController : MonoBehaviour
{
    public float speed;
    public float tilt;
    public Boundary boundary;

    void FixedUpdate ()
    {
        float moveHorizontal = Input.GetAxis ("Horizontal");
        float moveVertical = Input.GetAxis ("Vertical");

        Vector3 movement = new Vector3 (moveHorizontal, 0.0f, moveVertical);
        rigidbody.velocity = movement * speed;

        rigidbody.position = new Vector3
        (
            Mathf.Clamp (rigidbody.position.x, boundary.xMin, boundary.xMax),
            0.0f,
            Mathf.Clamp (rigidbody.position.z, boundary.zMin, boundary.zMax)
        );

        rigidbody.rotation = Quaternion.Euler (0.0f, 0.0f, rigidbody.velocity.x * -tilt);
    }
}
Let’s break this code down.
RigidBody
The most important thing right now is that in Unity 5.x, this code actually won’t compile. That’s because the shorthand rigidbody property from older Unity versions is no longer available.
We have to do something similar to what we did in the Roll-a-Ball tutorial and save a reference to our Rigidbody in the Start() function, like so:
using UnityEngine;
using System.Collections;

[System.Serializable]
public class Boundary
{
    public float xMin, xMax, zMin, zMax;
}

public class PlayerController : MonoBehaviour
{
    public float speed;
    public float tilt;
    public Boundary boundary;

    private Rigidbody rigidbody;

    void Start ()
    {
        rigidbody = GetComponent<Rigidbody> ();
    }

    void FixedUpdate ()
    {
        float moveHorizontal = Input.GetAxis ("Horizontal");
        float moveVertical = Input.GetAxis ("Vertical");

        Vector3 movement = new Vector3 (moveHorizontal, 0.0f, moveVertical);
        rigidbody.velocity = movement * speed;

        rigidbody.position = new Vector3
        (
            Mathf.Clamp (rigidbody.position.x, boundary.xMin, boundary.xMax),
            0.0f,
            Mathf.Clamp (rigidbody.position.z, boundary.zMin, boundary.zMax)
        );

        rigidbody.rotation = Quaternion.Euler (0.0f, 0.0f, rigidbody.velocity.x * -tilt);
    }
}
We have to use GetComponent&lt;Rigidbody&gt;() to access our GameObject’s components now.
Boundary
The first thing you might notice is that there’s another class, Boundary, with [System.Serializable] on top of it.
[System.Serializable]
public class Boundary
{
    public float xMin, xMax, zMin, zMax;
}
[System.Serializable] tells Unity that when you expose this class as a public variable, it should show the class’s fields in the inspector for you to edit.
We’re going to use this boundary to set how far our ship can go in the game; specifically, to prevent our ship from going off screen.
Movement
To move the ship, we applied something very similar to what we did in Day 1 with the ball tutorial.
We see that we still use the same FixedUpdate() function and grab the player’s movement input the same way.
The new interesting part is using more of Unity’s library.
Unity comes with its own math library, Mathf. The library provides nifty game calculations for us.
rigidbody.position = new Vector3
(
    Mathf.Clamp (rigidbody.position.x, boundary.xMin, boundary.xMax),
    0.0f,
    Mathf.Clamp (rigidbody.position.z, boundary.zMin, boundary.zMax)
);
In this case, we’re using Mathf.Clamp to keep our position inside a boundary: if the position drops below the minimum it stays at the minimum, and if it goes above the maximum it stays at the maximum.
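To make the clamping behavior concrete, here’s a small sketch (the class name and the values are just made up for illustration):

```csharp
using UnityEngine;

public class ClampExample : MonoBehaviour
{
    void Start()
    {
        // Mathf.Clamp(value, min, max) pins value into the range [min, max].
        float below = Mathf.Clamp(-7.5f, -5f, 5f); // below the minimum, so it becomes -5
        float inside = Mathf.Clamp(2f, -5f, 5f);   // already in range, so it stays 2
        float above = Mathf.Clamp(9f, -5f, 5f);    // above the maximum, so it becomes 5

        Debug.Log(below + " " + inside + " " + above); // -5 2 5
    }
}
```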
Creating Shots
So now we have a ship and some movement, we moved on to something completely new, shooting things!
- We create our bullet the same way we created our background.
- We make a Quad object, use the .tif file to create a material (or just use the existing one), and attach the material.
- We applied some shaders to it. We can improve performance by using mobile shaders instead of the normal ones; mobile shaders offer fewer controls, but as a result they’re less computationally intensive.
Finally we have the code that moves our bullet forward:
using UnityEngine;
using System.Collections;

public class Mover : MonoBehaviour
{
    public float speed;

    private Rigidbody rigidbody;

    void Start ()
    {
        // As with PlayerController, in Unity 5.x we have to fetch the Rigidbody ourselves.
        rigidbody = GetComponent<Rigidbody> ();
        rigidbody.velocity = transform.forward * speed;
    }
}
The code itself is pretty straightforward: we just set the velocity of our Rigidbody once, and the bullet keeps moving in that direction.
Shooting shots
Now that we’ve created a prefab of our bullet in the previous lesson, we can start working on shooting.
In the video, we created a new “spawner” GameObject.
A spawner is basically just an empty GameObject whose location we use to create new bullets.
The important thing here is to make the spawner a child of the player GameObject; this way its position always stays consistent relative to our ship.
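To sketch why the parenting matters (in the tutorial this is done by dragging in the hierarchy, not in code, and the script and offset value here are just illustrative):

```csharp
using UnityEngine;

public class SpawnSetup : MonoBehaviour
{
    public Transform player;    // the ship
    public Transform shotSpawn; // the empty spawner GameObject

    void Start()
    {
        // Parenting makes the spawner's position relative to the ship,
        // so it follows the ship automatically as the ship moves.
        shotSpawn.SetParent(player);
        shotSpawn.localPosition = new Vector3(0f, 0f, 1.2f); // just in front of the ship
    }
}
```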
In our script, we added a public variable for both the bullet prefab and the spawn point location.
using UnityEngine;
using System.Collections;

[System.Serializable]
public class Boundary
{
    public float xMin, xMax, zMin, zMax;
}

public class PlayerController : MonoBehaviour
{
    public float speed;
    public float tilt;
    public Boundary boundary;
    public GameObject shot;
    public Transform shotSpawn;
    public float fireRate;

    private float nextFire;
    private Rigidbody rigidbody;

    void Start ()
    {
        rigidbody = GetComponent<Rigidbody> ();
    }

    void Update ()
    {
        if (Input.GetButton("Fire1") && Time.time > nextFire)
        {
            nextFire = Time.time + fireRate;
            Instantiate(shot, shotSpawn.position, shotSpawn.rotation);
        }
    }

    void FixedUpdate ()
    {
        float moveHorizontal = Input.GetAxis ("Horizontal");
        float moveVertical = Input.GetAxis ("Vertical");

        Vector3 movement = new Vector3 (moveHorizontal, 0.0f, moveVertical);
        rigidbody.velocity = movement * speed;

        rigidbody.position = new Vector3
        (
            Mathf.Clamp (rigidbody.position.x, boundary.xMin, boundary.xMax),
            0.0f,
            Mathf.Clamp (rigidbody.position.z, boundary.zMin, boundary.zMax)
        );

        rigidbody.rotation = Quaternion.Euler (0.0f, 0.0f, rigidbody.velocity.x * -tilt);
    }
}
The part of the code that we’re really interested in is in our new Update() function:
void Update ()
{
    if (Input.GetButton("Fire1") && Time.time > nextFire)
    {
        nextFire = Time.time + fireRate;
        Instantiate(shot, shotSpawn.position, shotSpawn.rotation);
    }
}
Inside the code, we use the Input class to detect when the user presses the fire button (left click) and check that enough time has passed since the last shot.
The interesting part is how we create our bullet. This is done in Unity with the Instantiate function.
We just need to pass in the GameObject we want to create a copy of, its starting position, and its rotation.
And that’s it!
Conclusion
In day 2, I finished the first part of the Space Shooter tutorial. Once again, a lot of the learning is actually being done outside of code with the editor.
The good news is that I’m beginning to see familiar concepts: GameObjects, materials, and prefabs, to name a few.
Code-wise, everything seems pretty straightforward. Now, I might be biased because I already know how to code, but I have a good feeling that we’ll see many of these APIs again.
It’s my hope that I’ll start seeing patterns that I can use to make my own game.
My goal is to know enough what’s available in Unity so I can branch out and start writing my own code!
Side note: wow, these write-ups are long! I’m actually wondering what takes longer: going through the videos and understanding what’s happening, or writing these explanations!
Visit the original Day 2
Go to the 100 Days of Unity VR Development Main Page