Augmented reality game created for the PlayStation Vita. The player uses the position and orientation of the controller to bounce the ball against the targets in the scene. The goal is to clear all targets in the shortest amount of time. Power-ups help to increase the speed at which the user can destroy targets.
The game uses PS Vita’s inbuilt marker tracking framework to overlay virtual objects on top of the camera feed. Given a marker ID, the Vita will return a transformation matrix for the marker in camera/view space. The transform can then be used to position virtual objects in the same space as the markers in the real world.
Currently, I don’t have permission to share the code for this project. Instead, a description of the core components is given below.
The game only requires the core marker (the marker with ID 0) to be visible. However, additional markers can be used to increase the play area for the game. The game supports this by storing matrices that convert from a given supplementary marker's transform to the expected transform of the core marker. A separate calibration screen guides the user through a process to ensure these supplementary transforms are calculated accurately.
Calibration takes just a second. In this time it calculates a supplementary transform for each marker using the following formula:
Ms = Mc * M^-1
Where Ms is the new supplementary transform for this marker, M is the transform reported for this marker, and Mc is the transform for the core marker. This gives us a matrix Ms that transforms from M to Mc. A running average of the supplementary matrix is maintained to reduce the chance of error in the calculated transform. A low sample count for a marker's transform doesn't necessarily mean it will be inaccurate; it does mean, however, that there is a greater chance the calculated transform could be inaccurate. To help the user avoid these issues, markers with low sample counts are reported after calibration.
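The calculation above can be sketched as follows. This is an illustrative reconstruction, not the project's actual code: the `Mat4` type, the element-wise running average, and the assumption that marker poses are rigid transforms (so the inverse is `[R^T | -R^T t]` rather than a general 4x4 inverse) are all mine.

```cpp
#include <array>
#include <cstddef>

// Minimal column-major 4x4 matrix; names are illustrative.
struct Mat4 {
    std::array<float, 16> m{};
    static Mat4 identity() {
        Mat4 r;
        for (int i = 0; i < 4; ++i) r.m[i * 4 + i] = 1.0f;
        return r;
    }
};

Mat4 multiply(const Mat4& a, const Mat4& b) {
    Mat4 r;
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row) {
            float sum = 0.0f;
            for (int k = 0; k < 4; ++k)
                sum += a.m[k * 4 + row] * b.m[col * 4 + k];
            r.m[col * 4 + row] = sum;
        }
    return r;
}

// Marker poses are rigid transforms, so the inverse is
// [R^T | -R^T t] instead of a general 4x4 inversion.
Mat4 rigidInverse(const Mat4& t) {
    Mat4 r = Mat4::identity();
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            r.m[j * 4 + i] = t.m[i * 4 + j];   // R^T
    for (int i = 0; i < 3; ++i) {
        float v = 0.0f;
        for (int j = 0; j < 3; ++j)
            v -= r.m[j * 4 + i] * t.m[12 + j]; // -R^T t
        r.m[12 + i] = v;
    }
    return r;
}

// Running average of Ms = Mc * M^-1 over one calibration pass.
// Element-wise averaging of matrices is a naive approach, but works
// when the samples are close together, as they are here.
struct SupplementaryTransform {
    Mat4 average = Mat4::identity();
    std::size_t samples = 0;

    void addSample(const Mat4& core, const Mat4& marker) {
        Mat4 ms = multiply(core, rigidInverse(marker));
        ++samples;
        float w = 1.0f / static_cast<float>(samples);
        for (int i = 0; i < 16; ++i)
            average.m[i] += (ms.m[i] - average.m[i]) * w;
    }
};
```

The sample count tracked here is what drives the low-sample warning reported after calibration.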
The blue plane is placed at the origin in the scene. The player can move the Vita to focus on each of the surrounding markers to check that the calculated transforms are accurate enough, i.e. the plane should not move too much when the core transform is calculated using the supplementary marker transforms.
The InputManager is responsible for processing a list of given InputContexts. Each InputContext binds to input events it wants to know about. There are three different event types.
- Actions are triggered when a key press is registered
- States are triggered each time a key is pressed or released
- Ranges are for analogue inputs like touch screen or joystick input
When the InputContext is created, it binds callbacks to events for a specific key or range. When the InputManager recognises the event, it dispatches the callback for the context. This allows different GameStates to define key bindings without needing to poll the current state of the keys; a context is only told about the changes it is interested in.
When assigned to the InputManager, InputContexts are given a priority. This priority defines the order in which events are processed. A higher priority context will be the first to hear of an event; it can then let the manager know that the event has been handled so that it is not passed on to lower priority contexts.
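The priority-ordered dispatch can be sketched like this. The names (`InputEvent`, `InputContext`, the boolean "handled" return) mirror the description above rather than the real code, which I can't share.

```cpp
#include <algorithm>
#include <functional>
#include <string>
#include <vector>

// Illustrative event; real key identifiers would be Vita button codes.
struct InputEvent {
    std::string key;
    bool pressed;
};

struct InputContext {
    int priority = 0;
    // Returns true if the context handled (consumed) the event.
    std::function<bool(const InputEvent&)> handler;
};

class InputManager {
public:
    void addContext(InputContext ctx) {
        contexts_.push_back(std::move(ctx));
        // Higher priority contexts hear about events first.
        std::sort(contexts_.begin(), contexts_.end(),
                  [](const InputContext& a, const InputContext& b) {
                      return a.priority > b.priority;
                  });
    }

    void dispatch(const InputEvent& event) {
        for (const auto& ctx : contexts_)
            if (ctx.handler && ctx.handler(event))
                return;  // consumed: don't pass to lower priorities
    }

private:
    std::vector<InputContext> contexts_;
};
```

A global control scheme (e.g. the GameStateManager's pause binding) would simply register with a higher priority than the active GameState's context.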
Although reasonably complicated for a prototype, the InputManager simplifies the creation of different game states and allows the GameStateManager to define its own global control scheme.
A rather complex scene hierarchy is used to control scene management. The scene hierarchy is a tree made up of connected SceneNodes. In a similar fashion to Unity and UE4, each SceneNode contains a list of Components that have specific uses. Some are already defined as part of the engine: the Camera, MeshInstance and RigidBody components. Custom Components are defined for specific game objects: BreakoutBall, BreakoutBat, BreakoutBoard, Projectile and ARCamera. Each is responsible for managing its own game state. This makes adding new GameStates simple: any gameplay logic can be added as a component, which then does what it needs to. This avoids the need for a polling update in GameStates, keeping the interface simple.
The SceneNode and Component scheme is a nice way of making game objects configurable without long inheritance hierarchies or multiple inheritance. Any custom nodes that are dependent on default engine components can add them to the node on initialisation. For example, the BreakoutBall wants to be rendered and needs a velocity, so it depends on the MeshInstance and RigidBody components, while the BreakoutBoard is static and thus only adds a MeshInstance component. For a small application like this the benefits are rather small, but as more functionality is required (e.g. sound, animation) the SceneNode and Component style allows for easy customisation and for changing functionality at run-time.
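The composition scheme can be sketched as below. The component base class, `addComponent`/`getComponent` helpers, and the factory functions are hypothetical stand-ins for the engine's actual interface.

```cpp
#include <memory>
#include <vector>

// Hypothetical component base; the real engine's interface may differ.
struct Component {
    virtual ~Component() = default;
};

struct MeshInstance : Component {};  // renders a mesh
struct RigidBody : Component {};     // gives the node a velocity

struct SceneNode {
    std::vector<std::unique_ptr<Component>> components;
    std::vector<std::unique_ptr<SceneNode>> children;

    template <typename T>
    T& addComponent() {
        components.push_back(std::make_unique<T>());
        return static_cast<T&>(*components.back());
    }

    template <typename T>
    T* getComponent() {
        for (auto& c : components)
            if (auto* t = dynamic_cast<T*>(c.get())) return t;
        return nullptr;
    }
};

// A BreakoutBall-style node pulls in the engine components it depends on.
SceneNode makeBallNode() {
    SceneNode node;
    node.addComponent<MeshInstance>();  // needs to be rendered
    node.addComponent<RigidBody>();     // needs a velocity
    return node;
}

// A static board only needs to be drawn.
SceneNode makeBoardNode() {
    SceneNode node;
    node.addComponent<MeshInstance>();
    return node;
}
```

Because components are stored in a plain list, functionality can be added or removed at run-time simply by mutating that list, which is the flexibility mentioned above.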
Game state management
The GameStateManager is the topmost level of the game-specific objects. It instantiates and activates different GameStates on request. The three GameStates in the application are MenuState, PlayBreakoutState and CalibrationState. Each state is given access to the GameStateManager's resources and is told when it is being activated or deactivated, at which point it adds its objects to the scene and provides the manager with its own InputContext.
The GameStateManager is also responsible for the scene, input management and marker management. On initialisation, the GameStateManager populates the scene with the camera and some attachment points that the GameStates can add nodes to. These attachment points are scaled to the marker size so that all child objects work in units of markers rather than metres. This means that GameStates can add objects to the scene and scale them based on how large they should be compared to the marker; working in whole marker units is easier than working in small fractions of a metre, and makes estimating correct scales and positions much simpler.
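The marker-units convention amounts to a scale applied at the attachment point. A minimal sketch, where the constant marker size and the function name are assumptions of mine:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Assumed physical marker edge length; the real value depends on the
// printed markers used with the game.
constexpr float kMarkerSizeMetres = 0.05f;

// A position expressed in marker units -> metres, as applied by the
// attachment point's scale. Children of the attachment point can then
// be positioned with small whole numbers.
Vec3 markerUnitsToMetres(const Vec3& p, float markerSize = kMarkerSizeMetres) {
    return { p.x * markerSize, p.y * markerSize, p.z * markerSize };
}
```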
There are a couple of ways the marker transform could be used to position the scene. The marker transform can be used as the scene origin, effectively moving objects in the scene around a stationary camera, or the inverse transform can be applied to the camera so that the camera is instead moved around a stationary origin. Since the markers are stationary in the application, the latter was chosen as it made positioning objects in the scene more intuitive, especially since the camera location is used as a game mechanic. This also makes the game more portable, as a moving camera around a stationary world is the standard setup in 3D rendering.
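The chosen approach boils down to inverting the reported pose. A sketch, assuming (as the tracker guarantees for a marker pose) that the transform is rigid, so the inverse is `[R^T | -R^T t]`:

```cpp
#include <array>

// Column-major 4x4; the Vita reports the marker pose as a rigid
// transform (rotation + translation) in camera space.
struct Mat4 {
    std::array<float, 16> m{};
    static Mat4 identity() {
        Mat4 r;
        for (int i = 0; i < 4; ++i) r.m[i * 4 + i] = 1.0f;
        return r;
    }
};

// Rigid-transform inverse: [R | t]^-1 = [R^T | -R^T t].
Mat4 rigidInverse(const Mat4& t) {
    Mat4 r = Mat4::identity();
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            r.m[j * 4 + i] = t.m[i * 4 + j];
    for (int i = 0; i < 3; ++i) {
        float v = 0.0f;
        for (int j = 0; j < 3; ++j)
            v -= r.m[j * 4 + i] * t.m[12 + j];
        r.m[12 + i] = v;
    }
    return r;
}

// Keep the scene origin fixed at the marker and move the camera instead:
// the camera's world transform is the inverse of the reported pose.
Mat4 cameraWorldFromMarkerPose(const Mat4& markerInCamera) {
    return rigidInverse(markerInCamera);
}
```

Either convention produces the same final model-view matrix; the difference is purely which node of the scene graph carries the tracked transform.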
There are a few utility classes for collision detection and mesh generation. These are quite basic; the only noteworthy function is the plane and sphere collision. This controls how the bat and the ball interact in the game.
The bat is represented in the scene as a finite plane, composed of an origin, normal and extents. The collision detection is relatively simple and can be broken down into three distinct steps.
- Check to see if the sphere collides with the infinite plane based on the closest point and the sphere radius.
- Get the radius of the circle created by the sphere and plane intersection
- Check that this circle fits on the finite plane
A StackExchange discussion was helpful in figuring this out.
This is not incredibly complex but allows for non-axis-aligned collision between the bat and ball. All other collisions between objects are handled using standard sphere and box colliders.
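The three steps above can be sketched as follows. This is my own reconstruction: the `FinitePlane` layout is assumed, and step 3 uses a simple per-axis expansion of the extents by the circle radius, which slightly over-accepts near the corners compared to an exact circle-rectangle test.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Finite plane (the bat): origin, unit normal, and two orthonormal
// in-plane axes with half-extents. Field names are illustrative.
struct FinitePlane {
    Vec3 origin;
    Vec3 normal;            // unit length
    Vec3 uAxis, vAxis;      // unit length, in-plane
    float uExtent, vExtent; // half-sizes
};

bool sphereIntersectsFinitePlane(const Vec3& centre, float radius,
                                 const FinitePlane& p) {
    Vec3 rel = sub(centre, p.origin);

    // 1. Does the sphere reach the infinite plane? The closest point is
    //    at the signed distance along the normal.
    float dist = dot(rel, p.normal);
    if (std::fabs(dist) > radius) return false;

    // 2. Radius of the circle cut through the sphere by the plane.
    float circleRadius = std::sqrt(radius * radius - dist * dist);

    // 3. Does that circle overlap the finite extents?
    float u = dot(rel, p.uAxis);
    float v = dot(rel, p.vAxis);
    return std::fabs(u) <= p.uExtent + circleRadius &&
           std::fabs(v) <= p.vExtent + circleRadius;
}
```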