Monday, November 26, 2012

Controlling A Fighter

Disclaimer: This blog post is a little more boring than normal; it talks about how I set up the new controller for the fighter characters. Basically, it's about how I refactored the old controller into a more modular setup and improved each of the systems individually.



The new Character Controller is a huge improvement to the Superverse experience, designed to allow designers a great deal of control over the way fighters interact with one another, as well as to create an easily expandable, modular base from which developers can continue creating and implementing new functionality.


The Superverse FighterController system is broken into several parts, each a behavior on the same object. The behaviors all inherit from FighterControllerBehavior, which inherits from the project's BaseBehavior. All FighterControllerBehaviors have convenience functions for accessing fellow behaviors, as well as certain resources, such as the raw control data.


All FighterControllerBehaviors follow the same class naming convention: Fighter[Responsibility]Controller
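As a sketch of that setup (only the class names and the naming convention come from the project; BaseBehavior's contents and the accessor bodies are my assumptions, including the assumption that BaseBehavior ultimately derives from MonoBehaviour):

```csharp
using UnityEngine;

// Sketch only: the accessor bodies below are assumptions, not the
// project's actual implementation.
public abstract class FighterControllerBehavior : BaseBehavior
{
    // Convenience accessors for sibling behaviors on the same object.
    public FighterMovementController Movement
    {
        get { return GetComponent<FighterMovementController>(); }
    }

    public FighterInputController Input
    {
        get { return GetComponent<FighterInputController>(); }
    }
}

// Each responsibility follows the Fighter[Responsibility]Controller convention.
public class FighterMovementController : FighterControllerBehavior { /* ... */ }
public class FighterInputController : FighterControllerBehavior { /* ... */ }
```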


They are as follows:

FighterMovementController
Responsible for moving the fighter around the map. Handles nothing about fighting moves. Uses only the movement aspect of controls. Will handle lower animations (idle - crouching - jumping - walking - running) when those are present.

FighterInputController
Responsible for updating the inputs, and reading particular changes. Acts more as an input util for other behaviors, especially Movement and Primary.

FighterCollisionController
Handles collision data from the FighterColliders. Uses a bitmask to figure out which ones it is listening to, and forwards the hit in its own event if the bitmask matches.
FighterColliders are behaviors that manage individual sections of the biped. Each can have the bit it represents set explicitly, or can calculate it based on the name of the object it is on (a convenience for working with Max's Biped rig). Each knows its parent controller, and sends hits to it with its bit flag.
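A rough sketch of that bitmask filtering (the class name is from the post, but the member names, the HitInfo placeholder, and deriving from MonoBehaviour rather than the project's base class are all my assumptions):

```csharp
using System;
using UnityEngine;

// Placeholder for whatever hit data the colliders report.
public class HitInfo { public Vector3 point; public float force; }

public class FighterCollisionController : MonoBehaviour
{
    public int listenMask; // bits of the FighterColliders we care about

    public event Action<HitInfo> OnFilteredHit;

    // Called by a FighterCollider, passing the bit flag it represents.
    public void ReportHit(int colliderBit, HitInfo hit)
    {
        // Forward the hit only if the collider's bit is in our mask.
        if ((listenMask & colliderBit) != 0 && OnFilteredHit != null)
            OnFilteredHit(hit);
    }
}
```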

FighterPrimaryController

Controls the animations and handles the general state of the fighter. Uses most of the other controllers to do so, either directly or through events.

On top of the controller behaviors, there are a few supporting behaviors. They are:

Fighter Statistics:
Handles the player's energy, as well as defense and other aspects; currently it controls the GUI as well.
Respawner:
Respawns the player on death; currently it would be used if they hit the killing volume.

Material Flasher:
Flashes all the renderers that are children of the behavior to a different color and back.

There are also behavior-like components that act as data components and utilities.


There are also AttackBehaviors, classes that use energy and have a delay, for the special attacks that the player will use.









On GUI

GUI in Unity can be a pain. GUI anywhere can be a pain, but Unity's system for GUI is especially annoying, not to mention grossly inefficient.

A Note on Unity GUI Inefficiency and Why to Use OnGUI as Little as Possible
The OnGUI function is called via reflection for every event that fires. Because of this, you really do not want to put any logic other than GUI drawing logic in the GUI functions. Also, if you are only drawing textures and data to the screen, a lot of the calls are a huge waste. All of Unity's GUI is reliant on events, namely Event.current. If you are not looking for keyboard or mouse input (text fields and buttons, for example), then a good deal of the time the GUI functions you call are doing nothing, since the actual drawing, which is all that is used for textures, boxes, labels, etc., only happens when the event is a Repaint event. Unity's GUI functions won't draw if the event is not a repaint event, but if you are drawing a lot of things, then it's a lot of time being spent on nothing. Again, if you are using GUI that reacts to input, like text fields, buttons, and scrollbars, then you do need the other events, since the state of those controls depends on them. As mentioned above, OnGUI is called through reflection, so use it in as few places as possible (reflection overhead, plus being called for every event that goes through, can take a considerable toll).
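For display-only GUI, the early-out described above looks like this (Event.current and EventType.Repaint are Unity's actual API; the HudDisplay class and its fields are placeholders of mine):

```csharp
using UnityEngine;

public class HudDisplay : MonoBehaviour
{
    public Rect healthRect = new Rect(10, 10, 200, 30);
    public string healthText = "100 / 100";

    void OnGUI()
    {
        // Display-only GUI: bail out on every event except the repaint,
        // so layout, mouse, and keyboard events cost almost nothing.
        if (Event.current.type != EventType.Repaint)
            return;

        GUI.Label(healthRect, healthText);
    }
}
```

Note this only works for fixed-rect GUI calls like the one above; GUILayout controls also need the Layout event to compute their rects.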

GUI and MVC
There are many ways to approach GUI in games, and there are different scenarios where different methodologies work (such as in-game vs in-menu).
Personally, I like to approach GUI in the style of Model-View-Controller.
There are many ways to implement this architecture, but I'll try to simplify the normal use in game terms.


Basically, the view is your GUI, which displays a representation of data (the model) in your game. It cares nothing about your game or the state of your game; it just listens to the model and controller, and reacts accordingly. So if we are displaying health, the model tells the view that damage has been dealt, and the view gets the new health and starts tweening towards the new value.

This data is your model. It doesn't really interact with anything but itself, though it can fire off events. It can have behavior of its own; for instance, health might recharge if no change has happened, or it could start recharging once the controller has told it to (depending on how you've set it up). The model doesn't care about anything but itself: it doesn't really care about your game, or how it's displayed. It usually doesn't directly manipulate the controller or the view; rather, it notifies both that certain things have happened. So if a player takes damage, the model (containing the health) notifies the view, which pulls the new data to display. If the player has no health left, the model notifies the controller, which kills the player. These notifications are usually implemented with either Observers or other types of event systems. Personally, I have found C#'s built-in event system to be well suited for the task.

The controller manipulates your model, and is where the input into this system comes from. Note that if you are using the view for input (text fields, buttons), the controller could be listening to your view as well. Going back to our previous example, the controller would see that the player was hit, get the damage dealt, and call a method on the model to apply the damage. The controller may also get data from the model (for instance, checking energy to see if an attack can be used).



Again, there are many variations of MVC, and some follow rules that others do not, but the important part is separating control from data/behavior from display.

In Unity, you might have three behaviors to implement the example above.
PlayerController - Controller
PlayerStatistics  - Model
PlayerStatisticsDisplay - View

By separating these so explicitly, changes can be made to each without drastically affecting the others. So the way the controller manipulates the model can change without affecting the view at all, and the view can be drastically changed, without having to rework any of the data.
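A minimal sketch of those three behaviors (the class names come from the list above; the events, members, and tween speed are illustrative assumptions, not the actual implementation):

```csharp
using System;
using UnityEngine;

// Model: owns the data, notifies listeners, knows nothing about display.
public class PlayerStatistics : MonoBehaviour
{
    public event Action<float> HealthChanged;
    public event Action Died;

    float health = 100f;

    public void ApplyDamage(float amount)
    {
        health = Mathf.Max(0f, health - amount);
        if (HealthChanged != null) HealthChanged(health);
        if (health <= 0f && Died != null) Died();
    }
}

// View: listens to the model and tweens toward the new value.
public class PlayerStatisticsDisplay : MonoBehaviour
{
    public PlayerStatistics stats;
    float targetHealth, displayedHealth;

    void OnEnable()  { stats.HealthChanged += OnHealthChanged; }
    void OnDisable() { stats.HealthChanged -= OnHealthChanged; }

    void OnHealthChanged(float newHealth) { targetHealth = newHealth; }

    void Update()
    {
        displayedHealth = Mathf.Lerp(displayedHealth, targetHealth,
                                     Time.deltaTime * 5f);
    }
}

// Controller: manipulates the model and reacts to its notifications.
public class PlayerController : MonoBehaviour
{
    public PlayerStatistics stats;

    void OnEnable() { stats.Died += Kill; }

    public void TakeHit(float damage) { stats.ApplyDamage(damage); }

    void Kill() { /* respawn, disable input, etc. */ }
}
```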


Applying this to Crisis of the Superverse

In the Crisis of the Superverse health bar, there are three "views": the conflict compass, the tug of war, and the resolve meters at either end. These view classes are called ConflictDisplay, TugOfWarDisplay, and ResolveDisplay, and hook into the model classes named Conflict, TugOfWar, and Resolve, respectively.
TugOfWar contains the other models within itself. When damage is applied to the TugOfWar, it sends that damage to Conflict; if Conflict's health (depicted by the rotating compass) is at either end, it returns the remainder of the damage to TugOfWar, which moves the bar accordingly. Once the bar is on one side, it applies any remainder of the damage to the Resolve. From the RoundController (the controller), it's as simple as calling a method to apply the damage; the model handles the rest. The views tween their displayed value towards that of the model.
The energy bars connect to the PlayerStatistics, which are controlled by the FighterPrimaryController, in similar fashion.
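The core of that cascade is each model absorbing what it can and returning the overflow. As a heavily simplified sketch (the class name is from the post; the method shape and the absorb-and-return-remainder math are my guess at the described behavior, not the real compass logic):

```csharp
using UnityEngine;

// Conflict absorbs damage first; whatever it cannot absorb is returned
// so TugOfWar can move the bar (and eventually spill into Resolve).
public class Conflict
{
    public float health = 100f;

    // Returns the portion of the damage that was not absorbed.
    public float ApplyDamage(float damage)
    {
        float absorbed = Mathf.Min(health, damage);
        health -= absorbed;
        return damage - absorbed;
    }
}
```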



Friday, October 12, 2012

Week 3/4/5?

What week is it now? Seriously I don't even know. Classes seem to all melt into one another.

Work has been pretty overwhelming. Luckily, both AI and Networking have pushed back expected due dates, so I have been able to catch up.

Anyway, as for Senior Team...

These past few weeks have been very interesting.

We split up our prototype into individual mechanics and iterated on each individually. The main things we iterated on were the controls, the camera, and the health system.
In doing so, we have discovered our purpose for making this game, which makes me very excited.

The controls were a main priority. Both team and tester feedback support our move towards the flickit controls, using the right and left bumpers to set whether the fighter is in Punch or Kick mode, and then using the right stick to attack. Unity's input system, either because of the way it is built or because of an inherent delay from the hardware, seems to be sluggish when responding to input. It is subtle, but it is enough to notice after a while. I am planning to try using the raw axis data from the controllers as input. We will lose many of Unity's input benefits, but we should gain a good bit of precision in the input data.

[ A week of ignoring this blog post later... ]


This past week was spent learning Strumpy's Shader Editor and working on several iterations of the shader we want to use for some of the comics.

Through a mixture of light manipulation and rim darkening we got the comic look, and manipulating the light ramp texture allows us to perhaps use it for various ages.

I added a screen detail aspect to the shader itself, which uses the screen coordinates of the fragment as UVs on a texture.

I used a model of Superman, since he was of similar proportions and color scheme, and because I could compare my results with actual Superman comics from that age.

In the end, though, it seems Rebecca does not want to go with this style. While I think that making each character have a distinct age feel would be much cooler artistically, the reality is that it presents a large number of issues. The first is shader requirements: having multiple styles means very different materials and probably multiple shaders, which can take a lot of time. I had proposed to counter this with tools like SSE, and I think I was moderately successful in creating the Golden Age shader, but by making all of the characters follow a similar art style, we can simplify our modeling pipeline and reduce the number of shaders that we need.

It will also help in terms of animation reusability. One thing that really concerned me about styling the characters after different ages was the difference in mesh proportions. For instance, Superman's (Golden Age) upper body is massive compared to Spiderman's (Silver Age). This could easily create issues where certain animations look good on one, but clip badly on another.

Finally, and most importantly, I think going for the more modern depiction of superheroes ("Amazing Spiderman") contributes more towards our goal of bringing new people into the genre. Our basis for the superhero route was to use the increasingly popular superhero trend of recent years. Most people within our target audience would likely be unfamiliar with the styles of the Golden Age/Dark Age etc. from the actual comics, but would be familiar with the more modern style seen in shows like Justice League/Batman/Spiderman. Because of this, I think the more modern look would serve our purpose well, and probably better than a more classical, era-specific style for each character.

[And Another Week Gone by without posting (Bad Adam Bad)]

This week I focused on the art pipeline for animations. We have been planning on using motion capture data as the basis of many of our attacks, but there are many steps to this that we need to make sure we have down before we go into production. I learned a good bit about 3DS Max, and will continue to do so. I went through some tutorials on cleaning up mocap animation, and tried to do some myself, with mixed results. Essentially, what I would like to do is use .BIP files, a biped animation format used in Max, as the basic animation data. A good bit of motion capture data is found in the BVH format (Biovision Hierarchy), and can easily be converted to BIP in Max. BIP files (Biped Animation) are easy to transfer between bipeds, so hopefully this will make our use of them easier.

I plan to meet with Josh Buck on Monday, along with Rebecca, to see if he agrees and if he has any suggestions for our art pipeline. This weekend I will be revamping the controls to be sharper and more concise, as well as bringing in more animations and putting them on a character controller. By next week, I hope to be able to show some really cool stuff.

I also included Lumos Analytics in our game, and used it during our testing sessions to watch attack use as well as how many times players used blocks. There were some interesting results from our first two play sessions, and we will need to address them in the coming weeks. At the moment it is hard to judge because of how little difference there is between the attacks. I will be adding an idea I had this week, where while performing certain attacks the character will be "weak" to others (so if I'm doing a high kick, a low kick will devastate me), to see if this affects the numbers.

Sorry for the delayed post; things are hectic right now, and look to be more hectic next semester.

Peace!

Sunday, September 16, 2012

Week 2


This week was dedicated to prototyping controls as well as an animation/state system for controlling the characters.

We had a couple of control schemes going around over the past few weeks. While developing the control system, I was originally aiming for a scheme that we had discussed but were no longer going with. This in and of itself was a non-issue, though it does show that this group depends more on docs than others I have worked with. I will have to remember to check the documents on the project server periodically to make sure I am on the right track.
Still, the control system is pretty easy to change and quickly modify. Unity's input system is one of its weak points, in my opinion. It is a dated system that could certainly use some updates. It requires one to manually input each control, and in the case of having multiple controllers on one computer, one must put in each control for each controller, a painful, repetitive task, to say the least. Still, it is done. On the player controller side of things, I built the controls in such a way that the game logic is not looking for particular axes; rather, the controller accesses objects that encapsulate the accessors to Unity's input system. This indirection allows control input to be changed without having to touch the code; most of it can be changed in the editor. This will help us modify and scale the controls.
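A minimal sketch of that indirection, assuming wrapper and field names of my own invention (Input.GetAxis is Unity's real API; the [System.Serializable] attribute is what makes the wrapper editable in the inspector):

```csharp
using UnityEngine;

// An inspector-editable wrapper around one Unity input axis; game code
// reads Value instead of calling Input.GetAxis with a hard-coded name.
[System.Serializable]
public class AxisAccessor
{
    public string axisName = "Horizontal"; // remappable in the editor
    public bool invert;

    public float Value
    {
        get
        {
            float v = Input.GetAxis(axisName);
            return invert ? -v : v;
        }
    }
}

public class FighterController : MonoBehaviour
{
    public AxisAccessor moveX, moveY;

    void Update()
    {
        Vector2 move = new Vector2(moveX.Value, moveY.Value);
        // ...apply movement...
    }
}
```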

As for the state system, I tried a couple ways of going about it, as well as testing the blending of animations into one another. I tried to keep in mind that we want to try making this work in a networked environment. I will be testing the feasibility of networking this coming week.

Basically, the state system will control pretty much every action the player takes in the game. It will also be responsible for controlling the animations and movement of the characters. The players will always be in a state: when they are knocked down, when they are idling, when they are in the air, as well as one for each attack they are in. These states will control which states the player can go to from that state, as well as what inputs take them there. For instance, the idle state will allow the player to jump, and no other will allow the player to begin jumping, though I may change this to allow jumping to be queued after an attack (so if they hit up while an attack is executing, it will cause them to jump as soon as they are out of that state). During an attack, the controls are checked for an input for the next attack. At a certain point in the execution (let's say 50%) the input is no longer queried, and at a later point (let's say 75%) the state begins its transition (blending animations together, etc.). This delay between decision and execution is to allow some time for when we implement networking; the delay could be used to send the next state to the other player. This is all theory that will be put into implementation next week. The state system minimizes what data needs to be sent between clients, so hopefully this will make networking doable. At the same time, networking is certainly not necessary, so if we discover that getting the level of precision we need for a fighting game is not possible, we can scrap online multiplayer altogether.
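The timing windows above can be sketched like this (the class, the 50%/75% thresholds, and all the names are illustrative placeholders for the theory described, not the project's actual implementation):

```csharp
// Sketch of an attack state's decision/transition windows, driven by
// the normalized progress t (0..1) through the attack animation.
public class AttackState
{
    const float InputCutoff = 0.5f;      // stop reading next-attack input
    const float TransitionStart = 0.75f; // begin blending to the next state

    string queuedNext;

    public void Tick(float t, string attackInput)
    {
        // Still inside the decision window: latch the next attack.
        if (t < InputCutoff && attackInput != null)
            queuedNext = attackInput;

        // Past the transition point: start blending. The gap between
        // the two thresholds is the slack a networked peer could use.
        if (t >= TransitionStart && queuedNext != null)
        {
            BeginTransition(queuedNext);
            queuedNext = null;
        }
    }

    void BeginTransition(string next) { /* blend animations, notify remote */ }
}
```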

So yeah, controls, states, animation, and stuff. Hopefully the prototype I developed will allow Donny and Eric to test some of their controls and gameplay. We are trying both a brawler like SSBM, and a classic fighter like Tekken/Street Fighter, so we will see how this develops.

Saturday, September 8, 2012

Week 1

Anyway, this week has only had a little work on prototypes, sadly. A great deal of our time has been spent discussing and merging the different ideas we had into one set of mechanics to test together. My time was spent setting up our repo and doing some quick work looking into the possibility of making our game in 2D.

After working on animation in 2D, I am personally against the idea.

It really depends on what Rebecca believes she can do in our timeframe, and whether she thinks she can do more art in 2D than 3D. It seems for the most part she cares little either way.

Working in 2D in Unity is interesting. I worked with it on Dungeons of Londree, where our environment was completely done in 2D art. Pieces of it were animated, so I thought I might be able to reuse the code for that. Unfortunately, that code was made for a single animation running through a single texture source. Parts were relevant, but it would need some modification (animation starting at certain frames of the sprite sheet, etc., instead of the whole sheet).

We were also working in M.U.G.E.N and looking at other 2D engines. Donny was looking into it for a prototype of our control scheme. It was not as easy as we had imagined, and we spent a good deal of time looking at its design (as it's a famous 2D fighting engine). We looked at editors made for it, as well as engines made in other languages (Flash, C++, XNA). There were many things these games had that I had not thought about, including state settings, sounds, etc. I had thought about them at a game-wide level, but one thing that each of these engines made clear was how deeply they were integrated into the animations of the character. Most had sounds played from individual frames and states set from individual frames, as well as individual hitboxes set up for each frame. I had known about the hitbox issues, but looking at the massive amounts of data for the frames made one thing quite clear: doing 2D means A LOT of work for the designers. Each frame needs a hitbox, and while many can have the same hitbox, they still need to be set up. This could be reduced programmatically, and I could develop a system to automatically add boxes based on transparency, but it would still need a designer to edit EVERY. SINGLE. FRAME. Not cool.

Working in 2D also means we cannot make full use of Unity's animation system, or of its collision system. We would still be using the collision system to do the detection of the hits, but we couldn't just attach a collider to the fists and wait for a collision; we would have to manipulate a collider box through script every time the frame changed, a huge pain in the ass in comparison. I also think it would be less stable: if we are changing the size of a collision box and the new box moves into another, we might not get the same effect as we would with a normal collision that happens through transform movement (so instead of translating something, we would be changing the size of the box, and I'm not sure Colliders would fire their events in that situation). We could also still use Unity's animation system, but the frame changing would need to be programmatically created from an editor, and it would be much the same as any other editor. We would be unable to blend animations together unless we wanted to do a multilayer sprite (body is one sprite set, each arm is a sprite set, etc.), which would be an immense amount of work and setup. The alternative, of course, is a single sprite for animating, but that is an immense amount of work for the artist, as every combination of states that would look different would need to be drawn, and programmatically we would be unable to blend.


3D, on the other hand, allows us to make full use of most of Unity's systems. The engine is built for 3D, and has powerful editor systems. 3D animations would be brought in with the FBX importer, and we could add sound effects to the animations through Unity's animation event system, as well as state changes (or at least some of them, as others would be dealt with elsewhere, like when they get hit or are stunned).

Collisions would be much easier: instead of 2D hitboxes with continually changing properties, we could add simple colliders to the bones of the rig, then check the collider around the fist for hits against another player. This could be set up once, as the colliders would be animated with the character.

Animations can also be blended together, both with standard blending and additive blending (see Unity Animation Scripting). Currently this is all done through script, but I have some stuff from the summer that opens it up to be edited a little more closely, removing particular transforms from an animation (so we could, for instance, take the arms of a punch animation and overwrite another animation's arms with it).

There are also many shaders that work for 3D that would be unavailable for 2D (since there would be no vertex data), though there are also pixel shaders that we could use on 2D but not on 3D.

There is the potential of using physics for the knockbacks, instead of having them animated. This is not planned at the moment, but it is a thought for reducing animations; having the falling done through physics would take a lot of the workload off Rebecca.

I also worked on Unity networking, getting the basic example working and looking at update times. While I am really interested in doing multiplayer, I won't lie, I am quite worried about networking. It doesn't seem particularly difficult, but in a game where timing is EVERYTHING, having even a fraction of a second of delay could be critical.

All in all, we got a lot of figuring out done this week. We have more or less decided on going with 3D, we are prototyping the mechanics individually, and the game is coming into its own. Next week we are developing a 3D prototype of the controls on the Xbox controller, and I am doing a proof of concept of the animation system for combos.

WWWWWWWWWWWWWWEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE.