Grapple Go | Sprint 6 | 11/6-11/20
Grapple Go is the game my team and I are developing for our Mobile Game Development class at Chico State. I hold the role of programmer, and my team consists of Justin Culver as producer, James Songchalee as designer, and Sophia Villeneuve as modeler. Our game is a 2D side-scrolling infinite runner, similar to Jetpack Joyride, where the player mainly interacts through a grappling hook used to dodge obstacles and enemies. The player's goal is to travel as far as they can while racking up a score that can later be spent on upgrades to help them in their next attempt.
This post details some of my process working on the game during Sprint 6 of our development phase.
Challenges & Problems
| Pause Demo |
Sprint 6 actually turned out rather problem-free, at least compared to the past few sprints. I didn't encounter any major challenges requiring me to refactor core mechanics, and I'm pretty satisfied with how my work turned out. I think this is partly because my tasks this sprint were comparatively smaller, more isolated, lower-stakes mechanics. There was also a lot I had to learn for my tasks this sprint, so most of the challenge came from grasping and applying new concepts.
| Settings Demo in menus and during run |
For example, one feature I completed this sprint was a Pause and Settings menu, allowing the player to pause the game during a run and access settings (for now, just overall game volume) from any menu in the game, including pause. I had never worked with pause functionality before, though it was easy to grasp, and I had limited experience creating menus like these. In the end, I think the biggest challenge was organization, a common one for me. I wanted my Settings menu to be modular and decoupled so it could be plugged into any of our four scenes fairly seamlessly, but I still needed my HUD to be able to activate it in a couple of different ways. I spent a while deliberating and testing, figuring out what functions I needed, which scripts they belonged in, and what variables were necessary to execute the effects I wanted, all while trying to reduce the number of one-off events flooding my event bus. Ultimately, I got it to a point I'm relatively happy with: the Settings menu has its own Canvas in the scene with its own script that listens for event calls to turn off its elements and sends an event call to reactivate the previous UI when the player closes it. This works quite well and fulfills the objective of decoupling; I think the only compromise is a somewhat dissatisfying number of extra events.
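The decoupled, event-driven structure described above can be sketched roughly like this. This is a minimal illustration, not the project's actual code; names like `UIEvents` and `SettingsMenu` are assumptions.

```csharp
using System;
using UnityEngine;

// Hypothetical event bus: any UI script can raise or listen for these
// events without holding a direct reference to the Settings menu.
public static class UIEvents
{
    public static event Action SettingsOpened;
    public static event Action SettingsClosed;

    public static void RaiseSettingsOpened() => SettingsOpened?.Invoke();
    public static void RaiseSettingsClosed() => SettingsClosed?.Invoke();
}

// Lives on the Settings canvas in each scene; reacts to events rather
// than referencing the HUD or pause menu directly.
public class SettingsMenu : MonoBehaviour
{
    [SerializeField] private GameObject panel;

    private void OnEnable()  => UIEvents.SettingsOpened += Show;
    private void OnDisable() => UIEvents.SettingsOpened -= Show;

    private void Show() => panel.SetActive(true);

    // Hooked to the menu's close button; tells whoever opened us to reactivate.
    public void Close()
    {
        panel.SetActive(false);
        UIEvents.RaiseSettingsClosed();
    }
}
```

Because the HUD only ever calls `UIEvents.RaiseSettingsOpened()`, the same Settings canvas can be dropped into any scene unchanged, which matches the decoupling goal, at the cost of those extra events.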
As is a common theme with my work, I think these challenges definitely could have been aided by more thorough planning. A class map could have potentially helped me work out the decoupling struggles more easily, giving me time to brainstorm and problem solve before taking all the extra time necessary to repeatedly change and test my code.
Work Completed
I think I've done a much better job this sprint in terms of the amount of work completed. I think I had a much clearer vision of what I wanted for my tasks and I felt much more motivated.
The first thing I completed, technically at the end of last sprint but after the cutoff, was implementing the audio system. This was a somewhat frustrating task since Unity's built-in audio management is fairly limited. While I wasn't able to get too expansive a system in place, I was able to put in some organization that lets me easily, if manually, plug in sound effect calls wherever they're needed in whatever script.
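A common shape for this kind of "call a sound from anywhere" organization in Unity is a small singleton wrapper around an `AudioSource`. The sketch below is an assumption about the pattern, not the project's actual system; the class and field names are illustrative.

```csharp
using UnityEngine;

// Hypothetical audio helper: one persistent object owns the AudioSource,
// and any gameplay script can trigger a clip through it.
public class AudioManager : MonoBehaviour
{
    public static AudioManager Instance { get; private set; }

    [SerializeField] private AudioSource sfxSource;
    [Range(0f, 1f)] public float masterVolume = 1f; // driven by the Settings menu

    private void Awake()
    {
        if (Instance != null) { Destroy(gameObject); return; }
        Instance = this;
        DontDestroyOnLoad(gameObject); // survive scene loads
    }

    // Any script can call AudioManager.Instance.PlaySfx(someClip).
    public void PlaySfx(AudioClip clip) =>
        sfxSource.PlayOneShot(clip, masterVolume);
}
```

Routing every effect through one volume-scaled call is also what makes a single "overall game volume" slider feasible.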
| UI Scaling Demo on Death Menu |
The next issue I worked on was fixing the UI scaling. When I created and implemented all the UIs, I designed them expressly for the screen size I was testing on, because I simply didn't know how to scale the UI to the screen size and dimensions of whatever device the player is using, and I knew I'd need to fix this later. I had done some UI work on previous projects and was under the impression that scaling would be a rather complex process, so I planned to visit another professor of mine during their office hours for help. However, for a myriad of reasons, I never found the time to visit them, so I decided it was getting too late and I just needed to try on my own. I looked up some resources and a tutorial online, and discovered it was a very simple fix utilizing a function I simply had no idea about before. While it took some adjustment, I was able to quickly implement it and tune my UI. As shown in the demo, the UI correctly scales to fit most aspect ratios; the remaining problems are mainly centered on iPads, which we don't need to worry about since we're only developing for Android.
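The simple fix described above is very likely Unity's Canvas Scaler in its Scale With Screen Size mode, which handles exactly this problem; that's an inference, since the post doesn't name the function. Normally these values are set in the Inspector, but as a sketch in code (the reference resolution here is an assumed design-time size):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Illustrative setup: make a Canvas scale with the device's resolution
// instead of staying at fixed pixel sizes.
[RequireComponent(typeof(CanvasScaler))]
public class ScalingSetup : MonoBehaviour
{
    private void Awake()
    {
        var scaler = GetComponent<CanvasScaler>();
        scaler.uiScaleMode = CanvasScaler.ScaleMode.ScaleWithScreenSize;
        scaler.screenMatchMode = CanvasScaler.ScreenMatchMode.MatchWidthOrHeight;
        scaler.referenceResolution = new Vector2(1920, 1080); // assumed design resolution
        scaler.matchWidthOrHeight = 0.5f; // blend between matching width and height
    }
}
```

With this, UI elements keep their relative size across aspect ratios; extreme ratios (like an iPad's 4:3) are where the width/height match value needs the most tuning.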
| Gun Powerup Demo |
Next, I worked on the final two powerups: a gun that continuously shoots forward and can destroy enemies, and dynamite that can be used to destroy terrain obstacles in front of the player. We felt that having all of the powerups run on simple, uniform durations was monotonous and a bit uninteresting, and our playtesters agreed. Thus, we decided that the Dash and Dynamite powerups would both work well on a resource basis: every time the player picks up a Dash or Dynamite powerup, they gain one charge for that ability, which they can activate once at any time they'd like. They start with a limit of three charges for each ability, upgradeable to five each. This encourages the player to use these abilities more strategically and breaks up the gameplay loop a bit more.
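The charge logic itself is small enough to sketch as a plain class. The three-charge base and five-charge upgrade come from the design above; the class and method names are hypothetical, not the project's code.

```csharp
// Minimal charge-pool sketch for Dash/Dynamite-style powerups.
public class ChargePool
{
    public int Max { get; private set; }
    public int Current { get; private set; }

    public ChargePool(int max = 3) { Max = max; } // base limit of three

    // Called when the player grabs a pickup; charges clamp at the cap.
    public void Gain() { if (Current < Max) Current++; }

    // Called when the ability button is pressed; false means no charge left.
    public bool TrySpend()
    {
        if (Current == 0) return false;
        Current--;
        return true;
    }

    // The upgrade shop raises the cap (to five, in our case).
    public void SetMax(int max)
    {
        Max = max;
        if (Current > Max) Current = Max;
    }
}
```

Keeping the pool as pure state like this also makes the HUD's job simple: it just reads `Current` and `Max` to draw the charge icons.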
| Dash Charges Demo |
| Dynamite Powerup Demo |
While implementing the resources for the Dash and Dynamite powerups, I ran into the issue of double-reading inputs for both the movement, detected by any on-screen tap, and the powerups' UI buttons. I knew this would be an issue when I changed the input detection for the movement, and now I needed to fix it. I went through a lot of different attempts at ignoring button-press inputs for the grapple, and ended up learning a few more in-depth things about Unity, specifically the event execution order. The issue was that my inputs were being sent via the event system, which runs at the beginning of Update in the execution order. However, actions taken during the frame in which new inputs are detected can't make use of those new inputs; they have to use the previous frame's input information. Because of this, I couldn't just check whether a button was pressed the same frame a movement input was detected, since the button's input data was out of date for that frame. I first tried waiting a single frame before using my grapple input, but this still caused errors. Next, I tried performing a raycast from the screen position of the tap to see if the player had tapped on any UI; if they had, the grapple input was ignored. This worked rather well, but had the unintended consequence of breaking one of my teammates' work, where a tutorial was placed as a UI element over the majority of the screen, preventing the player from moving almost anywhere.
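The "raycast from the tap position to see if it hit UI" check can be done with the EventSystem's own UI raycast, roughly as below. This is a sketch of the technique, assuming the legacy `Input` touch API; the class and method names are illustrative, and it reproduces the tutorial problem too: any raycastable UI covering the screen will swallow movement taps.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.EventSystems;

// Illustrative movement-input script that ignores taps landing on UI.
public class GrappleInput : MonoBehaviour
{
    private static bool TapHitsUI(Vector2 screenPos)
    {
        var data = new PointerEventData(EventSystem.current) { position = screenPos };
        var results = new List<RaycastResult>();
        EventSystem.current.RaycastAll(data, results); // raycast against all UI
        return results.Count > 0;
    }

    private void Update()
    {
        if (Input.touchCount == 0) return;
        Touch touch = Input.GetTouch(0);
        if (touch.phase != TouchPhase.Began) return;

        // Skip the tap if it hit a powerup button (or any other raycastable UI),
        // so the button press doesn't also fire the grapple.
        if (TapHitsUI(touch.position)) return;

        FireGrapple(touch.position); // hypothetical grapple handler
    }

    private void FireGrapple(Vector2 screenPos) { /* ... */ }
}
```

One mitigation for the tutorial overlay problem is unchecking Raycast Target on UI elements that shouldn't block input, so they're invisible to this check.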
One of the last things I completed was a visual of a chain between the player and the grapple hook that tiles and scales to fit between the two. This was quite an interesting feature for me since it involved material rendering and material property blocks to customize the chain's sprite material per instance, which I'd never done before.
| Chain Demo |
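The positioning and tiling half of that feature can be sketched with a `SpriteRenderer` in Tiled draw mode, where growing the sprite's size repeats chain links rather than stretching them; the per-instance material customization via `MaterialPropertyBlock` would layer on top of this. Everything here, including the field names, is an illustrative assumption rather than the project's code.

```csharp
using UnityEngine;

// Sketch: stretch a tiled chain sprite between the player and the hook.
// Assumes the SpriteRenderer's Draw Mode is set to Tiled in the Inspector.
public class ChainVisual : MonoBehaviour
{
    [SerializeField] private Transform player;       // illustrative references
    [SerializeField] private Transform hook;
    [SerializeField] private SpriteRenderer chain;

    private void LateUpdate()
    {
        Vector2 delta = hook.position - player.position;

        // Place the chain at the midpoint and rotate it along the grapple.
        chain.transform.position = (Vector2)player.position + delta * 0.5f;
        chain.transform.rotation = Quaternion.FromToRotation(Vector2.right, delta);

        // In Tiled draw mode, widening the size repeats links instead of
        // stretching the sprite.
        chain.size = new Vector2(delta.magnitude, chain.size.y);
    }
}
```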