Ludum Dare 51
Just so everything's out in public: I've been (very) hard at work on my utils library, github.com/browndragon/util .
I highly recommend it; it has:
Topics: scriptable object UnityEvent implementations which let you fully disconnect pub from sub (plus extra tracing & debugging)
CLONES. Trace prefab asset<-->instance through to runtime; easy pooling-by-prefab and returning.
Isometric tile utilities like Billboard (works with tilemaps)
Infrastructure Utilities like a Coroutines (get Update ticks in a scriptable object), StaticAsset (automagically get Camera.main-like behavior for arbitrary scriptable objects), and various Invokers (wire up collision or mouse behavior through the inspector for testing)
QOL Fluent utilities like T OrThrow(T), bool IsEmpty(T), etc; Chain for implementing HashCode and IComparable, tern for implementing behavior trees
Math & additional datatype & inspector support like MinMax and all the collections (serializable map, etc); also PID controller and easing and generic math and generic conversion and...
Maybe even more stuff I've forgotten.
Anyway, please give it a go and LMK what you think.
What !@##ing angle are these assets
So you have a bunch of "isometric" assets and they didn't come with a precise spec sheet. What's their math?
Maybe you need to figure out their geometry to tile them gaplessly in (e.g.) Unity -- say, Kenney's isometric blocks, as I was trying.
Maybe you want to billboard them in a world of true-3D assets, so you'll have a camera which "undoes" the transformation that created them. That way the camera stares at each sprite straight-on (pixel-perfect, baybee) while the sprites sit embedded in a geometrically orthogonal world (consider the Legend of Zelda lean). As I was also trying.
A common, semi-ideal unity-style rotation would be (30°, 45°, 0°) (remember to adjust positions to make that work -- (-10, +10, -10) for instance?). This assumes a few things:
you're laying things out unity style, which is confusing, because it uses xz as the ground plane -- remember, Vector2 "xy" vectors will need to be swizzled so that vector2.y<-->vector3.z!
You're drawing videogame isometric, where the 30° rotation lets you draw graphics where the edges slope exactly 2:1, for ease of drawing & fitting the crispy corners.
Kenney, bless his open-sourcing heart, did not pick these numbers. Nor (afaict?) did he document the numbers he did pick. But we are not lost yet!
Each asset is 111x128px; since they're drawn cubes (not diamonds), you only want to do the math on the apparent top face. Luckily, the bottom corner of the top face is the mathematical center of the asset, so we know that the top face is 64px tall, which means its height-over-width is a suspicious 0.5766 -- very close to 1/√3 (0.5773) when you consider the pixels had to be integers!
sin⁻¹(1/√3) is our old friend 35.264° -- this is a true isometric projection!
Ok hotshot, how about the resolution of the images? I've set up a camera at rotation (35.264, 45, 0), offset (-10, +10, -10); I've set up a cube at (0,0,0) (no rotation!), and a sprite inside of that cube at (54.735, 45, 0) (remember we want 90° minus the camera rotation here, since we're "lying down from vertical", not rising up from horizontal). But it's the wrong size; these assets aren't 111 pixels per unit! Waah!
Correct: they are 111·tan(35.264°) ≈ 78.4889 pixels per unit. Duh.
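Collecting the arithmetic from above in one place:

\[
\frac{\text{top-face height}}{\text{top-face width}} = \frac{64}{111} \approx 0.5766 \approx \frac{1}{\sqrt{3}},
\qquad
\theta = \arcsin\!\left(\frac{1}{\sqrt{3}}\right) \approx 35.264^\circ,
\qquad
\text{pixels per unit} = 111\,\tan(35.264^\circ) \approx 78.49.
\]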
One last fun math fact. Could I/should I just lean the sprite such that its topmost pixel (a drawing of a corner of a cube) and bottommost pixel (also a drawing of a corner of a cube) line up with the actual 3D top & bottom corners? What's the angle of that? Why, it's just 35.264° again -- arcsin(1/√3)! Magical.
Unity PhysicsRaycaster isn't Physics2DRaycaster
Unity has 3 ways of doing UI:
IMGUI: Immediate GUI. The old, bad way of doing things. Still how you do in-inspector GUIs, gizmos, and debug (more or less). Draws everything explicitly to the screen frame by frame, vampirically hanging off of some other gameobject's OnGUI method.
uGUI: UnityUI; the old-new way of doing things. Still uses game objects, but child to a Canvas instead of embedded in the scene. But as a consequence of this, they work sort of like a parallel sprite-renderer universe. This added the EventSystem, which is why I've called you here today.
UI Toolkit: The new, not-ready-yet hotness. It's one root game object with an XML building language, and then an internal layout system that is built anew. It's fine. It's probably more efficient, and there's a lot of elegance to be had in designing your layout in a language, not a tool. It doesn't have implications for non-UI architecture.
Okay, so the EventSystem does/can/should have implications for your game architecture. It's a little like the camera: clearinghouse for stuff, has its own internal discovery mechanism (dreaded static singleton state). It defines the bindings between input and its own well-known events ("north! fire! onHover! select! cancel!").
It pubsub-routes events (like onHover and onClick and so on) via raycasting. Receivers implement (e.g.) IPointerEnterHandler and its OnPointerEnter method on your game objects' MonoBehaviours; this is much more legible than the old-style magical optional MonoBehaviour.OnMouseEnter methods.
But somewhere you also need to say "this is the raycaster for the layer called UI and it goes first (probably a GraphicRaycaster-must-attach-to-Canvas which is weird); this is the raycaster for the Default layer and goes second (say, a PhysicsRaycaster-must-attach-to-Camera which is confusing)" or whatever. It wasn't working for me. I've done wacky shit to the camera perspective matrix (isometric haxx), and so I assumed I'd done something there to confuse it.
Only now, reading the code, do I discover/remember Physics2DRaycaster -- the much more likely reason that my BoxCollider2D objects wouldn't be receiving clicks.
God, for how fragile this stack is, and how many revs it's gone through, and how implicit all the documentation is ("Maybe these interfaces are only SUPPOSED TO work for canvas elements?!"), etc etc, you'd think I'd learn.
Dotnet Collections
I've been working on some Things. More posts as they release. But anyway, I've been particularly indulging in infrastructure.
My hero (for purposes of this post) is perhaps Josh Bloch (though I'm sure I'll bring shame to his name). I really love the java collections API. I'm really sad to be operating in dotnet land without it. So, uh, I wrote it.
Here are some complaints: Dotnet dictionaries don't serialize in Unity. Dotnet dictionaries and sets don't support ordered traversal, so it's harder to write tests. Dotnet doesn't natively support more complex types (particularly bimaps and multimaps).
So! I wrote them. Let me know if they help you!
The core types (Set & Map) are es6-style sets and maps; insertion order enumeration. They're implemented on a linked list, so I haven't tried to make them efficient per se (lots of little allocations). The unity code looks like it doesn't have tests but don't be fooled; that's just me writing all of the code I can in dotnet land to avoid dipping into the unity test framework.
Which is not to say they're well tested; caveat emptor.
[video]
I'm not dead.
I've switched games, technology stacks, and various other things. The current project is My Name Is Mud: the turn-based story of a raindrop, with the idea being that you go on a little adventure in the dirt before evaporating, with a replay mechanic that lets you take advantage of erosion.
Here we can see a desperate fight against (home brewed) gravity.
The Problem With Arcade Physics
TL;DR: I have a fix for arcade physics, but it's disruptive, so I'm not trying particularly hard to get it upstream. Let me know if I should! My asks (etc) are open!
If you want it, it's available @ [my github](https://github.com/browndragon/phaser/tree/solidArcade)
Phaser comes batteries-included with arcade physics, an AABB basic physics engine. So of course I'm building fidough, my ridiculous boxes on conveyor belt game with it. Which is insane. It's fine.
But I hit a problem: naive conveyor belts didn't work:
[video: the naive conveyor belt failing]
Okay. So why does that happen?
The code was simple:
function overlap(sprite, tile) {
    const {x, y} = this.offsets[tile.index];
    console.assert(Number.isFinite(x) && Number.isFinite(y));
    ticknudge(sprite, x, y);
}

function ticknudge(sprite, x, y, still=.01, nudge=.001) {
    sprite.body.x += x;
    sprite.body.y += y;
    sprite.body.updateCenter();
}
So what gives? that looks right! It isn't.
_dx and _dy: why?
The old code had a simple flaw: it had multiple representations of the same data, and needed to remember them all. The core metaphor of arcade physics separation is to determine the motion of the two intersecting objects and to restore them roughly antiparallel to the direction of motion.
This is deadly: motion is underspecified. "Antiparallel" isn't much better specified. And motion brings in the notion of time.
Obviously the old code had a `velocity` concept -- it is a physics engine. But velocity is tricky. In a physics engine, it represents the continuously updated per-frame motion component, but it is obviously not the actual total amount of movement that was definitely executed. For instance: if the physics body has a lot of drag and perpendicular acceleration and lots of colliding objects, its actual displacement over the last tick (upcoming AND over the last frame!) isn't necessarily the velocity times time.
This is a problem for me. Remember, my whole schtick is walkways, which are effectively sticking an object in a constant velocity frame of reference relative to the camera, which for one reason and another isn't ideally represented with careful force-and-drag (or absolute max veloc), but instead a per-tick direct modification to colliding objects' velocity.
The first advice I got was to ensure that if I modified `position.x`, I also modified `_dx`. That's definitely good advice; the way that Arcade Physics detects motion is through that _dx thing; if I modify position.x of a completely still object, it won't reflect in its actual motion, and so it won't reject from the walls. QED.
Okay, so let's grab a snap with the above code modifying _dx & _dy whenever we modify position.x and position.y. Spoiler, it still doesn't work!
[video: still not working]
This is the `embedded` logic biting us, deciding that two colliding objects without motion can't be repaired. At first, I thought that Phaser was wrong when it estimates how much an arcade body has moved, because it uses velocity ("per-tick added motion") instead of a continuously evaluated _dx & _dy ("per-tick observed displacement"); it does so much math already that I just deleted _dx and _dy in favor of subtracting previous position from current position. I also gave it enough intelligence to extrapolate a per-second displacement from the observed previous-tick displacement.
And do you know what? It still didn't work!
On passing data between ticks
The logic was simple. The physics algorithm is:
1. For each body: Start the frame: copy bounds and transform information from the game object back into the arcade body.
2. For each body: Start the tick: calculate the next position from velocity, drag, acceleration, etc.
3. For each collider: Execute it! (And by coincidence, I'm doing object intersection first and walls nearly last.)
4. Go to step 2 if there's more time left in this frame.
5. For each body: End the frame: push position information back to the game object.
So what's missing? The only logical time to snapshot prev (and calculate dx) is on step 2. But my treadmills execute in step 3! So their motion is *always* forgotten by this algorithm, since the snapshot captures the value they'd written; any objects that are stranded within walls seem always to have been there, since the current and previous position are equal as an accident of this algorithm.
The Fix
We truly don't need dx as a body property; it's just the difference of current and previous position, so we can throw that out. But we need to capture previous position in such a way that the velocity in the last tick and the wall rejection, tread application, etc. are included! The obvious fix is to calculate a snapshot just *after* calculating position in step 2, but to put it in place just *before* step 2 on the next run through. As a result, the displacement on each tick is the previous tick's collider adjustments, any updates from the game object position [for ticks that span frames], and the current round's velocity update.
There's a risk here! We actually sync out to the game object a value which is never actually used as a previous value in the physics engine. This doesn't seem to be a problem in practice, but perhaps there are exotic collision schemes which pose a problem here.
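The shape of that change, as a minimal sketch (plain objects and illustrative names -- not the actual Arcade Physics internals, and not my patch verbatim):

class SketchBody {
    constructor(x = 0, y = 0) {
        this.position = { x, y };
        this.velocity = { x: 0, y: 0 };
        this.prev = { x, y };        // where this body "was" for the current tick
        this.pendingPrev = { x, y }; // snapshot taken just after integration
    }
    startTick(dtSeconds) {
        // Promote last tick's post-integration snapshot: anything colliders wrote
        // after it (treadmill nudges, wall separation) now reads as real displacement.
        this.prev = { ...this.pendingPrev };
        this.position.x += this.velocity.x * dtSeconds;
        this.position.y += this.velocity.y * dtSeconds;
        this.pendingPrev = { ...this.position };
    }
    // dx/dy are derived on demand instead of being stored as _dx/_dy fields.
    get dx() { return this.position.x - this.prev.x; }
    get dy() { return this.position.y - this.prev.y; }
}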
This version no longer uses velocity as an input to collision or separation, preferring to use calculated-on-demand dx & dy values. Everywhere. This means that everything "just works":
If you modify your game object's position during e.g. `update`: the body will be set to that position during its next step 1 as before (and then have its velocity added), but newly, the body's prev during update & collision will be the value just after the velocity was added
If you modify your body's position during a collider: the body's prev was snapshotted before any colliders ran, so that new position will count as motion for collision and embedding logic.
So, the ridiculous finished project: fully generic relative motion on walkways in arcade physics. Whew!
[video: relative motion on walkways, working]
[video]
Liquid Tiles and State Machines
Hello again! I got stuck in a rabbit hole, but I had fun.
One of the things I built is strictly less fun than the other, so I'll lead with it even though it's not what you wanted. The other is the pretty video above.
State Machines
TL;DR: See https://www.npmjs.com/package/@browndragon/sm .
I got annoyed that the big state machines in javascript were too verbose for what I wanted, so I wrote my own. I think I've already written this post, but this time it's even better.
Each "state" (inconsistently called a node) is a function that returns another node. You load some initial node up in a state machine (here called Cursor). The Cursor invokes its current node whenever you call next (and whatever it returns is the next node). However! It's invoked in the context of the Cursor itself, so you get some interesting bells and whistles automatically: this.here is the current node, for instance -- normally it's hard to get access to that in javascript, but not so here. Since they're each function objects, they're less prone to object equality stupidity of certain sorts, and more prone to it of other sorts. There is no method to predeclare the set of states that exist, so your states can create states (by returning inner functions for instance). These are all features I thought I'd need ;).
You can write nodes that assume they're useful for their side effects, or nodes that assume you'll examine the state machine's here. Cursors implement the iterable & iterator interfaces, so you can use them in loops and such also.
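A minimal sketch of that shape (illustrative only, not the actual @browndragon/sm API):

class Cursor {
    constructor(start) { this.here = start; }
    next(...args) {
        if (!this.here) return { done: true };
        // Invoke the current node with `this` bound to the cursor (so nodes can
        // read this.here); whatever it returns becomes the next node.
        this.here = this.here.apply(this, args);
        return { done: this.here == null, value: this.here };
    }
    [Symbol.iterator]() { return this; }
}

// States are plain functions that return the next state.
function idle() { return walking; }
function walking() { return undefined; } // returning nothing halts the cursor

for (const state of new Cursor(idle)) console.log(state.name); // logs "walking"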
However, for more power you need the full Machine (which extends Cursor). This does things like track state for every node (which is why the nodes are not called states...), with advanced features like history, traps (so that if a node returns undefined it can be rebound to actually go to handleUndefined()), and similar. This makes them O(n) in the number of nodes (and indeed, O(n) in the number of calls to next), but sometimes that's the featureset you need!
Give it a try. Or don't!
Liquid Tiles
TL;DR: The demo above, but the code isn't published anywhere [yet].
I kept playing with dough connected by springs, but I think I'd need to do tile deformation or shader tricks to make the dough look good. As written, the arbitrary offsets allowed glue tiles to shift, leaving gaps. Ensuring coverage would require stretching the tiles or having additional backing color. Or: a change in scheme.
Dough is just a really thick liquid, right? (Over a long enough timescale, aren't all solids?) So how would I model a liquid? I might do it with freely chosen blocks connected by links (the current dough system), but that would likely be too chaotic. Instead, I'd probably split the liquid up into regular domains and analyze each domain. So I did that! Liquid tiles are the result, a system similar-to but different-from phaser Tilemaps, but providing a similar grid-based interface to the world.
First, the data structure
I'm continuously at a loss for high quality datastructures, so instead I write my own low-quality ones. I needed a store of tile information -- unindexed integer 2-tuple keys, arbitrary[1] values. Easy enough; I wrote a dense one which uses an allocated array of fixed size (so that array[y*width+x] is the value for (x,y)) and a sparse one which uses fully arbitrary (x,y) pairs and stores points under their stringification. As I write this, I realize that these data structures are not so very different in javascript, where arrays can arbitrarily allocate keys, but what's done is done.
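Sketched out (illustrative, not the published code):

class DenseStore {
    constructor(uCount, vCount) {
        this.uCount = uCount;
        this.vCount = vCount;
        this.data = new Array(uCount * vCount); // row-major: data[v * uCount + u]
    }
    get(u, v) { return this.data[v * this.uCount + u]; }
    set(u, v, value) { this.data[v * this.uCount + u] = value; return this; }
}

class SparseStore {
    constructor() { this.data = new Map(); }
    // Points are stored under their stringification.
    get(u, v) { return this.data.get(`${u},${v}`); }
    set(u, v, value) { this.data.set(`${u},${v}`, value); return this; }
}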
I called the keys in this datastructure x,y tuples, but that's not entirely true: they're really u,v tuples; I wrote a little tilemath class to hold the geometry for mapping between an XY space (like phaser) into the UV space of the tiles (like the tilemap indices) and vice-versa. I am pretty sure it still has some ugly edge effects (tiles do nothing to fix the default anchor(0.5, 0.5), potentially favoring the top/left sides! etc), but it's functional by visual test. The naming scheme (xy space vs uv space) provides very sensible method names -- u(x) is pretty unambiguous. There's no obvious uv analogue to width and height, so I settled for uCount and vCount, which is what it is.
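The mapping itself is just scale-and-offset; a sketch with assumed field names (the real class is also where the anchor/edge worries live):

class TileMath {
    constructor(originX, originY, tileWidth, tileHeight) {
        Object.assign(this, { originX, originY, tileWidth, tileHeight });
    }
    u(x) { return Math.floor((x - this.originX) / this.tileWidth); }
    v(y) { return Math.floor((y - this.originY) / this.tileHeight); }
    // Back out to world space at the tile's center (the default 0.5 anchor).
    x(u) { return this.originX + (u + 0.5) * this.tileWidth; }
    y(v) { return this.originY + (v + 0.5) * this.tileHeight; }
}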
Second, a dip in the Pool
Obviously, we need a Pool of tiles (where tiles are just managed instances of Image, Sprite, or subclasses). A Pool is obviously a Group[2], providing mechanisms to manipulate its managed contents -- putTileAt and removeTileAt for instance. But then the next question: what are you putting in these tiles; how are you passing the grid-based information which you need to pass to them into them? I say that Tilemap got this right, you're passing them a tileId (whatever your arbitrary first parameter to putTileAt is); I say that Tilemap got this wrong in that it knew that tileIds were lookups into arrays which were preregistered along with spritesheet geometry etc.
Everything else: mappings and shadows
Anyway: I created Conformers to address the problem of how to map tileId onto actual asset. Conformers are functions which take a tile entry (a gameObject, uv coordinates, tileId, maybe other stuff) and makes the game object conform with the other parameters. A simple one can setFrame(someTexture, someFrame) by just looking the tileId up in a big array; a more complex one might play(someAnimation) or do wangId calculations or whatever. This is also a great place to put state transition logic, since you can detect whether this conformation is a change from a previous state, or a put for a state that the tile was already in.
Okay! Now we're ready: since I know I want this to follow dough blobs around, and the doughblobs are acted upon by the rest of the physics system, I needed some ability to have a sprite "cast" an effect into the dough tile system. I called this a ShadowPool (which extends Pool extends AutoGroup extends Group). Every element of the shadow pool's WatchGroup casts a shadow into the pool made of tileIds; each tile's tileId is the bitwise OR of its place within the element's boundary (so, for instance, the tile at the upper left corner of an element's boundary is 0b0010: the bit for its lower right corner set). That, at long last, is what the video above is showing, with fancy transition effects.
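A sketch of the corner-bit idea (the specific bit assignments and coverage test here are illustrative, and degenerate single-tile footprints are glossed over):

const TOP_LEFT = 0b0001, TOP_RIGHT = 0b0010, BOTTOM_LEFT = 0b0100, BOTTOM_RIGHT = 0b1000;

// OR a corner bit into each tile the element's uv-space footprint touches: a tile at
// the element's upper-left edge only gets its lower-right bit, an interior tile gets all four.
function castShadow(store, uMin, vMin, uMax, vMax) {
    for (let v = vMin; v <= vMax; v++) {
        for (let u = uMin; u <= uMax; u++) {
            let bits = 0;
            if (u > uMin && v > vMin) bits |= TOP_LEFT;
            if (u < uMax && v > vMin) bits |= TOP_RIGHT;
            if (u > uMin && v < vMax) bits |= BOTTOM_LEFT;
            if (u < uMax && v < vMax) bits |= BOTTOM_RIGHT;
            store.set(u, v, (store.get(u, v) ?? 0) | bits);
        }
    }
}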
Next?
The animation of specific dough elements remains tricky; doughjiggle is still going to look bad under this new quantized regime, even as the interior of the dough looks better. But now I can emulate slugs, and spilled paint, and footprints, and other mass nouns without feeling like I've got to pay the cost of a full tilemap. Indeed, since tilemap layers render in one pass, using a pool even for walls might let me do the fabled "figure in front of bush & behind tree" 3/4 view I've been after this whole time. Certainly the ability to "layer" collisions by material type is very valuable to me, and missing from the current tilemap classes.
I'm now imagining a hybrid scheme: dough is drawn as nodules (free moving spheres of dough with weakly drawn borders) on top of a ShadowPool which draws the base of the nodule, thus the outline of the dough group (wang tiles with strongly drawn borders). Dough regions which quiesce could remove the nodule and mark the tiles from the shadow pool as "permanent", so that they can take over the nodule's mass. Animating the movement of the base can add more detail to this, since it can theoretically hide the quantization by masking portions of the tile and sliding it out in the (known!) direction of change. For instance, if the tile had been undefined and now has the bottom right set, it is clearly sliding in from the bottom right. This will cause slightly strange initial effects (of course), but edge effects are to be expected.
Fast moving dough would be represented as nodules (large borders). Slow speed dough would be thin-border nodules on top of a ShadowPool, sticking-and-unsticking the dough in an unstable equilibrium. Stopped dough would be pure ShadowPool entries. Dough spring would be provided via interaction with the shadowpool.
[1] I mean, arbitrary at first. Obviously they're gonna be tiles. ↩︎
[2] As an implementation note, each Pool is actually a singleton group; that's just more convenient to my way of thinking about these things. ↩︎
[video]
Improved petals
Covering the joints between tiles is... maybe better. It's confusing. Attached above, a much more lumpy-looking wad of dough with somewhat tuned spawn/despawn parameters. Better, less flickery, but not all the way perfect yet. Still, worth committing at this stage.
A few insights. One of the reasons there was so much noticeable edge flicker was because the edges were being drawn by the petals. If we assume instead that the dough has to be a circle, and the petals just block parts of the outline, then this works a lot better -- but leaves the corner petals doing significantly more work, and very likely requiring them to overlap their peers!
The horizontals are pretty unsatisfying with this version. I think it's because the opposite number takes some time to cohere, because the choices are being made separately; the pickier we are about connecting (in order to cut down on flickering), the more time it takes them to fake being two halves of a whole. The velocity calculation can't be helping with that, as the bodies become more and more encumbered.
The diagonals were lazy, there's definitely a better way to draw that.
All in all, this suggests that to get petals working right -- really right -- probably wants a bunch of things.
Better guarantees about pixel overlaps. If each petal is guaranteed (say) that it will only be drawn within 4 pixels of its owner and 4 pixels of a non-owner, then the art style can make better accommodation for that on its outline. This is the fault of the spring system, which is (currently) loosey-goosey about positions, stresses, etc. I tried stiffer springs, and while there might be some benefit there, it's pretty rough. Similarly, the when-they-merge-and-when-they-split logic might help with tuning this, but it seems to introduce a LOT of churn (and it's touching the display list, so it's not free).
Perhaps using corners is the problem. The corner connections are the ugliest art, and in any case, the thing I want to have happen is to have each face connect; if my dough ends up with internal corner voids, it might be worth just special casing that (?). I'm nervous of where this ends up, though; wang 2-edge tiles result in visible internal corners, because they don't know if their neighbors will be set. Using the blob set is the obvious solution here since it does indeed know this fact. Grumble, though -- it doesn't feel low asset-y, and significantly raises the generic cost of this technique.
Perhaps I just need more overlaps between my joints. This is hard to arrange, since the joints are the same size as the blobs they're joining; it would mean using both corner and face (and center) tiles, where the face tiles are a 2-edge wang set. If they're all the same size, then this guarantees 1/4 overlap on each side from corner-to-face, which is very probably enough slop to accomplish our goals. Maintaining these connections would be doable, but a complex pain.
Use a shader for outlining. The internal problems are because my doughblobs have a 4 pixel outline, but if they didn't and I used a shader for outlining, this would just go away, and probably look a lot better too.
This is all thinking out loud, but there's no straightforward way to accomplish my goals. The exterior flickers look pretty good, but now the interior flickers are probably a showstopper.
Frustrating!
[video]
Now with deformation!
Rather than actually work on the game, I got sidetracked by two problems:
1) What is the best way to draw a dough petal? 2) How should hysteresis work?
My fundamental conceit is that dough is a central "blob" drawn without outlines, which is what physics acts on, and a corner-based lattice of joints or (in this post) "petals" which connect the blobs and which do have outlines drawn on them, so that they can represent the edges of the stuff. These petals are where the springs are located, one spring in each corner, joining that corner of the petal to the center of the blob (and maintaining backreferences from the blob).
So in reverse order...
Hysteresis
In this system, the hysteresis & deformation now occurs because the collider tests for velocity as well as position, and only joins together two blobs if their motion suggests they'll continue to be significantly mutually impinging for some time in the future.
I happen to have done this per-axis; it probably would be easier to follow if I'd just done it with all the square roots.
And I changed the per-frame spring application logic (ugh, huge bug, I should move it over to a per-tick application...) to sever springs which grow too long before applying their impulse.
Since these two numbers are not the same, there are some very jarring flickering effects where two nearby petals attempt to intersect before their extension causes them to re-sever. I could probably tune that away by modifying distances; I could also correctly handle axes, and privilege a given choice (do/do not merge joints) by assigning lifetimes to joints and taking that into account in their stretchiness. You can see some of that in the video.
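Roughly, the per-axis join test looks something like this (the thresholds, lookahead, and field names are made up for illustration; the real tuning is the hard part):

// Only join two blobs if, projected a little into the future, they'd still
// overlap by a healthy margin on both axes.
function shouldJoin(a, b, lookahead = 0.25, minOverlap = 4) {
    const ax = a.x + a.body.velocity.x * lookahead;
    const ay = a.y + a.body.velocity.y * lookahead;
    const bx = b.x + b.body.velocity.x * lookahead;
    const by = b.y + b.body.velocity.y * lookahead;
    return Math.abs(ax - bx) < a.width - minOverlap
        && Math.abs(ay - by) < a.height - minOverlap;
}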
Art
There's a not-very-deep but a little-bit-deep question about how to draw the art. Imagine the northeast corner petal (so that only its southwest corner is shared with a blob). Should the petal cover only the southwesternmost 4 pixels of its art? The SW quadrant? A 3/4 mass centroid? nearly the whole thing, just drawing the far border?
It has to do with where you pin the petals, and how much overlap you design them to have. If they jiggle as much as mine do, because they're spring based, then perhaps they should actually just be drawn as dots, piercing holes in the solid blocks' borders. I'll put up a version which does that next.
Phaser tile + arcade shenanigans
I'm actually designing fidough levels now, since I've got autogroups (& colliders), dough springs, treadmills, 2d models, and other fanciness. I've open sourced all of it, reach out if there's something you want that I haven't already made available or want some help with.
I wired it all together and... it didn't work. I could either have map collisions working or else have intersprite collisions working. Everything I'm doing is sane, so the bug is either in phaser's implementation or else I've misunderstood something (the latter more likely). But it's easy enough to test for phaser bugs, so I did (link to the multimedia tax). The problem is a programmer-understanding one.
What I learned along the way (no I haven't fixed the underlying issues yet)
Tile collision faces are mess(ier than you could possibly imagine)
Say that you have a map composed of tiles like floor, wall, spike, coin (for a variety of reasons that's silly, but whatever), etc, and of mobs and pickups like player, enemy, coin (again!), maybe bullet, you know, that sort of thing.
The graph of interactions is complex. Neither players nor enemies can walk through walls. Players can step on spikes but they hurt; for simplicity, we might say that enemies simply cannot step on spikes. Players can step on coins (and flip them into floor), but we'll say again that enemies treat coins as solid.
Phaser supports this super duper messily. At the end of the day, the internal structures are boolean "does this block movement" and maybe a callback. Which means that if the answer to blocked movement is complex -- you have characters that can swim and fly and so on -- it's going to be a bit uphill.
There's actually an example in the included game. If you line yourself up with the spikes along the bottom edge of the map, you can walk through walls. You see, phaser checked whether it should separate you when you entered the spikes (they're solid... to robots...) and the spikes said "no, let them in, we'll just hurt them". But Phaser removed the internal boundary between the southern edge of the spikes and the wall -- because the wall is solid too! -- and so the player was effectively embedded in the wall the moment they touched the spikes!
The answer is annoying; disable recalculateFaces for anything you know you want to treat as a separate crossing domain. So for instance in this example, I wanted to prevent robots from stepping on spikes and players from crossing over from spikes (ouch) to walking through walls; by doing this I accomplish that. I think there will be micro-jaggedies as a result. Alas. Ai me.
Phaser tilemap callbacks basically require double dispatch
You can associate one callback per tile type -- not per tile type per colliding group, note. So since your one tile layer likely has tiles that different units respond to differently, you have to do double dispatch from the tile callback back into the colliding instance. Due to... reasons... the callbacks are always called (sprite, tile) (even if your collider registered them in the order (layer, spritegroup)), so it seems very likely that the tile callback is always literally (sprite, tile) => sprite[`on_${tile.type}`](tile) (or similar).
What should you do?
Note, I haven't done any of this.
A lot of people suggest you have one colliding layer and n art layers. I've resisted that for a while, but I think I have to give it up. If you had one colliding layer per movement type, you could absolutely have water and walls block walking, floor and walls block swimming, and walls (only) block flying. Building that out without a separate layer per collision type seems hard. You can even derive that on the fly (generate the additional colliders dynamically off of the loaded map's tile types).
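A sketch of deriving those layers on the fly (the 'terrain' layer name, the tile `type` property, and the blocking sets are all assumptions on my part, not anything Phaser ships):

function makeCollisionLayer(map, tileset, movement, blockingTypes) {
    const art = map.getLayer('terrain').data; // the single art layer's Tile[][]
    const layer = map.createBlankLayer(`collide_${movement}`, tileset).setVisible(false);
    art.forEach(row => row.forEach(tile => {
        if (tile && tile.index !== -1 && blockingTypes.has(tile.properties.type)) {
            layer.putTileAt(tile.index, tile.x, tile.y);
        }
    }));
    layer.setCollisionByExclusion([-1]); // everything copied in blocks this movement type
    return layer;
}

// e.g. walkers collide with walls and water; flyers only with walls:
// this.physics.add.collider(walkers, makeCollisionLayer(map, tiles, 'walk', new Set(['wall', 'water'])));
// this.physics.add.collider(flyers, makeCollisionLayer(map, tiles, 'fly', new Set(['wall'])));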
[video]
Shooting Doesn't Help
What started as merely a unit test for the singletongroup library became a little top-down, adventure-style run & gun blaster.
You can check it out https://browndragon.itch.io/shooting-doesnt-help and https://www.newgrounds.com/portal/view/778998 (that's right, I've started to play with newgrounds).
An update on singletongroups
Continuing from Automatic Collision Groups, I've updated and provided an npm-able external package and rethought the actual names and examples; this post supersedes that post.
What is this?
When trying to publish "Automatic Collision Groups", the biggest problem I'd run into was the name. I was trying to augment a Sprite with the behavior of a Group and of a Collider; I tried a few different names (is it a Club, a set of interests and ideas that has a formal existence beyond its current membership? A Polity? A Gang?), but it was always messy.
The existing logic worked, but to pull it into a library, I needed to discuss what the moving parts were, and I hated it.
Enter SingletonGroups
The big change from when I started this project to now was the explicit addedToScene() method new in Phaser 3.50. That fixes a lot of the problems I'd had around initializing sprites, and so it allowed me to stop treating the thing I was writing as a coordination point, and instead just focus on the object/group/collider organizational principle.
Well! The easiest solution for that is to stop focusing on the Sprite and to instead focus on the Group. So that's what I did; the newest version is just a scoped singleton group, scoped at the level of the scene.
What do they do?
The S[ingleton]G[roup].Group is a subclass of Phaser.GameObjects.Group (and SG.PGroup of Phaser.Physics.Arcade.PhysicsGroup). Its static SG.PGroup.group(scene) method does an atomic cache lookup on the scene in a symbol-guarded class-keyed map for the group class itself; if it's not there, it uses the class' one-argument constructor to construct a new group and returns it. This also automatically subscribes to scene shutdown and cleans itself up.
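That lookup is roughly this shape (a sketch; the symbol name and cleanup details are illustrative rather than the published API):

const CACHE = Symbol('singletonGroups');

class SGGroup extends Phaser.GameObjects.Group {
    static group(scene) {
        const cache = scene[CACHE] || (scene[CACHE] = new Map());
        let instance = cache.get(this); // keyed by the (sub)class itself
        if (!instance) {
            instance = new this(scene);  // one-argument constructor
            cache.set(this, instance);
            scene.events.once('shutdown', () => cache.delete(this)); // clean up with the scene
        }
        return instance;
    }
}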
You can of course override SG.PGroup.install(scene) to do additional post-scene-addition method calls (like: tuning physics), or SG.PGroup.constructor(scene) to pass additional parameters (like: config options). I kept the same API for PGroups as I'd already had for identifying collisions -- PGroup has static methods static get colliders() and static collider(a, b), overlaps and overlap, and optional process; on static insert(scene) (which, remember, is called once per scene per group), the necessary colliders are automatically created.
This does nothing to modify phaser game objects. For that, we add the method SG.induct(gameObject, ...sgs), which just adds the gameObject to the instances of each named class SingletonGroup in sgs, and SG.Member(clazz, ...sgs) which provides a safe-for-multiple-call-of-Member subclass of clazz which automatically overrides addedToGroup to add instances of the subclazz to the groups identified in sgs.
Whew!
WHERE ARE MY VIDEOS AND PICTURES
Next post. The unit test I wrote was fun enough that I posted it online.
[video]
Dragonbones updated to 3.52!
So, I mentioned last time that there's a few competing skeletal animation libraries. Wanting to be a good open source citizen, I've nursed dragonbones into working with phaser 3.52.0 (head as of today).
The tests all pass -- uh, mostly; maybe I got some linear algebra wrong somewhere, left as an exercise to the reader. But they pass well enough that everything still broken looks like it must've already been broken, since it's really quite hidden.
It was kind of interesting; along the way I discovered an old feature request for phaser to natively support skew, so, uh, I did it:
[video]
(what’s that? 4 standard `Phaser.GameObjects.Image`s with tweens causing them to, from left to right, `skewX`, `skewY`, `rotate` & `skewX`, `rotate` & `skewY`? Madness!)
I guess I’ve gotten a little offtrack wrt my own project, but open source is its own reward.
In which I narrowly avoid writing my own animation engine; computer graphics as art media
Work on fidough took a break around the holidays -- there were some vacations, some injuries, some physical therapy, a lot of television, etc. Frankly, I had finished mucking around with physics (at least well enough for now), and was intimidated by my next task:
Prototype spriting.
Multimedia tax: [two embedded images]
The goal: Customizable sprites
As a proof of concept, all I want is the ability to put differently shaped dog tags on my dogs. Really, what I want are differently shaped muzzles, differently shaped tails, spots or no spots, etc.
The obvious way to do this is composition: draw six different heads, three different bodies, twelve different tails. Make sure every combination you want to have can blend together and you're good.
If you add animations, obviously the number of assets you'll need to draw goes up; if there's a (face only) 2 frame barking animation, every one of those six heads will need 2 frames of bark (twelve more frames); if there's a body-only 6 frame walking animation, then all three bodies need it (eighteen more frames). Total of 40 frames. That's a huge savings over the naive six heads x three bodies x twelve tails = 216 combinations x ( 2 frames of bark + 6 frames of walk) = 1728 frames.
Heck, if the legs are animated separately from the bodies and the resolution is low enough, I might even be able to shave some more work off of the 40 frames; animate a leg cocking once and play it for all 4 legs in sequence, for instance. And by the way, nothing stops me from providing a body-based bark animation to go with the heads', to make everything look better. It's still cheaper than having to provide the full combinatorial set.
But.
This whole scheme relies on "mak[ing] sure the combinations blend together". What does that mean exactly? For instance, what if the dogsbodies are different lengths? Remember, I have the dachshund and the terrier both; even if the dogs are the same height at the shoulder, the distance between the head and tail varies widely based on body type. And if you imagine the dogs running, you can imagine that the position of head and tail might vary for every frame of the body animation -- a dachshund's wavelike run puts the head and tail at different heights, while the terrier might do a whole-body hop.
Failures
The Phaser library provides containers (which I've never actually found a use case for, yet). Maybe perfect: I say to myself, I'll have a sprite (or even image!) for each of the head, the body, and the tail; I'll put all 3 in a container and manage their relative positions, movement, and animation frames in a unified way at the level of the container.
Attempt #0: it's implicit (, and fixed?)
This is what the liberated pixel cup paperdoll assets do. Every weapon gets an animation for 4 frames of attack, and the locations of the wielder's hands are fixed; every hat and glove and chest piece and pants and character model and hairstyle gets an animation for every frame of every action, all of them being images of the same size (so that their relative positioning is their absolute position, a function of their position within that standardized frame).
I think this would work for many games, but not games where there's any sort of variation in size. In addition to providing dachshund_body.png and terrier_body.png, I need to have something like dachshund_run.wtf and terrier_run.wtf -- the first represents the actual art asset of a dachshund vs terrier (just so long, so furry, so brown; a single image); the second represents where the actual heads and tails and legs go at each frame of the movement. Maybe it's just a json file of offsets and tweens and animation frames, but still.
Attempt #1: animations as point clouds
What we need to do is take a lesson from 3d animation and separate texture, model, bone, and animation. 3d models are meshes painted with a texture; 2d pixel models omit the mesh and just provide the texture. So maybe all we need is a second set of assets, the bones-and-animations, where we provide a spritesheet of animations where each pixel is the location of a specific bone (you can imagine that color 0x000001 is the first bone, 0x000002 the second bone, 0x000004 the third bone, and so on; 48 bones ougghta be enough for anyone). Then just provide a skin guide: a json file that says bone 0x1 is head.png and bone 0x2 is arm.png. Oh, but one of the strengths of this was that left arm could just be right arm but flipped -- so we need somehow to record scaling (maybe just by -1, to allow axis flips?) & rotation (maybe just by multiples of 90 degrees?) in addition to translations.
Oh, and actual image transitions: in frame six of the idle animation, the character's eyes should blink, or perhaps the colors should shift on a metallic dogtag.
That's just so much data to pack in per-frame; maybe the idea of using spritesheets alone for animations is just not going to work; all of the animation data (transform, scale, rotation, texture) needs to live in some structured format. And in fact, that's what phaser animations already do, since turning a spritesheet into an animation requires additional metadata like timing.
Okay, I say to myself, time to write this all up; we'll just craft all the bones and animations and skins by hand, as will everyone who tries to use our open source library.
Haha no; Rigging is the answer
I'd heard a lot about rigging & animation for 3d models, but I was less familiar with state of the art for 2d. It's a little wild west, frankly.
Spine. $70 for pixels-and-bones, $300 for mesh deformation. The runtime is well supported in phaser & active, and multiple people associated with the project speak well of it.
Dragonbones. Free, though you do have to navigate the program in Chinese before you can select English; there's little anglophone community to speak of. The runtime seems a bit abandonware; I'm following MadDogMayCry0's fork since they donated some fixes on top of a slower-moving head.
Also ran: Creature. $50 for pixels-and-bones, +$100 to get more & better special effects. It looks gorgeous, however its phaser runtime is two years old (the docs are three!), still references pixi (the phaser runtime no longer does), etc. But really does seem to provide a lot of "AI" to make your assets have verisimilitude more easily!
Also ran: SpriterPro -- $0 for pixels-and-bones, $60 for... well, different features. Not clear to me whether it even supports mesh deformation; the phaser runtime is 3 years stale. I assume it's dead?
So obviously I should just try Spine, but the above multimedia was produced in Dragonbones. It's fine.
HOWEVER THE RUNTIME IS BUSTED
TL;DR: Phaser 3.50 changed the world, and now they're broken.
It's not entirely their fault; phaser 3.50 changed the world. But it looks like the runtime's last phaser update targeted 3.12, quite a while ago.
That's no good. So really, the only supported animation system is Spine. Guess I'll shell out $70 and see if it works.
Lessons learned from the untitled bouncing game
Following from the previous post. What'd I make? Why? What'd I learn?
Everything is terrible and broken.
I have badly misunderstood arcade physics
I think this is foundational stuff, but somehow I never got it, so I'm catching up now. I've been very confused for a very long time now about how you would model the really basic thing of "a guy walking on a treadmill".
So here are things that I knew:
Arcade physics is picky about the direction in which you're moving. Maybe phaser 3.50b13 fixes it -- there are related changes. (edit update: it is not yet fixed. This is bad advice.)
Arcade physics body properties are Newton's First Law compliant. So if you set the velocity, it sticks around until you unset it.
Yet: if you want to move a character, you want to modify its velocity to include some walk component in the direction they intend.
Don't modify arcade body position yourself -- the first thing each physics update does is sync those back from the game object, so they get wiped out.
The epicycles
The initial way I was solving this was through a class that could receive per-frame impulses, which it would subtract back out immediately after each physics run. This was crazy because of drag, and I needed drag because of springs (otherwise they oscillate forever). But if you're walking to the east at 100 units with sufficient drag to lower that by x, the walking system needs to know the value of x in order to subtract back out 100-x and not the raw 100!
It got bad. So I asked for help on the discord.
KISS -- but no simpler
The solution in Phaser 3 is a bit context dependent. But it is doable!
How to move a character by keyboard: If you want your character to walk east, you can directly modify the game object x and y. This is because in update (when you're reading the keys), you're not in the middle of a physics run, and so if you modify the body it'll get wiped out.
How to move a character with a treadmill: Surprisingly similar. In your collider, directly modify the body's positional parameters (since this is now during a physics run).
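Side by side, those two cases look something like this (this.cursors, this.player, this.walkSpeed, and the treadmill's belt fields are illustrative names I'm assuming, not anything built in):

class DemoScene extends Phaser.Scene {
    update(time, delta) {
        // Outside the physics step: move the game object; the body re-syncs
        // from it at the start of the next physics step.
        if (this.cursors.right.isDown) this.player.x += this.walkSpeed * delta / 1000;
    }
    onTreadmill(sprite, treadmill) {
        // Inside the physics step (a collider callback): move the body directly.
        sprite.body.x += treadmill.beltX;
        sprite.body.y += treadmill.beltY;
        sprite.body.updateCenter();
    }
}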
This doesn't really interact with the physics system! But surprisingly, it seems to work well enough -- the forces get re-introduced when the physics system attempts to rectify things. You now must use drag (etc), since you can't cancel out movement just by writing in movement of your own. But that's fine, at least as far as I'm concerned.
You can walk on treadmills and everything's fine!
Except, of course, that you can walk through walls.
When your arcade-physics velocity is 0 and you teleport from space to space, the arcade physics engine gets confused; it assumes that two velocity-zero colliding objects are mutually embedded and gives up. This is fair if you reset the body into place, but if you don't call reset, that feels like a surprising result to me. I'm going to see if a small upstream patch can fix this.
The Arcade Physics logic already separates the notion of "velocity" (amount of dx to give each body on each future physics step) from "historical dx" (difference between last position and current position). Since we're taking stutter-step teleports here, that should be all that we need -- since those all collapse into the historical dx, just, mathematically.
However, there's still something going wrong; it continues to report that my dx is 0 for my teleporting friends, even though I (thought I had) modified all of that. My guess is that somewhere I'm missing a sync-from-game-object which is clearing dx; the existing logic presumed that dx tracked per-step changes, but threw everything out at the start of each frame. Since character movement is input at the top of each frame, that seems like a logical place for this to break.
So the fight goes on!
I am still not sure how to google this stuff
You can get degrees in game design; I'm sure there's a lot of resources out there on "how to model people walking around". But quick google searches (and then extensive google searches) didn't find anything, because the modern web is drowning in noise.
The fight goes on.
[video]
Untitled bouncing game
A followup will analyze What I Learned, but for now: please enjoy this ridiculous bouncing game. The ball follows the mouse. The rectangles eat your score. Bouncing stars into rectangles destroys them (they're worth more the more there are on the screen, and by the way, they spawn faster as you go).
The center of the screen is a rotating treadmill (half the size of the screen in each dimension); every couple of seconds it changes which way it's pointing. It's just subtle enough to be completely annoying.
Itch.io link
A thing I really should have realized about webpack
TL;DR: Your webpack config can be an array; if you were trying to do what I did, it needs to be an array.
For the longest time I was trying to set up a project so that I could name a file foo.main.js and automatically generate an entire output product from it, so that I could "unit test" my game along the way. See here for my previous attempt and a more full description.
The advice I had been following was to set up multiple outputs, one output per input. That worked fine for pure js, since it's producing one foo.bundle.js file per. But it fell apart as soon as I tried to upload assets to itch.io, because I had to manually copy the assets around and the public path in dev (localhost:8080/) was different from the itch cdn (randomhost.com/html/randomversion/), and and and.
There's a better way.
The fix is ungoogleable
I don't think there's a good name for the pattern I wanted, since it's not producing a single output 'bundle' js file (that's how I found the first solution described). It does have multiple entry points, but afaict not in the way usually described.
I want one complete, hermetic output per input entry point. This is an off-label use for webpack, since usually they're building an entire site and reusing cached assets and javascript is the whole point. I'm building unit tests, and want to ensure each unit test can be uploaded to itch, where they'll be isolated from each other.
AFAICT there's no name for this.
GIVE ME THE FIX YOU GODDAMNED RECIPE BLOG
In your webpack/base.cjs file, you can simply:
const glob = require('glob');
const path = require('path');
const webpack = require('webpack');
const {CleanWebpackPlugin} = require('clean-webpack-plugin');

const entryArray = glob.sync('src/**/*.main.js');

module.exports = entryArray.map(entry => {
    // Chop off the leading src/ and the trailing .main.js.
    let key = entry.replace('.main.js', '').replace('src/', '');
    // And navigate correctly; this runs from root, but __dirname points here;
    // we have to navigate back to src to get glob and webpack to agree.
    let value = path.resolve(__dirname, '..', entry);
    return {
        mode: "development",
        name: key,
        entry: {[key]: value},
        output: {
            path: path.resolve(__dirname, '../dist', key),
            filename: 'main.bundle.js'
        },
        module: {
            rules: [
                {
                    test: /\.c?jsx?$/,
                    exclude: /node_modules/,
                    use: "babel-loader",
                },
                {
                    test: [/\.vert$/, /\.frag$/],
                    use: "raw-loader"
                },
                {
                    test: /\.(gif|png|jpe?g|svg|xml)$/i,
                    use: "file-loader"
                }
            ]
        },
        plugins: [
            new CleanWebpackPlugin({
                root: path.resolve(__dirname, "../")
            }),
            // For phaser:
            new webpack.DefinePlugin({
                CANVAS_RENDERER: JSON.stringify(true),
                WEBGL_RENDERER: JSON.stringify(true)
            }),
        ]
    };
});
and the corresponding webpack/prod.cjs:
const merge = require('webpack-merge');
const path = require('path');
const {CleanWebpackPlugin} = require('clean-webpack-plugin');
const HtmlWebpackPlugin = require('html-webpack-plugin');
const TerserPlugin = require('terser-webpack-plugin');
const ZipPlugin = require('zip-webpack-plugin');

const base = require('./base.cjs');

module.exports = base.map(v => ({
    ...v,
    mode: "production",
    output: {
        ...v.output,
        filename: 'main.min.js'
    },
    devtool: false,
    optimization: {
        ...v.optimization,
        minimizer: [
            new TerserPlugin({
                terserOptions: {
                    output: {
                        comments: false
                    }
                }
            })
        ]
    },
    performance: {
        ...v.performance,
        maxEntrypointSize: 900000,
        maxAssetSize: 900000
    },
    plugins: [
        new CleanWebpackPlugin({
            root: path.resolve(__dirname, "../")
        }),
        // Since we already peeled off the clean webpack plugin...
        ...v.plugins.slice(1),
        new HtmlWebpackPlugin(),
        new ZipPlugin({})
    ],
}));