I have a number of thoughts about how to model characters (individual sapients) in Frasgird; I’ll try to capture some of them here.
First, as much as possible, I would like the character ‘model’ (attributes, status, available choices, other info) to be consistent across all races of sapients and across the player, all major non-player characters (NPCs), and frankly all minor NPCs as well (though mobs and populations can be modeled in aggregate). This makes three modes easier: one in which the player can switch characters (for some period) mid-game; one in which the game universe runs on without you; and one in which two or more humans play in the same game universe.
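In code, that separation might look something like the rough Python sketch below: one shared Character model for every sapient, with ‘who is driving’ factored out into a controller. Every name here is an illustrative placeholder, not settled design.

```python
# Rough sketch: one shared Character model, with control factored out.
# All names here are illustrative placeholders, not settled design.

class Character:
    """Shared model for every sapient: attributes, status, choices."""
    def __init__(self, name: str):
        self.name = name

class Controller:
    """Decides a character's next action; the model never cares which
    subclass is attached."""
    def choose_action(self, character: Character, world) -> str:
        raise NotImplementedError

class PlayerController(Controller):
    def choose_action(self, character, world):
        return "whatever the player's input says"

class AIController(Controller):
    def choose_action(self, character, world):
        return "whatever the NPC logic decides"

# Switching characters mid-game, letting the universe run on without
# you, or adding a second human all reduce to rebinding controllers:
bindings = {
    Character("Aldus"): PlayerController(),
    Character("Mira"): AIController(),
}
```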
Second, Sundog used a pretty classic set of user-selectable attributes (strength, dexterity, intelligence, charisma, luck), as well as some ‘current status’ attributes (vigor, health, rest, nourish[ment]). However, Frasgird isn’t intended to be a classic RPG, but runs a bit higher on the abstraction level. I’m inclined to use some variant of my TEPES model: talents; experience; professionalism (maturity, emotional stability, etc.); education; and skills. Thus, if my character has a given talent, that gives me certain baseline skills in related fields; choosing relevant education and/or experience improves those skills much faster. If I seek to develop a skill while lacking related talent(s), it takes a greater investment in education/experience, and that skill is capped at a lower level. Or something like that.
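Sketched in Python (with every number pulled out of the air; nothing here is tuned or final), the talent/education/experience interaction might work like this:

```python
from dataclasses import dataclass, field

# Placeholder numbers throughout -- none of these are settled values.
TALENT_BASELINE = 20      # baseline skill granted by a related talent
TALENT_CAP = 100          # skill ceiling with a related talent
NO_TALENT_CAP = 60        # lower ceiling without one
TALENT_MULTIPLIER = 2.0   # education/experience count double with talent

@dataclass
class Character:
    talents: set = field(default_factory=set)
    skills: dict = field(default_factory=dict)

    def train(self, skill: str, related_talent: str, points: float) -> None:
        """Apply education/experience points toward a skill."""
        has_talent = related_talent in self.talents
        current = self.skills.get(
            skill, TALENT_BASELINE if has_talent else 0.0)
        gain = points * (TALENT_MULTIPLIER if has_talent else 1.0)
        cap = TALENT_CAP if has_talent else NO_TALENT_CAP
        self.skills[skill] = min(current + gain, cap)

# A pilot-talented character starts higher and trains faster:
pilot = Character(talents={"piloting"})
pilot.train("shuttle_ops", related_talent="piloting", points=10)
```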
Third, as I’ve noted in the Sundog chapters I’ve written to date, wearable tech (clothing, embedded systems) can grant abilities and protections that no level of TEPES can. However, said tech is vulnerable to quantum jitters, and the more advanced the tech, the more vulnerable and sensitive it is.
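If I had to sketch that tradeoff, it might start as simple as this (the linear curve and the scale factor are placeholders, not anything from the chapters):

```python
def jitter_failure_chance(tech_level: int, jitter_intensity: float) -> float:
    """Chance that a piece of wearable tech glitches during a quantum
    jitter: the more advanced the tech, the more vulnerable it is.
    The linear relation and the /10 scale are placeholders."""
    return min(1.0, jitter_intensity * tech_level / 10.0)
```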
Fourth, we need to find a means of modeling interactions between sapients without going too far down the rabbit hole. Full conversation veers (I think) too close to an RPG, and besides, it’s very hard to do. (Wayne and I put a lot of work into the conversation model in Sundog; I was disappointed at how bland and repetitive it turned out.) Decision choices (such as in the Galactic Civilizations games) are likewise hard-coded and become repetitive over the course of a long game. I’d really like to come up with a general interaction model that can be used across multiple settings.
One (paper) game design that I looked at 30+ years ago took a then-novel tack that I’ve seen used a few times since, though to a lesser degree than in this design. The game was for (I believe) two players, and when they interacted, each player secretly selected one of twelve (12) approaches: compassion; anger; stubbornness; excuses; logic; taunts; disdain; apologies; threats; acquiescence; boasts; and curses. They then simultaneously revealed their approaches, and a table lookup comparing the two determined who came out ahead and by how much: a negative value, zero, or a positive value. (If both players chose the same approach, the table just has an ‘X’, which I suspect means ‘try again’, just as in Paper Rock Scissors.) I don’t have more details than the list of approaches (and the lookup values), but I believe the idea was that the starting ‘interaction’ value was zero (0); the two players went through some number of interactions, with the lookup value added to the interaction value each time; and at some point the interaction ceased, with the result (the ‘winner’) determined by the sign and magnitude of the final interaction value.
The game design notes I have show complete symmetry in the results (except for sign). Thus, if player A chooses Compassion and player B chooses Excuses, the result is +2, while if player B chooses Compassion and player A chooses Excuses, the result is -2; a positive total favors player A, a negative total favors player B, and the magnitude says by how much. The game design doesn’t appear to allow for chain effects; that is, if I start out with Logic and then switch over to Anger, does that undermine my prior results?
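Here’s roughly how that mechanism might be coded (Python; only the Compassion/Excuses value is actually from the design notes, and the rest of the table would have to be transcribed):

```python
APPROACHES = [
    "compassion", "anger", "stubbornness", "excuses", "logic", "taunts",
    "disdain", "apologies", "threats", "acquiescence", "boasts", "curses",
]

# Store one direction of each pairing; symmetry supplies the other.
# Only the Compassion/Excuses entry below is from the notes.
LOOKUP = {("compassion", "excuses"): 2}

def score(a: str, b: str):
    """Result from A's perspective; None is the table's 'X' (try again)."""
    if a == b:
        return None
    if (a, b) in LOOKUP:
        return LOOKUP[(a, b)]
    return -LOOKUP.get((b, a), 0)   # symmetric except for sign

def negotiate(rounds: int, pick_a, pick_b) -> int:
    """pick_a/pick_b are callables returning an approach each round.
    The sign and magnitude of the final total decide the winner
    (positive favors A, negative favors B)."""
    total = 0
    for _ in range(rounds):
        result = None
        while result is None:        # both picked the same: choose again
            result = score(pick_a(), pick_b())
        total += result
    return total
```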
What I really like about this approach is that it provides a simple, consistent, and (I suspect) very effective means of modeling alien behavior: break the symmetry of the results. For example, if a human (A) chooses Compassion and an individual from a particular alien race (B) chooses Excuses, the result might be +1, while if the alien (B) chooses Compassion and the human (A) chooses Excuses, the result might be -5.
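In code, breaking the symmetry just means each species pairing gets its own table, with no ‘negate and swap’ rule. The table name is a placeholder for some particular species; the two entries are the ones from my example:

```python
# (human approach, alien approach) -> result from the human's perspective.
# No symmetry is assumed; the two entries mirror the example above.
HUMAN_VS_ALIEN_X = {
    ("compassion", "excuses"): +1,
    ("excuses", "compassion"): -5,
}

def score_vs_alien(table: dict, human: str, alien: str) -> int:
    """Asymmetric lookup; untranscribed pairings default to 0."""
    return table.get((human, alien), 0)
```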
Furthermore, the less the player (or NPC) knows about this particular alien species (via education or experience, or possibly an empathy talent), the more hidden the intermediate results are, until negotiations end and the final result is revealed.
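One way to sketch that hiding (thresholds and noise width entirely made up):

```python
import random

def perceived_total(true_total: int, knowledge: float):
    """What the player sees mid-negotiation. knowledge runs 0.0-1.0,
    built from education, experience, and/or an empathy talent; below
    a floor, nothing shows at all. All constants are placeholders."""
    if knowledge < 0.25:
        return None                        # completely hidden
    noise = round((1.0 - knowledge) * 6)   # less knowledge, more blur
    return true_total + random.randint(-noise, noise)
```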
That’s it for now; more to come.