Archive for March, 2013

Random vs Luck: A Goal inspired by nethack

(And the template I was using didn’t have bullets. Amazing. I think this one might be a better choice.)

Life is going better. It’s had some ups and downs, but I’m back to coding, and that has me back to thinking about game design theory stuff.

One topic that’s been a discussion point more than a few times with a bunch of my friends is randomization as part of the gaming experience.

I, like many others, cannot begin to count the sheer number of times I have been screwed by the random number gods. It almost goes without saying that any time randomization enters the picture in a game, it becomes a luck-based outcome. If it’s an RPG, simulation, or strategy game of some sort, you encounter the worst monster possible or a 99% chance to hit misses. If it’s a platformer or shooter, a dispenser with a random timer between shots will always choose the perfect moment to hit you as you approach the edge of the screen, as if it were sniping.

In this sense, randomness is effectively a synonym for luck. But I don’t think it has to be this way; I think it just ends up this way out of sloppiness and carelessness. To demonstrate what I mean, I almost have to switch gears for a lengthy lead-in:

One of the strongest traits a game can possess is uniqueness. Each game should be a new experience of some form, or there really wasn’t much point to it. Usually you only get that experience once; everything is the same the next time you start, making a second playthrough a mere test of memory.

Making the same game give that new and unique feel more than once is an ambitious task, generally accomplished by multiple difficulty settings, different playable characters, branching stories where each branch has a different ending, self-imposed challenges, competitive multiplayer, or randomized content.

While most games implement one or two of those, randomized content is relatively uncommon – it’s easier to build more than one scenario than it is to build rules that generate a multitude of scenarios by throwing dice. One kind of game that embraces the challenge of creating random scenarios is a very specific subgenre of RPG: the roguelike. Many games randomize to some extent – RPGs are fundamentally based on the dice-rolling mechanisms of tabletop roleplaying games – but roguelikes take it a step further than most.

For those unfamiliar with the genre, it is characterized by being like the 1980 game Rogue, a text-based dungeon crawler heavily influenced by Dungeons & Dragons and notorious for a learning curve that few games can match. The game begins with you being dumped unceremoniously at the entrance to a dungeon, either wearing crude, borderline useless gear or stark naked. The dungeon is randomly generated, and your goal is generally to get to the bottom/top. Once there, you might have to retrieve an item for someone or kill a god, and your journey is marked by many deaths along the way. (While not a “pure roguelike”, the Diablo franchise is a modern example of the roguelike fused with arcade beat-’em-up action.)

And while your journey features many deaths, there’s a bit of a sting to it: roguelikes generally feature permadeath, meaning once your character dies, that’s it. Reroll and start from scratch. Because the game is random, you are forced to memorize not a dungeon map but the bestiary, to develop strategies, and to learn techniques for survival.

The roguelike became popular with a niche audience because it rewarded skill. If you learned and played carefully, you were rewarded with further progress. If you were impatient and careless, you were probably going to destroy your keyboard in frustration from having to start all over.

My introduction to the genre came years ago with Castle of the Winds, on a then relatively new 386, and it took a while for me to figure out that Castle of the Winds was not the same kind of RPG as Dragon Warrior and Final Fantasy. (It was the traps and the D&D-style rules that always got my young and inexperienced self.)

Tamer than most roguelikes, Castle of the Winds had rich graphics for the time, was relatively simple by roguelike standards, and did not permanently erase your save when you died, allowing you to roll back from a mistake. Give it a shot if you’re curious about the genre – the author switched it from shareware to freeware some years after it had become irrelevant, and it should be easy to find a full copy for download. (His personal webpage, where it was available for download, disappeared in a server crash late last year.) It doesn’t work under x64 versions of Windows, but should run fine in compatibility mode under any 32-bit version of Windows. (WinXP in a VM is great for this.)

At the opposite end of the spectrum are monstrously intimidating games like Nethack. Nethack is the epitome of the learning cliff. Hack was an early follow-up to Rogue; its development was later picked up and the game renamed “Nethack” long before the internet even became popular. Nethack is still maintained and widely ported to this day as an open source software project. It’s one of those games you almost have to play once, just to understand what the fuss is about. There are also graphical mods and remakes available, such as Vulture, or “tileset” versions that keep the same text interface with 8-bit tiles layered on top.

To get a grip on what I mean by “learning cliff”, some examples:

  • You can write magic words on the floor with anything from your finger in the dust to a pen with ink or blood to a hammer and chisel. These magic words have to be learned by finding the writings of previous adventurers on the floor and guessing what the missing letters are, as letters fade with time. The words are located semi-randomly (a random chance to find them in certain areas) and you almost have to learn the entire dictionary of magic to beat the game.
    (Edit: Apparently, I might be mixing up games.  I went looking at spoilers last night and either I’m not looking at the right spoiler, or only one engraved word has a magic effect.)
  • Status ailments can be cured in a multitude of ways, some ingenious, some obvious. The simplest is to pray to your god.
  • I might be remembering another roguelike and not nethack, but I believe it’s nethack where picking up a basilisk corpse with your bare hands turns you to stone. At the same time, you can put on gloves, pick up the corpse, and use it as a flail. It petrifies your gloves, but you now instantly petrify any enemy you hit! Enjoy your collection of dragon statues you can’t loot!
  • A mid-game tactic is to leverage different kinds of magic against your junk items to transform them into different items altogether.

And that’s just some highlights. The other side of the game is actually kind of odd: Nethack is fun to lose. There are so many ways to die in Nethack that you will rarely die the same way twice. Some are comical, some are not. My favorite tale of woe comes from the time I triggered a cursed treasure chest in a shop, was blinded, and tripped over a boobytrapped chest, which exploded, injuring a level 25 shopkeeper, who, upon being damaged, promptly crushed me in one hit. (There was a little more to the domino effect, but I was laughing so hard that only those highlights stuck in memory.)

My experience with Nethack is why I believe procedurally generated (read: random) content extends a game’s longevity. You will find posts all over the internet from people who have been playing nethack for years and only just beat it. (I saw one blog post where a guy said, “20 years later, I finally beat nethack.”)

However, roguelikes are a curious mix of random and luck. While completely random, they aren’t as luck-based as you would think – usually the right knowledge, patience, and methodical play will minimize the luck component. While you explore and learn, your early runs are mostly luck-based, but as you die and learn ways to defeat and overcome the random number god, the randomization itself keeps the game feeling new, exposing you to a mix of old and new challenges even when you’re on your umpteenth run.

That form of random content inspires me. While I don’t believe most roguelike mechanisms are fit for mainstream gaming as-is (I fully recognize most people don’t have the patience for the more hardcore roguelikes), I love the goal of a game creating its own balanced content to make replays fun. I also like the idea of people “learning how to survive” and being exposed to new and different challenges instead of “memorizing the game that never changes.”

Another place where modern gaming has worked to mitigate the luck part of randomness is leveraging statistics to normalize your dice rolls. Under a normalization scheme, a time unit is chosen and used to guarantee the quoted chances. If you have a 20% chance to crit and attack 30 times a minute, exactly 6 attacks per minute will crit, as an enforced rate. This can be accomplished a number of ways, and it takes away the feeling of being cheated by random chance – the enforcement of statistics makes you do exactly the damage you expected to.
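For what it’s worth, here’s a minimal sketch of one common way to do that enforcement: a shuffled “bag” of outcomes that guarantees the quoted rate every cycle. The numbers match the example above, but the class and names are just illustrative placeholders, not any particular game’s implementation.

```python
# Toy "enforced" crit chance: a shuffled bag of outcomes per cycle.
# With crit_chance=0.20 and attacks_per_cycle=30, exactly 6 of every
# 30 attacks crit -- only the *order* is left to chance.
import random

class CritBag:
    def __init__(self, crit_chance: float, attacks_per_cycle: int):
        self.crits = round(crit_chance * attacks_per_cycle)
        self.size = attacks_per_cycle
        self.bag = []

    def _refill(self) -> None:
        # Put exactly the quoted number of crits in the bag, then shuffle.
        self.bag = [True] * self.crits + [False] * (self.size - self.crits)
        random.shuffle(self.bag)

    def next_attack_crits(self) -> bool:
        if not self.bag:
            self._refill()
        return self.bag.pop()

bag = CritBag(crit_chance=0.20, attacks_per_cycle=30)
print(sum(bag.next_attack_crits() for _ in range(30)))  # always prints 6
```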

Statistical normalization is not a new concept. Some tabletop roleplayers, convinced the dice are really out to get them and that they always roll too low to accomplish anything, will replace their dice with a deck of cards or a bowl of numbers. The idea is simple: a deck of 52 cards is effectively a 13-sided die with an equal number of chances for each result – there are only 4 chances to draw an ace, exactly equal to the number of chances to draw a king. You will get the exact same statistical average as with dice, but where dice are memoryless, the deck has a predictable pattern – the same pattern that blackjack players use. You can count cards and make realistic risk assessments. No more aces in the stack? You can take more risks, and so on.

I can’t help but think there must be some way to unify randomization and skill to create a randomly generated game that doesn’t really require luck. Perhaps that’s a naive goal, but it’s something I intend to work on and flesh out as I work through projects. Again, I think the “new” and “unique” feeling that each game has on the first playthrough is important. I’m down for an achievement run as much as the next guy, but if a game can keep showing me something new, it’s good.

Sadly, everything I’m working on right now is static. I’m taking baby steps while I ponder carefully over long stretches of time, and I hope that time will bring me closer to some ideas for achieving that goal.

If nothing else, I need more time to learn how to use randomization in general. After all, random is something that has to be done well – if your random pool is too small and gives predictable results… it really isn’t random anymore, is it?
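(For the curious, here’s a tiny, contrived illustration of that last point: a linear congruential generator with only 16 possible states produces a sequence that repeats after 16 draws, so anyone paying attention can predict every “random” result. The constants are arbitrary toy values, not something you’d ever ship.)

```python
# A deliberately terrible RNG: a linear congruential generator modulo 16.
# Its entire "pool" is 16 values, so the sequence cycles after 16 draws.
def tiny_lcg(seed: int):
    state = seed
    while True:
        state = (5 * state + 3) % 16
        yield state

gen = tiny_lcg(seed=7)
values = [next(gen) for _ in range(32)]
print(values[:16])
print(values[16:])  # identical to the first 16 -- completely predictable
```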


Planning a Disaster Recovery

Well, February was a lost cause for me, between the flu and three emergencies that came in requiring me and not the old man. I fell behind on everything and ended up in emergency “I put out fires” mode until this week. As bad as it was for me, it was almost worse for one of my customers.

Since I haven’t updated routinely in a while, I’m glad to get back to weekend updates. That aside, I feel compelled to share some of the knowledge from the most labor-intensive project; some of the planning and the lessons learned may prove useful for people who have data to preserve, although I think most of it is common sense and I’m probably preaching to the choir. Forgive me if I oversimplify or fail to simplify – both are possible given the technical nature of the topic:

On Feb 4th, one of my customers had his office broken into and his laptop – his primary work computer – stolen.

Since this customer is in the financial industry, he was required to have a disaster recovery plan for reestablishing operations within 24~48 hours of a disaster. (I’m not sure which set of rules he falls under, or I’d reference them.)  We updated his recovery plan about 4 years ago, and about eight months ago we helped him obtain a new office computer as the old one was dying a slow death.

Disaster recovery plans are interesting things. Fundamentally, they boil down to coming up with a good answer to the question “How do I not lose anything?”, and on the computer side of things, the answer is redundancy.

In the case of my customer, I created a minimum of four levels of redundancy in the form of a cloud backup service (insert Carbonite sales pitch here) and a local network backup to another computer in his office. This is a fairly common and generic plan, and it probably falls short of what the industry considers “best practices”, but I’m sharing it because it’s easy to set up and relatively inexpensive. If you have a desktop and a laptop and are willing to invest in a cloud backup program or purchase extra space on something like SkyDrive/Dropbox/etc., a little bit of time will let you implement the same or a similar plan to protect your own info.

The logic goes something like this:

  1. In the event his system failed, he could swap to the backup system while purchasing replacements.
     Technical note – The backup in the implementation we used is provided by a second Windows computer with a shared folder, using Windows Sync Center to synchronize the contents between the two computers over the network. This process is fully automatic with the exception of “collisions” – situations where both the local copy and the remote copy are changed, or one is deleted. These have to be resolved manually through an interface that pops up down by the system tray/notification area.
     There are other third-party programs that provide this service as well, a few of which are able to use Volume Shadow Copy to synchronize locked files like Outlook PSTs/OSTs. These programs are generally not free, although some offer crippleware/trialware versions. (A toy sketch of the same one-way mirroring idea follows this list.)
  2. If the backup system fails, he still has cloud backups to rely on while the backup is replaced.
  3. If the cloud service is shut down, experiences an outage, or freezes his account, he still has local backups. In this case, I happen to know that Carbonite keeps redundant copies of information in more than one data center, meaning the technical side is unlikely to experience outages.
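To make the mirroring idea a little more concrete, here’s a hedged toy sketch of one-way folder mirroring with a crude collision check. It is not Windows Sync Center or any particular product – just an illustration in Python using the standard library, with placeholder paths you would swap for your own. A real sync tool handles deletions, two-way changes, and locked files far better than this.

```python
# Toy one-way folder mirror in the spirit of a sync partnership:
# copy newer files from SOURCE to MIRROR, and flag anything that looks
# like a collision (the mirror copy is newer than the source copy).
import shutil
from pathlib import Path

SOURCE = Path(r"C:\Users\Me\Documents")       # placeholder: the "live" folder
MIRROR = Path(r"\\BackupPC\Share\Documents")  # placeholder: shared folder on the backup PC

def mirror_newer_files(source: Path, mirror: Path) -> None:
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        dst = mirror / src.relative_to(source)
        if dst.exists() and dst.stat().st_mtime > src.stat().st_mtime + 1:
            # The mirror copy is newer: both sides changed, a human has to decide.
            print(f"COLLISION, skipping: {src}")
        elif not dst.exists() or dst.stat().st_mtime < src.stat().st_mtime - 1:
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)  # copy2 preserves timestamps
            print(f"copied: {src} -> {dst}")

if __name__ == "__main__":
    mirror_newer_files(SOURCE, MIRROR)
```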

This specific incident fell under scenario 1. When the call came in, before we realized how much was required, my dad went out to the customer’s office to handle it himself. He discovered that the thieves had left the backup computer alone, and quickly shifted it over, reestablishing operational minimums immediately, as virtually all the information was already there. I was pulled in to get the CRM software reinstalled and to recover some minor stuff that couldn’t be marked for synchronization with Windows Sync. (Databases that employ file locking are a pain; more on that later.)

Once we had basic function restored, the curveball that caused the majority of the work came: he wanted to take the opportunity to update all of his computer use habits and move forward on cloud adoption, as his industry’s periodicals are emphasizing taking advantage of the cloud. We spent the next four weeks improving the recovery plan, purchasing replacement equipment and software, and fixing the hiccups that inevitably come from upgrading all of your 5~10 year old software at once.

Some useful observations from this process, generalized:

Consider what kind of backup you need.

While automated cloud services are great for most things (for example, they met all of my customer’s needs), occasionally you will need a backup that lets you roll back to an earlier point. Most cloud backup services will not offer incremental rollback, and if they do, it will rarely go back farther than 30 days – they’re doing well just to keep ahead of their customers’ current data requirements. For those files where revisions matter, revision control software like Git can be a lifesaver. (Git is not the only one of its kind, just the most popular.)

Revision control software is typically used by programmers to keep track of their work, so that if they break something they can go back and see what it looked like before they broke it. While Git isn’t intended for backing up personal data, it works remarkably well in that role. As Git is built for programmers, it might be a little hard to set up your own private Git server if you aren’t one, but there are some nice front-end tools for it and some great guides on using it. A possible remote backup strategy might be using a cloud service to back up your Git repository, or subscribing to a Git hosting service that offers private repositories. That privacy note is important: most of the Git hosting options you can sign up for are not private by default. Verify you can keep private what needs to be private before you start using something like GitHub.
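To give a feel for what that looks like in practice, here’s a hedged sketch of automating “commit everything and push it to a private remote” on a schedule. It assumes Git is installed, the folder has already been set up with git init, and a private remote named origin with a master branch exists – all placeholders for your own setup, not a prescription. Scheduling it is left to Task Scheduler or cron.

```python
# Snapshot a folder's current state into Git and push it to a private remote.
# BACKUP_DIR, the remote name, and the branch name are all placeholders.
import subprocess
from datetime import datetime

BACKUP_DIR = r"C:\Users\Me\Documents"  # placeholder: folder already under git control

def git(*args: str, check: bool = True) -> int:
    return subprocess.run(["git", "-C", BACKUP_DIR, *args], check=check).returncode

def snapshot() -> None:
    git("add", "--all")
    # If nothing changed, `git commit` exits non-zero; in that case we just skip the push.
    committed = git("commit", "-m", f"backup {datetime.now():%Y-%m-%d %H:%M}", check=False)
    if committed == 0:
        git("push", "origin", "master")  # assumes the remote branch is named "master"

if __name__ == "__main__":
    snapshot()
```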

When in doubt, you can always fall back to the tried and true practice of creating your own local incremental backup.   Most classical “backup to CD/DVD/tape” software will feature an “incremental backup” option.  While not as convenient, it does work and will give you a fallback option, albeit with greater amounts of nannywork.

Beware of any database-driven application that you are trying to back up with automated cloud backup services that simply back up files.

Anything that uses MSSQL or its lighter-weight variants seems particularly vulnerable to this, but MySQL, Btrieve, and other databases can have similar issues: there’s a resource conflict that occurs when the automated backup service and the database service both access the same file at the same time. Certain programs are able to use Volume Shadow Copy to touch “in use” data without triggering a conflict, but it still doesn’t work very well. You can usually identify these programs because they either say “installing SQL” as part of their initial setup, or will just stop working when you add their data to your cloud backup program’s list of things to back up. They also might throw a fit if you encrypt them with BitLocker or the like.

Database-driven applications generally have their own backup process. Most end-user programs feature a menu option for creating a manual backup. For actual server-based database tools, there are typically instructions somewhere on the internet for using a database export option to create a backup, and often ways to automate it.
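As one hedged example of that kind of automation (using MySQL’s mysqldump, purely because it’s a common case – your database will have its own export tool), a nightly dump might look something like this. The database name, credentials, and output folder are placeholders, and putting a password on the command line is fine for a sketch but not for production.

```python
# Dump a MySQL database to a dated .sql file so the cloud backup can pick up
# the export instead of fighting over the live, locked database files.
import subprocess
from datetime import date
from pathlib import Path

DB_NAME = "crm_data"               # placeholder database name
OUT_DIR = Path(r"D:\Backups\sql")  # placeholder output folder

def dump_database() -> Path:
    OUT_DIR.mkdir(parents=True, exist_ok=True)
    out_file = OUT_DIR / f"{DB_NAME}-{date.today():%Y%m%d}.sql"
    with open(out_file, "w") as fh:
        subprocess.run(
            ["mysqldump", "--single-transaction",
             "--user=backup_user", "--password=change_me", DB_NAME],
            stdout=fh,
            check=True,
        )
    return out_file

if __name__ == "__main__":
    print(f"Wrote {dump_database()}")
```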

Make sure that what you’re trying to back up is actually storing its data where you think it is.

There’s more than one way this common problem can rear its head. Verify that you’re backing up the right files by checking for software data backup/maintenance instructions on product websites and forums. There are also utilities out there to help with this, although I couldn’t tell you much about those.

Besides simply failing to back up a file, you can end up restoring data to the wrong place or, if you have three or four versions of a file floating around, restoring the wrong version. The “last changed” date also may not be correct, which is the issue I ran into with a particularly terrible CRM program – the software’s “data file” is nothing more than an XML document specifying the folder the actual information is stored in, so it only shows the date the original file was created, while the files in subdirectories show the correct dates. This is not uncommon with database-driven applications, either.

Other things to be mindful of are the C:\programdata\ folder and the (user)\appdata\ folders. Lots of files end up in those folders; usually they’re not data so much as user settings and temp files, but some major software hides its data files there – Firefox, Chrome, Outlook, Minecraft, etc. Minecraft players will note that their lives can be summed up by the size of the (user)\appdata\roaming\.minecraft\world\ folder.

Some software from the Windows 3.1~XP era is also notorious for saving data inside its program folder or straight off the root of the C: drive, but this is not possible if you are using UAC and decline to grant a program administrator rights. The go-to example here is old copies of Quicken, which would default to storing all of its information in C:\quickenw\. If a program like this lets you choose where to save, save to your user folder to simplify things. (This is even good advice on non-Windows operating systems; about the only thing all operating systems agree on is that a user’s home folder is theirs to do with as they please.)

When recovering, verify data integrity THEN upgrade applications.

It’s easy for a data file with corruption issues to creep along unnoticed for years because the corruption doesn’t affect the version of the software you’re using – then, all of a sudden, when you go to upgrade, it throws all sorts of errors. It’s a very easy mistake to install the latest version of a program and upgrade your restored backup all in one go, only to end up with unusable data that you blame on a bad backup.

This one has an analogy in cars: you have an old car that runs well and never gives you problems, but when you lend it to a family member they experience all sorts of problems with it, because they’re expecting something different from what they get. It’s the same with different versions of a program and the data they expect. If a program gets something different from what it expects, sometimes it will just say, “I can’t upgrade this, it’s corrupt.”

If there’s something wrong with the data, be prepared to run in circles while you figure out ways to generate a clean copy for your software. If the software doesn’t offer a data repair tool (most do), a common and easy fix is to open the data in the old program and create a manual backup/export file, if the application allows for it. It doesn’t always work, but many times it will.

Plan for new hardware before recovering from a failure.

If you are integrating new devices at the same time you’re restoring data, plan the order you will set things up in before you start restoring, to minimize complications. In the case of my customer, he made the decision, based on his office building’s insecurity, to relocate his local backup computer from the workplace to his house, requiring a rework of his backup implementation. The solution we chose was to install SkyDrive on both computers to sync them through the cloud. A similar cloud service would have been required if he had chosen to add a tablet to his workflow, depending on which kind of tablet.

Implementing sooner rather than later allowed me to make the experience appear seamless: I installed SkyDrive on his new computer in my workshop before delivery and synced his data overnight on 35 Mb/s internet rather than the 7 Mb/s internet at his workplace.

You can usually save yourself some time or work just by doing a little bit of planning.  Still, don’t forget to be leery of running upgrades from the start.

Don’t buy an upgrade without planning for the upgrade after.

When upgrading or replacing obsolete software during a recovery, be careful to choose software based on how progressive the company’s adoption of new technology is. Cloud and handbrain (smartphone/tablet) adoption is the current trend, and at the moment it’s almost impossible to tell what tomorrow’s trend will be, but it’s important to keep an eye out for trends while they’re still optional. Many software companies will not survive the transition into the handbrain era; others will bend over backwards to support every platform that has any semblance of mainstream adoption. (And sadly, it has been decided we’re moving into the handbrain era regardless of whether that’s actually the right choice – that’s another rant, tho’.)

My personal approach to this is to identify the things that would be important to have accessible on the go and look for companies trying to fill that need. Most people don’t want to fight with a touchscreen input system for something like word processing, but would be happy to see or change their schedule on the fly. The more convenient something would be, the more likely someone is to attempt to sell it. If you choose wisely, you can enjoy these conveniences; if you choose poorly, you could find yourself looking at everyone around you and thinking, “I need to get that program…”

On a related note, keep a saved wishlist of parts with a vendor that meets your technical requirements, and update it every six months or so. This way, if something fails, you don’t have to go shopping around – just verify that your parts list is still current and place your order.

Keep a local copy of your info.

When choosing cloud software, always look for a provider who allows you to keep private backups.  There is very little as frustrating as losing all of your work to an account closure.

This is one reason why I like the cloud services that sync data to your hard drive, such as SkyDrive, Google Drive, and Dropbox, as opposed to straight SaaS software with no local copy. If Microsoft were to arbitrarily decide a file somehow violated their licensing agreement for whatever reason, there would still be local copies.

If you take most of this into account, you should do pretty well at protecting data and knowing what you need.  Now a short list of things to avoid when designing a recovery plan:

Excessive paranoia about security.  

I don’t want to get too in depth on security here because it’s a complicated topic and could easily take this post from 3k words up to 10k and beyond. And that’s just going through the highly plausible list – if you review every possible and imaginable case, you could be dealing with security forever. Building a plan that’s “good enough” for your specific purpose and features easy-to-implement damage control options is generally superior to making a perfect plan with multiple layers of encrypted backup stored in five or more locations that never touches the cloud. Going over the top is like the Scooby-Doo antagonists and their elaborate schemes – in the end, something will happen to your perfect plan, and it’ll be the fault of those meddling script kiddies.

That’s not to say that security is a bad idea. There are some very nice folder encryption tools available specifically for encrypting your Dropbox or SkyDrive folder. Just keep it simple. “I have a program to encrypt this and I keep a backup of the decryption key in a secure place” is an easy plan to implement with a minimal amount of work and expense. Two-factor authentication is also slowly becoming the new norm. (I strongly encourage you to turn on two-factor authentication if you’re willing to put up with it.)
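For the “encrypt it yourself and keep the key somewhere safe” approach, here’s a minimal sketch using the Fernet recipe from the third-party Python cryptography package (pip install cryptography). The file names are placeholders; the one rule that matters is that the key file never lives in the synced folder it protects.

```python
# Encrypt a file before it goes anywhere near the cloud; keep backup.key
# somewhere that is NOT the synced folder (USB stick, safe, etc.).
from pathlib import Path
from cryptography.fernet import Fernet

KEY_FILE = Path("backup.key")             # placeholder: store this OFF the synced folder
PLAIN_FILE = Path("taxes-2013.pdf")       # placeholder: the file to protect
CIPHER_FILE = Path("taxes-2013.pdf.enc")  # this copy is safe to drop in Dropbox/SkyDrive

def load_or_create_key() -> bytes:
    if KEY_FILE.exists():
        return KEY_FILE.read_bytes()
    key = Fernet.generate_key()
    KEY_FILE.write_bytes(key)
    return key

def encrypt_file() -> None:
    fernet = Fernet(load_or_create_key())
    CIPHER_FILE.write_bytes(fernet.encrypt(PLAIN_FILE.read_bytes()))

def decrypt_file() -> bytes:
    fernet = Fernet(load_or_create_key())
    return fernet.decrypt(CIPHER_FILE.read_bytes())

if __name__ == "__main__":
    encrypt_file()
```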

Generally speaking, the information you’re storing dictates the level of security you need, and some information should simply not be stored digitally at all.  Most files on a computer are not worth encrypting.  Sure, it might be embarrassing if someone hacks your computer/cloud account and your vacation photos end up on reddit, causing your derpy tourist moment to become a new meme, but that’s not going to obliterate your life the same way it would if that hacker discovered you’ve built what to him is the ultimate identity theft kit in a Word doc.   Bank account numbers?  Social Security numbers?  Credit card numbers?  Scans of your birth certificate?  I personally wouldn’t keep them on a computer, but if you do, that data does need to be encrypted.

All that time spent making a perfect plan? It could be better spent combing your computer and cloud services to make sure you haven’t left sensitive information stored in an insecure place – a PDF/doc/jpg file, or your webmail inbox/outbox – or just spent memorizing a password longer than 6 characters. (Anything important really deserves a unique 16+ character password, and XKCD’s joke on the subject almost deserves to be mandatory reading, because I think everyone loses sight of how little it takes to build a long password that’s easy to remember.)

Relying on technology in a way that introduces a single point of failure.

I’ve emphasized redundancy earlier, so this should feel redundant by itself, but any single point of failure is bad.

The thing that made me think about this one is probably a bit more obscure, and I haven’t seen it in years, but way back I had a computer with RAID mirroring that used a certain kind of proprietary RAID controller – more specifically, the kind where you can’t read a hard drive without it being part of the RAID array. This isn’t quite the redundancy you think it is: sure, if a drive fails you can swap it without a system outage, but what happens when the controller fails?

For a more down-to-earth example, a common mistake is to have RAID mirroring and, because your data is mirrored, assume you don’t need further backups. In common destruction incidents (fire/lightning/theft) where the entire computer or building is lost, if you’ve failed to keep an offsite backup (cloud service, tape drive, hot-swap HDs, rotating USB drives, etc.), you’ve lost data.

A less common, but still frequent, mistake is to use encryption and fail to keep proper backups of your key. If something happens and you can’t gain access to your key, it doesn’t matter how many backups of your data you have – you’ve just lost all of them to a single point of failure. Your key backup also falls into a special category where it does require some paranoia to properly secure. That is to say, you shouldn’t keep it on your computer at all. If you back it up to a cloud service, the account should have two-factor authentication enabled. If you back it up on USB sticks, use more than one and store them in an appropriate place, like a safe. If you have a safe deposit box, keeping one USB drive/CD/etc. there, along with a paper copy of the key in a legible font at a good size, would also be a good idea.

 

Well, that was long, but hopefully helpful.  It was also by no means exhaustive – there’s no end of information on the subject.

Now, to get back to programming!  😀
