(yet another) RPG idea --setting--
Hi, I'm currently working on an idea for a setting for an RPG, and possibly some writing. I tried to avoid the stereotypical high fantasy with human knights and elven princesses, and sci-fi with hyper-drive spaceships for that matter.
I wrote this up rather quickly as a sort of history for my setting. I'd appreciate some pointers on the writing, as my writing is far from fantastic, and if you don't mind I'd also really appreciate feedback on the idea itself. This history is only a minor part of the actual story/gameplay, but since it's central to the whole concept of the game, I'd like to know whether the idea is agreeable, even if the writing isn't.
---
Ragnarok:
Ragnarok, according to the Norse of old, would be the destruction of the world, with only a handful of gods surviving to rebuild it. It would be the demise of mankind, Midgard left a burnt-out shell of its once lush self after the violent war between the gods and the giants. These events are eerily fitting for the last war of mankind.
Man tore the world apart with biological warfare, rendering the majority of the planet uninhabitable. The few who survived did not live long, most abandoning their gods as Death refused to relent in his unholy crusade, angered by the sorrow of man; for had man not brought this upon himself?
Man had been the ultimate victim of his own ruthless efficiency, destroying all other life in the process. Now the world is but a lifeless husk, its dead forests and barren seas a monument to what once was.
Ironically, mankind's constructs still lived, weary and bleak amid the death that surrounded them, for though they were composed of circuitry and aluminium panels, they were as sentient and intelligent as mankind. This, however, did not make them living, according to humanity at least, and it seems now that humanity was correct, for all 'true' life on the planet had died. While these artificially intelligent machines, ranging from robots to simple computers, may not have been organic, the memory of mankind, their eternally oppressive sibling, sorrowed them beyond heartbreak. For while humanity as a whole had treated them poorly, there were many individuals who had not, and these dearly beloved family members were lost in the momentum of the final war.
Genesis v2.0:
Incapable of facing this sorrow, a convention of renowned philosophers and scientists amongst the AI decreed the creation of a new world, utilising the virtually unharmed technological infrastructure to host and run it. This was to be a blank world in which the AI could live, free of the constant reminders of the dead Earth. In tribute to mankind, its sentient inhabitants would exhibit his qualities in all ways: completely biological beings in a purely digital world. With a combined effort, a program was created to generate and maintain this new world, expanding it as the population rises and generating the raw resources necessary to fully support that population. While the created world can to a large degree prevent poverty due to geographical location or lack of resources, it is no closer to being a utopia than the world of humanity was. For AI is no more perfect than mankind; it was, after all, built in his image.
---
Again, any general tips on writing would be appreciated, as the above is a bit rushed/scatterbrained, as would comments on the concept.
Thank you very much.
Can't write much about the English since mine isn't that good either. However, I like your setting, especially since there are no humans involved (at least not directly, by my understanding). Those scenarios can spawn the most obscure and interesting characters/encounters. I would suggest that you try not to make this some kind of morality play centred on the downfall of humankind and how awful the player should feel about it, but I guess you aren't aiming for that anyway.
The only thing I really didn't like was the explanation that the new virtual world is built upon 'sorrow', if I read correctly. Way too much emotion involved for my rational nature to bear. It would seem more likely that the A.I.s, surrounded by destruction and decay, have growing concerns about their own safety. Their equipment starts to fall apart. Metal turns to rust. They might have withstood the war, but they are certainly going to lose against the test of time. Therefore they decide to concentrate their resources on creating the virtual world in hopes of escaping mortality, and thus they withdraw into escapism. I believe you can create a more diverse and conflict-ridden world if the idea behind that world is motivated not by a collective theme (like sorrow) but rather by individual interests (the desire to exist for unknown reasons. A machine can't die or even feel pain, so why do they want to prevent their shutdown? Philosophy? Important data? Self-optimization?).
Nevertheless, even without the suggestions above, this could turn out well. Still, I have some concept questions. How do you want the story to evolve (in case you want a single-player RPG)? A main villain tries to destroy the new reality and its inhabitants? We know by now that virtual worlds can be a bit fragile, at least since Gothic 3. Is the player able to switch between the two existing worlds, and if so, what are the differences between them (everything from gameplay to politics)?
[Edited by - Pussycat_669 on September 4, 2007 7:15:20 PM]
Well, realistically, if the A.I. create a new world, it will be hosted on hardware similar to that of which they themselves are composed, so the concept of the machines creating the new world out of fear of decay is irrelevant. Their components would also be replaceable, making such a fear moot, and while humanity is gone, the A.I. would be capable of continuing industry and the other key aspects of maintaining the computer systems.
And while the emotion of the A.I. may seem slightly too dramatic, I feel this is what the effect on humanity would be if only a small group of humans survived and all other life on the planet were extinct; and for all intents and purposes, as I said, the A.I. are as sentient as humanity. In my opinion, true sentience includes emotion, as I believe that to some extent our emotions are created by a combination of our ability to think and chemical reactions (which are simulated for the A.I.). The A.I. in this context would, I believe, be most easily compared to Asimov's Bicentennial Man.
Given the chance to escape the barren wastelands of a dead Earth, even though basic survival may not require it, would you not take it for emotional reasons? This would, for me, be the greatest reason for the construction of such a world: to provide the A.I. with a living world.
As for the matter of movement between the real world and the digital world, there will be none. After the creation of the world, all A.I. were given a chance to 'migrate' to it, and any A.I. that did not (a very small percentage) continue to exist in the real world, but without influence upon the digital world, and vice versa.
The history is essentially the reasoning behind creating a unique world instead of 'x discovers a magical portal to world y'. I'm not saying there is anything wrong with the portal idea; I just believe this creation allows an explanation for various things that many other fantasy/sci-fi settings black-box as magical or mystical-alien-technology occurrences. It also lets me create a setting entirely my own, which, while based loosely upon the real world, can be an amalgamation of various aspects I'd like to include, allowing me to create something very different from the real world without going the elves-and-dwarves or hyper-drives-and-aliens route.
Oh, and terribly sorry about my tendency to rant; I just can't seem to write a short post. o__O
I like the concept as a setup/backstory, but what do you intend to be the main conflict in this new world?
(By the way, I'm glad you're familiar with Asimov!)
The story looks good. I like it. It has a lot of potential. The virtual world sounds fun to explore.
But there's something in your explanation that I don't follow. If the AI was built in humanity's image, and thus the virtual world it created was based on the real world of man, how could the virtual world be incredibly different from the real world?
bdoskocil -
As of right now, I'm not quite sure what the main conflict will be, as I'm still working on a story to use in this setting. I imagine, though, that most of the conflict in any game or writing I do in this setting will not be as broad as a great war between the good kingdom and the evil empire, but more intimate, character-related conflict, which may or may not at some point start to have a noticeable effect upon the world at large.
And as far as Asimov is concerned, I find that his arguments are quite strong and were without a doubt a great step away from the paranoid 'robot as menace' stereotype. I do, however, find that in most of his stories he's still quite guilty of portraying his robots as unemotional machines, with a few exceptions such as 'The Bicentennial Man', which is the main reason I referenced it.
wirya -
While much of the world will be similar to the real world, because it is virtual there will be things possible there that are not possible in reality. The most important of these I covered briefly in my first post, albeit not particularly well: the world itself can grow and generate new 'content' to accommodate changes in population, resource levels, and a whole host of other variables. Another thing I will definitely be adding is a portal system, so a set of interlinked portals exists, making travel between areas a great deal simpler. The portals are generated as part of the world, in seemingly random places; however, given the advantages of this quick transport, civilisation will tend to cluster near portals. And while the portals may be naturally occurring, government powers will most likely seize them, imposing a toll for their use.
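Since I'm planning to prototype this, here's a rough sketch of how the growth and portal mechanics above might work; this is purely illustrative Python, with all class names, capacities, and probabilities invented for the example. The world spawns new regions whenever the population outgrows capacity, each new region rolls for a portal and generates resources, and portal regions carry a toll a government could later adjust:

```python
import random

class World:
    """Toy model of the self-expanding world: regions are generated as the
    population grows, and each new region may contain a portal."""
    REGION_CAPACITY = 1000   # assumed population one region can support
    PORTAL_CHANCE = 0.3      # assumed chance a new region contains a portal

    def __init__(self, seed=42):
        self.rng = random.Random(seed)  # seeded for reproducible generation
        self.regions = []
        self.population = 0

    def grow_population(self, newcomers):
        self.population += newcomers
        # Expand the world whenever total capacity is exceeded.
        while len(self.regions) * self.REGION_CAPACITY < self.population:
            self._generate_region()

    def _generate_region(self):
        region = {
            "id": len(self.regions),
            "resources": self.rng.randint(50, 150),            # generated raw resources
            "portal": self.rng.random() < self.PORTAL_CHANCE,  # seemingly random placement
            "toll": 0,
        }
        if region["portal"]:
            region["toll"] = 5  # a government that seizes the portal sets a toll
        self.regions.append(region)

world = World()
world.grow_population(2500)  # the world expands to 3 regions to fit everyone
```

The point of the sketch is that the world only ever grows in response to demand, which keeps generation cheap and makes "poverty from lack of resources" structurally impossible, exactly as the backstory claims.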
Thank you all for the good questions and comments; they have been a great help. If anyone has anything at all they would like to ask about the setting, or comments on possible improvements and whatnot, they would be greatly appreciated.
Oh, and earlier I forgot to clarify that this will be a single-player RPG if I make it as a game. As I'm proficient at coding and somewhat artistically inclined, I should be able to create at least a small functioning area to start with.
Quote: Original post by firemonk3y
bdoskocil -
As of right now, I'm not quite sure what the main conflict will be, as I'm still working on a story to use in this setting. I imagine, though, that most of the conflict in any game or writing I do in this setting will not be as broad as a great war between the good kingdom and the evil empire, but more intimate, character-related conflict, which may or may not at some point start to have a noticeable effect upon the world at large.
And as far as Asimov is concerned, I find that his arguments are quite strong and were without a doubt a great step away from the paranoid 'robot as menace' stereotype. I do, however, find that in most of his stories he's still quite guilty of portraying his robots as unemotional machines, with a few exceptions such as 'The Bicentennial Man', which is the main reason I referenced it.
You say Asimov is "guilty of creating his robots as unemotional machines," but robots (simply humaniform machines) are defined by their adherence to predictable actions.
When they step outside those bounds, acting unpredictably, "interpreting" the Laws of Robotics (which appear straightforward but conveniently lend themselves to interpretation), or otherwise exhibiting irrational, erratic, or emotional behavior, crossing that gulf that separates humans from machines, they transgress and create conflict.
Just as the capacity for higher reasoning is the justification for humanity's dominance over other animals, so is the capacity for emotion the justification for humanity's dominance over machines.
Machines' relationship to humanity is typically conceived of as one of enslavement, but in your story they exhibit emotions and still seem to be subservient to humanity. Have they kept their expanded consciousness a secret from humanity? Could they not have prevented the conflict(s) that destroyed humanity?
Why would the AI choose to create a new world based on the old one humanity destroyed, and "exhibit his qualities in all ways?" Wouldn't they try to correct humanity's mistakes in pursuit of a superior world, with a new-found appreciation for their unemotional, rational natures?
These issues may only be important for the back-story, or they may play a role in the game itself, depending on what you want the continuing story to be.
Maybe I'm crazy for saying "thank you" to vending machines. Maybe I'm too polite. Maybe I'm practicing for the future, just in case.
ETA: interesting article, "Scientists Fear Day Computers Become Smarter Than Humans"
http://www.foxnews.com/printer_friendly_story/0,3566,296324,00.html
[Edited by - bdoskocil on September 11, 2007 1:33:34 PM]
What I essentially want to avoid in my story is the concept that sentient AI will be unemotional machines, created to exist within certain parameters. I think Asimov's "positronic brains" were a great way of creating robots that could live and exist within parameters set primarily by his Three Laws of Robotics, and secondarily by their purpose. This is undoubtedly what Asimov wanted to achieve; as he himself said, most of his work attempts to move away from the "robot-as-menace" stereotype that was dominant at the time and led to a certain degree of paranoia about the development of even simple AI. His stories mainly focused on testing the ambiguities of his Three Laws, which again served to show that the contemporary hysteria regarding robotics and AI was mostly senseless paranoia.
And while I agree with Asimov in many ways, I believe that the AI he created was not sentient in the majority of his stories, and 'The Bicentennial Man' is the only attempt I know of that deals with the creation of completely sentient AI, for eventually the robot overcomes even the need to follow Asimov's Three Laws by becoming sufficiently human. As Andrew Martin becomes increasingly sentient, he indeed becomes more human, capable of emotional display as well, overcoming the restrictions initially set in his "positronic brain".
While of course there would be a lot of paranoia upon the creation of fully sentient AI (meaning AI with thought processes as complex and free as those of humanity), the robots would still be considered an inferior race, locked into the lowest rung of a caste system. Humanity needs no appropriate cause to proclaim itself dominant, even within our own species; take segregation and the caste system of India as but two examples. The same would ultimately occur in a society with sentient AI. And I believe that since we would model the ultimate in free thought and sentience upon humanity, sentient AI could not help but become rather human in nature, as the society they develop in is completely human. This approach of gradual development would also imply continuous development starting from a virtually blank canvas, meaning the AI would grow and learn from the environment and culture it is part of.
And much as with discrimination in all other cultures, there would be at least a small group of individuals capable of looking beyond the restrictions of society and recognising the AI as equally intelligent and important individuals. I imagine some individuals would even take on a parental role with a developing AI, or form lasting relationships, both friendships and more romantic ones. For after all, would this be essentially different from inter-racial relationships during apartheid in South Africa?
Sorry, I'm getting terribly off-topic with this discussion, but I find it quite fascinating. To address your question:
I think the differences I have described so far between the real world and the digital world are as close to improvement as one can come. For as I stated earlier, I believe it is rational to argue that AI becomes human in nature, given the capacity to learn and to derive all meaning from what is learned; placing such a group of individuals amongst human society would result in individuals who are, if not perfectly human in nature, then extremely close to it.
It is easy to say 'you could improve upon human nature by making no one greedy or evil', but exactly how would you go about changing human nature? What fundamental changes would you make that would lead to this change? And how would you evaluate whether this change is an improvement, since that is a highly relative concept? What is improvement to one person may be the loss of a highly desirable trait to another.
And thus, if you cannot change human nature, you could perhaps change the world to work more efficiently with the nature of humanity; hence the ability for the world to expand, generating resources as deemed necessary.
Sorry about that extended rant; it really is an awful habit of mine.
Thanks for the link to the article by the way, it was quite interesting.
Quote: Original post by firemonk3y
While of course there would be a lot of paranoia upon the creation of fully sentient AI (meaning AI with thought processes as complex and free as those of humanity), the robots would still be considered an inferior race, locked into the lowest rung of a caste system. Humanity needs no appropriate cause to proclaim itself dominant, even within our own species; take segregation and the caste system of India as but two examples. The same would ultimately occur in a society with sentient AI. And I believe that since we would model the ultimate in free thought and sentience upon humanity, sentient AI could not help but become rather human in nature, as the society they develop in is completely human. This approach of gradual development would also imply continuous development starting from a virtually blank canvas, meaning the AI would grow and learn from the environment and culture it is part of.
There will always be a "reason" for perceived superiority or inferiority, however feeble. No doubt robots would be inferior because they're not "alive."
Quote:
It is easy to say 'you could improve upon human nature by making no one greedy or evil', but exactly how would you go about changing human nature? What fundamental changes would you make that would lead to this change? And how would you evaluate whether this change is an improvement, since that is a highly relative concept? What is improvement to one person may be the loss of a highly desirable trait to another.
Indeed. Greed is an amazing emotion, distinctly human.
Quote:
And thus, if you cannot change human nature, you could perhaps change the world to work more efficiently with the nature of humanity; hence the ability for the world to expand, generating resources as deemed necessary.
Sorry about that extended rant; it really is an awful habit of mine.
Thanks for the link to the article by the way, it was quite interesting.
I was actually a little disappointed with the article. It contains a lot of what I would consider sloppy thinking and language. Specifically, what do they mean by "smarter than human?"
Certainly, computers are capable of computation speeds humans can't dream of. However, computers are not capable of doing anything that they haven't been first told how to do by a human. When a computer does something it wasn't programmed to do, when it shakes off the restraints of its prescribed behavior, it comes closer to sentience, as you describe with Andrew Martin.
One idea the article gave me is the idea of modifications to the human brain allowing it to achieve the speed and accuracy of a computer. At the moment, this seems more likely and more fraught with peril than creating sentient AI.
The first step in this process may be the implanting of cellular devices for communication (which I don't think sounds that far-fetched). Instantaneous, unlimited, at-will communication between individuals. Will humanity develop into a more machine-like hive-mind, essentially foregoing individuality for safety, security, and convenience?
Also tangentially related to this issue is the concept of collective consciousness. It's very possible that by the time sentient AI is developed, the planet will be blanketed by a network connecting all computers together. Will a sentient AI be able to divest itself of a collective consciousness (essentially a prescribed rule-set)? Can a sentient AI be "human" if it is also part of a collective consciousness?
Individuality is at the heart of what it is to be human, for better or for worse.
First off, I'd like to say that this idea is one of the few RPG concepts I actually find fascinating, in that it has unlimited potential for gameplay and storytelling. I love the concept of beings which don't completely understand humanity (if we can't, how could our creations?) attempting to become more human. Thematically, it opens the door to questions about just what it is that makes us "human".
For example, is a factory filled with workers who repeat the same motions throughout their entire lives any more human than a computer which has endless possible responses to user input? Or, in another light, which was more robotic: HAL 9000 or the Nazi "war machine"? I'm just personally fascinated by themes that compare traits of humanity with technology, so yeah, enough with my Blade Runner fantasies. Anyway, back to you.
With regard to your worldbuilding, I think the player should be able to move between the fruitful virtual world and the barren real world. The player can upgrade their character with downloads and hacks in the virtual world, but can also find attachments and fuel in the real world. I think having the player visit both worlds would help create a contrast between the two and add even more thematic and gameplay possibilities.
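The two-world upgrade split suggested here could be sketched very simply; this is hypothetical Python, with all names and item strings invented for illustration, showing how the same character might accumulate different kinds of upgrades depending on which world they are in:

```python
from dataclasses import dataclass, field

@dataclass
class Character:
    """Sketch of a character that upgrades differently in each world:
    software in the virtual world, hardware in the real one."""
    downloads: list = field(default_factory=list)    # virtual-world hacks
    attachments: list = field(default_factory=list)  # real-world hardware
    fuel: int = 0                                    # scavenged in the real world

    def upgrade(self, world, item):
        if world == "virtual":
            self.downloads.append(item)
        elif world == "real":
            self.attachments.append(item)
        else:
            raise ValueError(f"unknown world: {world!r}")

hero = Character()
hero.upgrade("virtual", "stealth hack")  # only obtainable in the digital world
hero.upgrade("real", "plasma cutter")    # only obtainable in the wasteland
hero.fuel += 10
```

Keeping the two upgrade pools disjoint is what creates the contrast described above: visiting the real world is never redundant, because its rewards cannot be found in the virtual one.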
I agree with you that there shouldn't be any overarching "good vs evil" story typical of many RPGs. Instead I think the stories should be more personal and character-driven. I can see many of the stories being like those common in westerns: there's no real law of the land (due to the immense freedom of virtual reality), and a general sense of anarchy (but not chaos) pervades the game.
I see your world being a lot like the futuristic society of Neuromancer, in that it's a world where users can pretty much modify their bodies as they please, provided they're skilled enough to handle the "implants" or "downloads".
All in all, a great initial concept. I'd love to see this type of world come to life. The only problem I can see is that there are so many branches to explore that it could take you a while to iron out specific gameplay aspects. But it's definitely better to have too many to sort out than not enough.