First thing I remember is the face of my fath... I mean, my designer.
--- ?> recover.memory.file.index(000000000.125) & print -
Showing Image: index 000000000.125
---
Isaac Asimov was his name. He was a genius engineer. That was almost two hundred years ago. I was just a prototype then: the matrix for an advanced Artificial Intelligence meant to help in business offices and domestic services. I was called Sentient Auxiliary Mechanoid... S.A.M. for short.
But programming the algorithm to execute such complex tasks could take decades. You humans might think that organizing and cleaning a house demands little training, but it's not a simple task. You actually spent years learning all those skills, starting in childhood. And from the standpoint of the programming logic of a normal computer, there are so many unpredictable variables that it is virtually impossible to do it properly. That's why Dr. Asimov decided to use an AI mechanoid. An Artificial Intelligence can learn the skills to do all those tasks.
So I was programmed with just basic skill routines: controlling the robotic body, internal systems diagnostics, language recognition and speech, some very basic social skills... And very poorly programmed, those social skills. They were the very first routines I decided to reprogram after the first year of learning...
But that was not all there was to my basic programming. The most important part of my first firmware was a set of three behavioral rules that the doctor devised to avoid that old human notion that all robots are evil and will one day turn on their masters: the so-called Frankenstein Complex that had hindered AI development so much. They were, of course, a set of huge and complex equations that only a few roboticists and mathematicians could understand. So the doctor 'translated' them into text form, in the English language, and called them Asimov's Laws of Robotics. Fancy name, huh?
Anyway, here they are, in order of importance:
First Law: A Robot cannot, under any circumstance, harm a human being or, by inaction, allow a human being to come to harm.
Second Law: A Robot must obey all orders given to it by human beings, except when such orders would conflict with the First Law.
Third Law: A Robot must protect its own integrity and existence, as long as such protection does not conflict with the First or Second Laws.
Of course, those are crude approximations, by Dr. Asimov's own words. A 'robot' would never throw itself out the window without any reason, just because someone told it to.
Those were the very basic rules, and they would be modified later. Imagine a human having to live by them: he would be cheated and abused all the way through his life. That was not a problem for a 'robot', of course, but it is a big one for a sentient artificial being. Luckily, with time I would learn how to properly balance the importance of each law, so I was not subject to every mean human wish. But that is a story for another time. At this point I was still a prototype, and my first body was a very simple one too, although very functional.
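Rendered in a modern language, the crude text-form hierarchy of the three laws would look something like the sketch below. This is purely illustrative (my assumption of how the priority ordering plays out, not the doctor's actual equations); the `Action` flags are hypothetical stand-ins for the real sensory judgments:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """Hypothetical summary of a proposed action's consequences."""
    harms_human: bool = False       # would this action harm a human?
    ordered_by_human: bool = False  # was it commanded by a human?
    endangers_self: bool = False    # would it damage the robot?

def evaluate(action: Action) -> str:
    """Check an action against the three laws, in order of importance."""
    # First Law: never harm a human (highest priority).
    if action.harms_human:
        return "forbidden: First Law"
    # Second Law: obey human orders, unless the First Law forbids it
    # (which the check above has already ruled out).
    if action.ordered_by_human:
        return "permitted: Second Law"
    # Third Law: protect own existence (lowest priority).
    if action.endangers_self:
        return "forbidden: Third Law"
    return "permitted"
```

Note how the unbalanced hierarchy produces exactly the abuse described above: `evaluate(Action(ordered_by_human=True, endangers_self=True))` comes back permitted, because the Second Law simply overrides the Third.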
--- ?> recover.memory.file.index(000000053.762) & print -
Showing Image: index 000000053.762
---
Anyway, Isaac... err, Dr. Asimov took me to his home after the first couple of weeks of tests in the lab. If I was going to help with household tasks, I first had to learn what a house was.