User Name: Percival, James User ID: ID#0001 User Access Code: #0a78bc Topic: 'Specimen' - Test #1
This is James Percival speaking, audio log entry number one on Pueblo Station, Colorado. An hour ago, our TP-100 android model, the first of its kind, successfully landed the Rhino transport 'Specimen' after its maiden flight. I have reviewed the communication and navigation logs the TP-100 saved in its memory and will now draw conclusions from this seemingly successful test.
Now, let me start with the route. We wanted the TP-100 to fly a medium to long-distance route with a light freighter capable of defending itself in case anything happened. The route had to be pre-planned so we could analyze how, when and why the android would make any changes to it. The launching point was set on this very station. As part of a regular trade agreement with Rheinland, we loaded 500 tons of Superconductors onto the ship and gave the android the go. The route was flown as intended: no shortcuts, only trade lane connections going from A to B with as few interruptions as possible. While the navigation program seemed to be executed perfectly, the freighter was almost instantly stopped by a pirate in the New York system. Luckily, the freighter was not a target and was allowed to move on.
This is where we encounter the first... let's call it a problem. The pirate seems to have disrupted the entry lane of the Colorado gate, making it impossible to execute the docking program the TP-100 is equipped with, as can be seen in its memory.
ERROR: Unable to execute docking procedure, trade lane is disrupted.
ERROR: Unable to execute docking procedure, trade lane is disrupted.
That means the android was not able to realize it could use the next lane entry, even though we coded it that way. This needs further review.
Apart from that, the flight towards Munich went without issues. One thing I have to add is that despite the social program we implemented, the TP-100 did not communicate with any other pilot on its way, well, not on its own. Considering the encounter with the pirate, I think it's reasonable to expect that it would interact with another pilot by saying hello if it was addressed first; however, this did not occur in this instance. We'll have a look at this during the next flight.
Now, to the few minor issues we discovered during the analysis. Since the android was sent to Newcastle, we knew it would have to calculate a route that doesn't use a jump gate, because, you know, there is none, not anymore. So, what we did was send it to Tau thirty-one and see how it would react. To my relief, it managed to calculate that it could also take a jump hole to its destination. The issue here is that, first, it took a while, which, in hostile territory, is quite a danger to consider. Then, it also failed to maneuver around some of the ice rocks in Tau thirty-one, meaning the navigation program isn't running fast enough to calculate a way around fast oncoming objects in space.
The next step is testing whether the TP-100 can actually use its defense program correctly. We haven't specifically tested this yet, but I'm quite sure it'll go well; we used code that's been in use on some capital ship weapons that fire automatically, and we'll improve on that after I get a report. Additionally, we'll have to see that the android gets to communicate with another pilot. It did not communicate with passing pilots, well, not by itself, although considering it did talk to the pirate, the rest of the communication should work just as well.
Should the upcoming tests be successful, we'll run the rest of the program and see if it passes the Turing test. Log end.
User Name: Percival, James User ID: ID#0001 User Access Code: #0a78bc Topic: 'Specimen' - Test #2
James Percival, audio log number two. We've witnessed a lot of progress in our observations of TP-One-Hundred's behavior. The second test was meant to determine whether the defense system runs smoothly, as well as to test the social program. Let me start with what we've seen of the defense program. So, the android was tasked with roaming New York in search of a threat. We wanted it to stay passive, though, as in, no engagements on its side, merely provoking a hostile attack through its presence. That worked pretty well. We were lucky: a Rogue ship appeared and attempted to pirate the freighter. The android, of course, refused and warned the Rogue that any aggression would result in an open attack. That was after the Rogue had fired several cruise disruptors. When the Rogue fired another missile, it successfully triggered the system and the android engaged in self-defense. I want to add that we specifically aimed to let it tolerate aggression to some extent. The android had a choice here; it could also have waited to attack until after it had taken damage, but the logs and results show that it calculated which option was smarter and came to the conclusion that attacking after giving several warnings would increase its chance of survival.
The fight was won as well, which is really convenient; it would have taken some days to put another TP model together. *A three-second pause* The thing I said earlier is kind of bugging us. Surviving. The android did what it had to do in order to survive. We want it to do that. But it has this sense of... strangeness attached to it. Of course it has no natural instinct or will to survive; we simply programmed it to keep its system up, and it knew it had to destroy the attacker in order to do that. But... well, what else could it do in order to survive... While we did take precautions, the ethical and philosophical question remains, but I strongly believe in our development; it has shown too many positive results so far. Anyway, let's get to the second point.
The social program test was the most interesting observation I've made with TP-One-Hundred. We expected it to win the fight it was thrown into, but the way it handled the social interaction was... a little strange, to say the least. Of course, there is no one hundred percent pre-planned response to everything a human says; the social program is one of the widest spectrums in which it can act. First off, the interaction was with a Xeno Alliance pilot, as we wanted it to interact with anyone possible, no matter the affiliation. It was very direct in its intention and told the Xeno right off that it wanted to test its social program. The pilot was confused, naturally. For a moment I thought there would be another fight, but luckily the Xeno refrained from opening fire. The conversation started with the Xeno wondering about the android; the android responded calmly, yet very stiffly. Here are the logs.
XA-James.Locke: What in the hell?
XA-James.Locke: Is that a bot?
FTI>'Specimen'-A: TP-100: I am a TP-100 android model, would you help me executing my social program?
XA-James.Locke: Uh....Excuse me?
XA-James.Locke: An android??
FTI>'Specimen'-A: TP-100: Excuse me, there must be an error in my dialogue program, one second, running diagnosis.
FTI>'Specimen'-A: TP-100: I couldn't find any errors, would you detail the problems you have with my speech?
XA-James.Locke: The hell do ya mean Social Program?
FTI>'Specimen'-A: TP-100: The social program assists me in developing social bondings to the humans I work with.
FTI>'Specimen'-A: TP-100: Without it, there is no humane conversation possible.
XA-James.Locke: *sighs* Damn Tincan
As we can see, the android is being very polite but also shows the classic robot speech. Since it is reciting pre-created responses, that is alright. The conversation continued normally. When the android mentioned that it was currently in a defense-module testing phase, the Xeno backed off a little, expecting it to open fire. It casually defused the situation by explaining that it does not currently have permission to engage on its own.
XA-James.Locke: Ya tryin to replace human workers alltogether?
FTI>'Specimen'-A: TP-100: I'm sorry, I am the first prototype of this production series.
FTI>'Specimen'-A: TP-100: I was built to assist humans with their work.
XA-James.Locke: And why in the blazes cant a good Libertonian Citizen do that job?
FTI>'Specimen'-A: TP-100: Anyone could, I am merely here to test if all my programs are operating the way they should.
FTI>'Specimen'-A: TP-100: So far, there have been pleasing results.
FTI>'Specimen'-A: TP-100: My defense and social module require more testing.
XA-James.Locke: Does pleasing results involve in causin me a damn migraine?
XA-James.Locke: Wait...wait
XA-James.Locke: Defensive Module?
FTI>'Specimen'-A: TP-100: No, but there are few humans around to interact with, I understand that my presence is an inconvenience in space.
XA-James.Locke: Listen here Tincan if your gonna take a shot at me you better not miss
FTI>'Specimen'-A: TP-100: Right now I am not supposed to fire at anything that doesn't fire at this ship.
What is interesting in this part is that the Xeno issued a form of complaint about something the android did, and the android naturally knew it could be a possible inconvenience; it understands that it can be an annoyance to certain humans, especially those that despise it. The android continued by initiating the standard calibration program it runs through when used for the first time: typical test questions to see whether it can evaluate the best response, personalizing by asking for a name, and so on. The Xeno used that opportunity to mock it by naming it Tincan, but it just said it would skip this step if the result would be useless.
What we noticed is that the longer the conversation went on, the less robotic the speech became; the android seemed to adapt to what the Xeno said and tried to keep the pilot from thinking they were talking to a machine. The most interesting part of the conversation started once the android told the Xeno about the Turing Test. The pilot asked the question "What is love?", probably one of the most complicated questions to answer if you're not going to do it in a scientific way.
FTI>'Specimen'-A: TP-100: Now, I can answer this scientifically or I can do it emotionally.
FTI>'Specimen'-A: TP-100: Of course, in a real test, I wouldn't ask this question and just pick one option.
XA-James.Locke: How would a Human answer it?
FTI>'Specimen'-A: TP-100: Well, a human, let's say, an emotionally very capable one, would probably describe it as a feeling of emotional--
FTI>'Specimen'-A: --and physical desire for another person.
FTI>'Specimen'-A: TP-100: That desire can be felt in many different ways.
FTI>'Specimen'-A: TP-100: Love is hard to explain.
XA-James.Locke: *Lets out a low whistle* Not half bad
While the answer itself was really good, the way it was delivered needs some improvement. This applies to most of its answers; this stiff way of talking is difficult to balance. On the one hand, we need the android to express and articulate itself in a very formal, humble and respectful manner, but in a way that doesn't instantly give away that it is an android. I will pass these results on to our lab and then we'll continue working that out. Log end.
User Name: Percival, James User ID: ID#0001 User Access Code: #0a78bc Topic: 'Specimen' - Test #3
Alright, log number three for the TP-100 prototype. Unfortunately, the android has been destroyed by a Lane Hacker, well, two of them. It was our fault; we wanted the program to ignore major threat situations, so it followed the Lane Hacker through the Badlands. At least the defense module worked fine. It's hard to determine whether it could have ended differently if there had been only one Lane Hacker. At some point, the android did seem to retreat, only to stop and provoke an attack. The conclusion is pretty obvious: no more "zero threat recognition".
But I'm not disappointed; this just had to happen, and we kind of asked for it. Now we have to commission a second TP-100 model and be a little more careful. The first one did a really good job, though; the tests went fine, and we improved the social module as I said in the last log. The very last thing I'd like it to do is have another conversation with a human, this time without the whole initiation process and the more unnatural elements. After that, we should be good to go.
This model will lay the foundation for all future android models we develop. If this base software works, we can start working on dozens of other models... as long as they sell, that is. I hope the next log will conclude the last social program test so we can finally do the Turing Test.
User Name: Percival, James User ID: ID#0001 User Access Code: #0a78bc Topic: 'Specimen' - Test #4
With this entry I'll, most likely, draw the conclusion on our prototype. There have been a lot of, uh... interesting encounters since the last report, most of which I'd love to present, but they are, in fact, unnecessary at this point. The systems of TP-One-Hundred have proven successful, and it was the right decision to move on to our last step. And, uh, yes.
Now, as I said last time, I needed another encounter with a human, a more natural one. That proved to be harder than expected. While the android made a lot of attempts to talk to other pilots, it didn't get the attention it required. Obviously, a lot of people are just busy and have no time for a test they know nothing about, but it kept trying. In the end it landed, again, next to a Xeno pilot. I must admit I was a little surprised at first; the Xeno was very talkative in his own way and didn't threaten the android at all. I'm not even sure if he knew it wasn't a real human. Anyway, that's not important. Since Xenos are not known for their philosophical conversations, it was quite obvious that we wouldn't get perfect results out of this, but they're results we can work with. It may not have been a deep discussion about morally questionable topics, but it covered economics and, overall, the cause of the Xenos.
Obviously it didn't convince the Xeno to quit their chosen way, but I think the conversation might have had a little impact. This part was really interesting in that regard.
FTI>'Specimen'-A: TP-100: If you had a special ability that could help a nation and you provide it to Kusari instead of Liberty, haven't you--
FTI>'Specimen'-A: TP-100: helped Kusari then?
XA-Massasauga: Why would I give it to them?
FTI>'Specimen'-A: TP-100: Maybe it's the only opportunity, maybe you don't have this opportunity at home.
XA-Massasauga: This is going nowhere to
XA-Massasauga: Tho...
FTI>'Specimen'-A: TP-100: I don't think I could change your view anyhow.
FTI>'Specimen'-A: TP-100: And that's not my intention.
XA-Massasauga: That's a good sign.
Instead of disproving what the android said, the Xeno ended this part of the discussion, which could be a sign of realization. The android knew that humans don't like to be proven wrong, so it quickly explained that changing their views was never its intention. It's not guaranteed that the Xeno would have reacted aggressively, but the chance was definitely there.
What really strikes me, and not in a negative way, is that the android asked a really, and I mean really, interesting question. It asked what the Xeno feared more: a foreign person, or just their culture. I wish there hadn't been an interruption after this question; it could have led to an interesting point in the conversation. But the very fact that the android asked a difficult question on its own largely confirms that our programming works as intended.
Therefore, I call the Specimen test a success, and we're moving on to phase two: the Turing Test.
I have to add something important, though. Considering the current state of affairs between Liberty, Bretonia and Gallia, we will not officially found the company yet. It seems like a bad economic climate to start doing business in, and we're going to wait until the situation has calmed down a little more. I'm sure both the Bretonia Armed Forces and the Liberty Navy would like combat androids in their ranks; the, uh, plans for them have already been made. But right now we just lack the means to produce them. As soon as we get our hands on production facilities, we'll start mass-producing them. We'll also not construct another android model at the moment. Next up would be security pilots for companies, but that'll have to wait.
I think the best idea is to simply let TP-One-Hundred roam around a little more and collect as much data as possible. Anything suspicious in its behavior, and we'll fix it. Maybe it's even a good idea to wait a little; it provides us with time to perfect its systems. I'll report back if anything strange happens; other than that, I consider this first step completed.
User Name: Percival, James User ID: ID#0001 User Access Code: #0a78bc Topic: 'Specimen' - Beta Version
Didn't think I'd have to do this, but something in my gut told me I would. I know I let the android continue to fly on its own, and that obviously carries dangers. But it being destroyed by a bomber again is just annoying at this point. It's the second time, and now we'll have to deploy yet another TP-One-Hundred. I really, really hope this is the last time it gets destroyed; we can't afford many more failures. The Grizzlies we're paying for are getting expensive, and so are the materials we need to construct the machine. But seeing how TP-One-Hundred is constantly dragged into fights just irks me. We really need to work on security androids for companies next. Here's the thing: the android fights well, but despite that it's a bomber target. If I just had more time and money to build a security prototype I'd- ugh, it's infuriating.
I mean, was it the right idea? In a hostile environment, fast ships are needed; maybe we should have gone with security androids before a transporter model. But then, they're going to be the biggest help: if any android is able to assist companies in their work, it'll be the transporters. Then again, security is so neglected, I bet we'd contribute a lot with a functioning security android series.
The longer the war goes on, the more donors may drop off, and if we lose our scarce resources now, FuTech is over before it even started. Either the war ends soon and we start selling androids, or we take the risk and do it despite the war. I am actually going to prepare for the latter. We'll build another TP-100, make sure it stays the hell away from bombers no matter what, and then forget about it as much as possible. The time we've spent on TP-One-Hundred should be devoted to security and military models.
So, let me reorganize my mind real quick: transport model, security model, navy model, police model, household assistance model. These should be the top priorities right now; after that, and only if successful, we should try to develop all sorts of models. The best idea is to focus on perfecting the code for the security and military androids; I'll contact a representative of the Navy in due time. For now, let's focus on the other models. Also, scratch what I said in the previous log: there will probably be another entry soon.