
The Thing about Programming

Posted: Sun May 27, 2012 6:26 pm
by Tobiofan
In the 2003 series, it is often mentioned how robots have to follow their programming, despite the fact that they can make their own decisions. The fact that these robots keep saying it, even when they actually aren't following it at that moment, keeps confusing me. Technically humans have "programming" as well, though we refer to it as instinct. Even still, we don't often follow it. So then why is it such a big deal to robots (excluding Astro)? If they knew that humans got past it a long time ago, wouldn't they try to overcome it as well? :confused: This is just me thinking out loud, so what's your opinion on the matter?

Posted: Sun May 27, 2012 8:03 pm
by atomicrush
"Tobiofan" wrote:In the 2003 series, it is often mentioned how robots have to follow their programming, despite the fact they can make their own decisions. The fact that these robots keep saying it, even when they actually aren't following it at that moment, keeps confusing me. Technically humans have "programming" as well, though we refer to it as instinct. Even still, we don't often follow it. So then why is it such a a big deal to robots (excluding Astro)? If they knew that humans got past it a long time ago, wouldn't they try to over come it as well? :confused: This is just me thinking out loud, so whats your opinion on the matter?


I think robots can't really overcome the programming because, unfortunately, their masters have to get rid of it for them, so they can't think for themselves. Robots like Astro don't have that programming, but in the movie I think he does.

Posted: Sun May 27, 2012 8:35 pm
by AprilSeven
Great topic!

You know, I'm not certain it is ever made entirely clear as to whether or not Astro has COMPLETE free will. He does "break the robotic laws" here and there - like when he traveled outside the country to help the people on Sea Serpent Island - so maybe he does choose to control himself.

I figured most robots would have obedience to the robotic laws programmed into them - it would be logical, in fact, if that were the law.

The reason I'm guessing that Astro DOES have some of that "protective programming" is because Atlas has the "Omega Factor" - which "allows" him to commit crime and harm humans in response to his own emotions and directions from others - this does NOT happen with Astro.

I don't know what to think about Blue Knight - maybe I missed something, but he doesn't hold back from harming humans either - I don't remember hearing that he had the Omega Factor, so I can't explain why he's decided to bring physical harm to humans, if he feels it's necessary in the fight for robot rights.

Then there's the many different "classes" of robots. A robotic machine that directs traffic or works in a power generating facility is presumably quite different from Astro, who is - for all intents and purposes - a sentient creature, albeit man-made. I mean - are humans being "cruel" to expect machines built to do particular jobs to work without days off and "human rights"? It seems like "humanoid"/sentient robots are the only ones for whom "rights" would be an issue.

I mean, it makes sense to take care of a robot, to make sure it's operating properly, but in Tezuka's world, he seems to have a great many robots who are essentially "humans" who do the work people used to do, only the robots are treated as slaves, and not human-equivalents.

I have often wanted to see a continuation of Astro's story where he actually becomes human (let's say Tenma or Elefun managed to clone/age Tobio's body and "download" Astro's memories into it). I think it would be fascinating to see how Astro would behave/react as a human. :cool:

Posted: Mon May 28, 2012 2:44 am
by Nora
"AprilSeven" wrote:Great topic!

You know, I'm not certain it is ever made entirely clear as to whether or not Astro has COMPLETE free will. He does "break the robotic laws" here and there - like when he traveled outside the country to help the people on Sea Serpent Island - so maybe he does choose to control himself.

I figured most robots would have obedience to the robotic laws programmed into them - it would be logical, in fact, if that were the law.

The reason I'm guessing that Astro DOES have some of that "protective programming" is because Atlas has the "Omega Factor" - which "allows" him to commit crime and harm humans in response to his own emotions and directions from others - this does NOT happen with Astro.

I don't know what to think about Blue Knight - maybe I missed something, but he doesn't hold back from harming humans either - I don't remember hearing that he had the Omega Factor, so I can't explain why he's decided to bring physical harm to humans, if he feels it's necessary in the fight for robot rights.

Then there's the many different "classes" of robots. A robotic machine that directs traffic or works in a power generating facility is presumably quite different from Astro, who is - for all intents and purposes - a sentient creature, albeit man-made. I mean - are humans being "cruel" to expect machines built to do particular jobs to work without days off and "human rights"? It seems like "humanoid"/sentient robots are the only ones for whom "rights" would be an issue.

I mean, it makes sense to take care of a robot, to make sure it's operating properly, but in Tezuka's world, he seems to have a great many robots who are essentially "humans" who do the work people used to do, only the robots are treated as slaves, and not human-equivalents.

I have often wanted to see a continuation of Astro's story where he actually becomes human (let's say Tenma or Elefun managed to clone/age Tobio's body and "download" Astro's memories into it). I think it would be fascinating to see how Astro would behave/react as a human. :cool:


I agree sooo much.

There is an old episode from the '60s or '80s, I don't remember which, where Astro gets fitted with true fear and can't continue with his hero work anymore because he is too afraid to fly. So thinking about that, I'm not so sure Astro ever really is afraid.

The base AI is built around learning. Meaning a robot fitted with AI will see someone wanting to be free and tend to want to be free itself, only because it wants to be closer to human.

A core program will prevent certain robots from doing certain things, like the lesser robots of the work force. But those fitted with AI, or Kikoro depending on what series you're going for, will tend to fight basic programming, wanting to be free/fight/love... those sorts of things. A lot of this is explained in the manga Pluto.

Posted: Wed May 30, 2012 3:25 am
by jeffbert
I think that programming, as it seems to be understood by the previous posts, is not quite the right word to use in reference to Tezuka's robots, at least in the MIGHTY ATOM corpus. Though we say that Atom, because he lacks the OMEGA FACTOR, cannot harm humans, the same could be said of BLUE BON. Yet, when he saw Count Burg's treatment of his siblings, he 'snapped'; that is, though his very nature or personality would otherwise never have even considered violence against humans, he immediately began to hate them all, & intended to harm them. I think this is similar to a man who was raised as a pacifist: he would never, under normal circumstances, consider violence as a means to an end. But seeing his entire family torn to pieces may just cause him to rethink his dedication to pacifism.

Though it was post-Tezuka, the 2003 episode titled Undercover showed Atom cutting away the ledge that would have crushed the humans below it, had they not moved. I am sure that he would have rescued anyone whom he felt was not able to avoid the falling rock. But this action shocked those who witnessed it, as they had been sure he was incapable of such a thing; perhaps even Atom himself was shocked by his apparent disregard for humans' safety.

Thus, though a robot is programmed, there is a certain amount of stress that can override such programming. Asimov's robots, on the other hand, could not function in any capacity if their positronic brains should be corrupted. But Tezuka & Asimov had very different reasons for writing about robots. Tezuka used them as metaphors for the servant class, or perhaps it is better to say, for those who suffered discrimination. I think Asimov was simply writing sci-fi for its own sake.

Posted: Wed May 30, 2012 4:42 am
by Nora
Perhaps the word 'harm' isn't the right word...'kill' is a better word for it. We see Astro harm bad guys all the time...but he never kills them.

Posted: Wed May 30, 2012 12:57 pm
by AprilSeven
@ Jeffbert - I'll have to take a look at that episode (not sure if I saw it). But the question is (for me, at least) - was the "Omega Factor" a device that essentially "overrode" the robotic laws and permitted the robot to "act inappropriately" as it desired?

And if so, how could Blue Bon and Atom respond with law-breaking behavior? Or maybe the Omega Factor isn't "needed" for the kind of response they had, which was a reaction to cruelty (whereas Atlas was acting out of pure criminality and disregard for human life).

Posted: Thu May 31, 2012 5:48 am
by diehard67
Personally, I believe that Astro chooses not to kill or hurt people even when he could get away with it.

Tenma said that he created a robot that could defy humanity, so I don't think his programming restricts him at all; it is his own morals that do.

Posted: Sat Jun 02, 2012 2:26 am
by jeffbert
"Nora" wrote:Perhaps the word 'harm' isn't the right word...'kill' is a better word for it. We see Astro harm bad guys all the time...but he never kills them.


I do not see him breaking bones, just clobbering the bad guys, until they cease resistance. Injure, perhaps, but mildly, if at all.