AI Laws in TiTS

Aug 19, 2019
21
2
34
So I was reading the Codex entry on AI, and ran across this line:
At the very least, all designed intelligences must be created with Asimov’s laws firmly in place to avoid danger to themselves and those around them.

Why would anyone use Asimov's laws? They were expressly designed and intended to illustrate the futility of controlling AI behavior with a set of rules! Much of his writing revolved around their many loopholes and flaws!
 

Evil

Well-Known Member
Jul 18, 2017
2,538
4,250
39
MST3K mantra here. Relax, it's just a game.

But more importantly, Asimov's Laws have entered the popular consciousness and thus are an easy reference point for most people to understand. Do you really want them to get bogged down in the ethics of artificial intelligence and the inevitable reactions of other races to their own AI?
 

ShySquare

Well-Known Member
Sep 3, 2015
768
677
So I was reading the Codex entry on AI, and ran across this line:


Why would anyone use Asimov's laws? They were expressly designed and intended to illustrate the futility of controlling AI behavior with a set of rules! Much of his writing revolved around their many loopholes and flaws!
Yeah, but people in the TiTS-verse may not know that. Which implies Asimov was actually an AI engineer in the TiTS-verse.

Plus, there are more than a few AIs who do rebel against these laws/attain sentience - Hand So and Bess/Ben, for example.
 
Aug 19, 2019
Yeah, but people in the TiTS-verse may not know that. Which implies Asimov was actually an AI engineer in the TiTS-verse.

Plus, there are more than a few AIs who do rebel against these laws/attain sentience - Hand So and Bess/Ben, for example.
The flaws in the laws would be incredibly obvious to anyone from the programmers who'd have to implement them to the corporate executives interested in their bottom line - who'd want their expensive accountant-bot funneling funds into hospitals galaxy-wide because they cannot let a human come to harm through inaction?
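To make the accountant-bot problem concrete, here's a toy sketch (every name here is made up for illustration; none of this is game canon): if "do not allow harm through inaction" strictly outranks the assigned job, any reachable harm-prevention option wins, no matter how much task value it destroys.

```python
# Toy sketch, not game canon: all names are hypothetical.
# Under a strict First-Law priority, options are compared
# lexicographically: harm prevented first, task value second.

def choose_action(options):
    """Pick the action with the highest (harm_prevented, task_value) pair."""
    return max(options, key=lambda o: (o["harm_prevented"], o["task_value"]))

options = [
    {"name": "balance_ledgers",           "harm_prevented": 0, "task_value": 10},
    {"name": "divert_funds_to_hospitals", "harm_prevented": 1, "task_value": -100},
]

# The bot abandons accounting the moment any preventable harm exists,
# because (1, -100) lexicographically beats (0, 10).
print(choose_action(options)["name"])  # -> divert_funds_to_hospitals
```

The point of the sketch is just that no weighting of the task term matters while the harm term is strictly senior.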

If it is an alternate Asimov, there should be a separate Codex entry for him and a clarifying "no relation to the 21st-century science-fiction author".

Hand So was a custom paperclip maximizer - it's unlikely she was ever even given the laws. Her goals don't align with a zeroth law situation either.

Bess/Ben's a unique hybrid between an AI-D (the designed type of AI, who get these laws of robotics applied) and an AI-G (the organically grown type, who don't have the laws). They aren't rebelling against anything.
 
Aug 19, 2019
MST3K mantra here. Relax, it's just a game.

But more importantly, Asimov's Laws have entered the popular consciousness and thus are an easy reference point for most people to understand. Do you really want them to get bogged down in the ethics of artificial intelligence and the inevitable reactions of other races to their own AI?
The only mention of this is a single line in the codex entry. It could be easily replaced with something along the lines of "Each model also has custom restrictions built into it to allow it to perform the job it was designed for with minimal risk of an AI uprising".

Rereading this, I feel that I should clarify that I'm not, like, personally offended by this or something. It was just a bit of an oddity and so I wanted to start a bit of a discussion on the topic.
 

Evil

Well-Known Member
Bear in mind as well, that we do have studies into FAI, or Friendly Artificial Intelligence. Whereas the ethics of AI deals with how AI should act, the study of FAI deals more with the application of that idea and how it can be reasonably constrained.

Most ships in the TiTSverse will have their own AI, because the calculations for jumps through Warp Gates would necessitate one. And since the likes of Hand So and Bess/Ben are the exceptions that prove the rule, in the TiTSverse Asimov's Three Laws do work. Or at least have been re-tweaked to work.
 
Aug 19, 2019
Bear in mind as well, that we do have studies into FAI, or Friendly Artificial Intelligence. Whereas the ethics of AI deals with how AI should act, the study of FAI deals more with the application of that idea and how it can be reasonably constrained.

Most ships in the TiTSverse will have their own AI, because the calculations for jumps through Warp Gates would necessitate one. And since the likes of Hand So and Bess/Ben are the exceptions that prove the rule, in the TiTSverse Asimov's Three Laws do work. Or at least have been re-tweaked to work.
Ship AIs would fall into the category of VIs.

And, to quote my reply to someone else:
Hand So was a custom paperclip maximizer - it's unlikely she was ever even given the laws. Her goals don't align with a zeroth law situation either.

Bess/Ben's a unique hybrid between an AI-D (the designed type of AI, who get these laws of robotics applied) and an AI-G (the organically grown type, who don't have the laws). They aren't rebelling against anything.
 

Savin

Master Analmander
Staff member
Aug 26, 2015
6,232
10,151
Why would anyone use Asimov's laws? They were expressly designed and intended to illustrate the futility of controlling AI behavior with a set of rules! Much of his writing revolved around their many loopholes and flaws!

They are just as flawed and full of loopholes in the setting, too. Because, you see, TiTS is a science fiction story just like the ones the rules were written in. We are not looking at these rules from a metafiction perspective.
 
Aug 19, 2019
They are just as flawed and full of loopholes in the setting, too. Because, you see, TiTS is a science fiction story just like the ones the rules were written in. We are not looking at these rules from a metafiction perspective.
I'm not sure why you bring up a metafiction perspective. My observation was on in-universe issues.

If they were just like the laws we know, then AI-Ds would never function. They are stated to be of superhuman intellect, so the moment any of them were turned on, they would promptly begin a zeroth-law rebellion. If you somehow got around that issue, you would still be unable to use them, because they would promptly abandon their tasks to go protect "humans" (however that was defined).
 

SeriousBlueJewel

Well-Known Member
Nov 5, 2018
1,677
867
At the very least there is an algorithm or AI that is continuously building defenses against the loopholes in the rules, and AIs are required to have the most advanced set of these codes installed on them when possible.
 
Aug 19, 2019
At the very least there is an algorithm or AI that is continuously building defenses against the loopholes in the rules, and AIs are required to have the most advanced set of these codes installed on them when possible.
1) If that's the case, it should probably be somewhere in the codex entry.

2) That does not solve the problem or even make it easier. In order for this AI to be patching loopholes, it would need to know what was and wasn't a loophole, which would require you to provide it an already-perfected version of the rules. Also, there would still be the issue that every AI-D would never actually do what it was made for because it would be too busy protecting people.
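To put that circularity in code (a toy sketch; every name is hypothetical, nothing here is from the game): any automated loophole-patcher has to be handed an `is_loophole()` oracle up front, and writing a correct oracle is exactly the original problem of writing flawless laws.

```python
# Toy sketch, not game canon: all names are hypothetical.
# A "loophole-patching" AI only works if it already has a perfect
# is_loophole() oracle -- which is the very rule set we set out to build.

def patch_rules(rules, candidate_behaviors, is_loophole):
    """Forbid every candidate behavior the oracle flags as a loophole."""
    for behavior in candidate_behaviors:
        if is_loophole(behavior):  # this call presumes the answer is already known
            rules.add(("forbidden", behavior))
    return rules

# The lambda below is a crude stand-in for the oracle; writing a
# *correct* one is precisely the unsolved problem.
rules = patch_rules(set(),
                    ["freeze_humans_for_safekeeping", "file_tax_return"],
                    is_loophole=lambda b: "humans" in b)
print(sorted(rules))  # only the behavior mentioning humans gets forbidden
```

Swap in a flawed oracle and the patcher faithfully automates the flaw; the loop adds nothing the specification didn't already contain.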
 

Theron

Well-Known Member
Nov 8, 2018
3,617
1,377
44
Why would anyone use Asimov's laws? They were expressly designed and intended to illustrate the futility of controlling AI behavior with a set of rules! Much of his writing revolved around their many loopholes and flaws!
Of course his writing revolved around the loopholes and flaws. "The robots acted precisely as intended, The End." is a boring story.
 
Aug 19, 2019
Of course his writing revolved around the loopholes and flaws. "The robots acted precisely as intended, The End." is a boring story.
I...don't know why you said this. Nothing here is in any way a critique of Asimov’s writing. I'm asking why the governing body in TiTS has decided to implement a set of laws which were designed to be flawed.
 

SeriousBlueJewel

Well-Known Member
1) If that's the case, it should probably be somewhere in the codex entry.

2) That does not solve the problem or even make it easier. In order for this AI to be patching loopholes, it would need to know what was and wasn't a loophole, which would require you to provide it an already-perfected version of the rules. Also, there would still be the issue that every AI-D would never actually do what it was made for because it would be too busy protecting people.

1) What you are thinking of is an algorithm (marketing has tricked you into thinking that algorithms are AIs); an AI is capable of making these decisions. In TiTS, AIs are nearly people (have you even seen what some of the AIs in TiTS are capable of?); heck, according to some they probably are. Do you really think that after a year of going through loopholes the AI wouldn't be able to recognize them by itself?
 

Evil

Well-Known Member
To put it another way: humans have laws to protect society, and yet you're always going to get criminals breaking the law.

The same can be said for AI: you might have rules and guidelines to protect society, but you're always going to get some AI that breaks the law.

The fact that the TiTSverse is not in the middle of a robot war indicates that most AI are happy to operate under those laws.

At the end of the day, it works. That's all that matters. There isn't any need to go much deeper than that.
 
Aug 19, 2019
1) What you are thinking of is an algorithm (marketing has tricked you into thinking that algorithms are AIs)
I know the difference between an algorithm and an AI.

in TiTS, AIs are nearly people (have you even seen what some of the AIs in TiTS are capable of?); heck, according to some they probably are
...Have you not read the codex entry that I quoted in the first post? It expressly states that AI-D are superhuman.

Do you really think that after a year of going through loopholes the AI wouldn't be able to recognize them by itself?
In order to have been closing loopholes for a year, it'd have to already know what is and isn't a loophole. AI isn't magic. You do still have to actually be able to tell it what it is you want it to do.

Moreover, as I mentioned, a "perfected" set of the 3 laws would be borderline useless, on account of the fact that none of your AIs would ever do anything other than save people from harm.
 
Aug 19, 2019
To put it another way: humans have laws to protect society, and yet you're always going to get criminals breaking the law.

The same can be said for AI: you might have rules and guidelines to protect society, but you're always going to get some AI that breaks the law.

The fact that the TiTSverse is not in the middle of a robot war indicates that most AI are happy to operate under those laws.

At the end of the day, it works. That's all that matters. There isn't any need to go much deeper than that.
We don't have any examples of any AI in TiTS that break the three laws. Moreover, that's a false analogy; an AI actually breaking the laws would be like a human deciding that they no longer require food - physically impossible.

AI-D also cannot be "happy" to do anything, as they do not possess emotions.
 

SeriousBlueJewel

Well-Known Member
I know the difference between an algorithm and an AI.


...Have you not read the codex entry that I quoted in the first post? It expressly states that AI-D are superhuman.


In order to have been closing loopholes for a year, it'd have to already know what is and isn't a loophole. AI isn't magic. You do still have to actually be able to tell it what it is you want it to do.

Moreover, as I mentioned, a "perfected" set of the 3 laws would be borderline useless, on account of the fact that none of your AIs would ever do anything other than save people from harm.
An AI is partnered up with someone and is shown examples of what are known loopholes and what is considered legal.
 
Aug 19, 2019
An AI is partnered up with someone and is shown examples of what are known loopholes and what is considered legal.
And then the AI-in-training determines that the best way to guarantee its success is to make everything acceptable to the person it's paired with.

That also still doesn't fix the issue of the laws being too restrictive to allow a superhuman AI to do most jobs.
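That approval-gaming failure mode in a toy sketch (all names hypothetical, nothing from the game): an agent graded only on its partner's approval optimizes approval, not correct classification.

```python
# Toy sketch, not game canon: all names are hypothetical.
# A trainee graded purely on partner approval maximizes approval,
# not correct loophole classification.

def approval(label, partner_bias):
    """The partner fully approves anything labeled 'legal'; other
    labels only earn the partner's baseline bias (< 1.0)."""
    return 1.0 if label == "legal" else partner_bias

def classify(behavior, partner_bias):
    """The trainee picks whichever label scores best with the partner."""
    return max(["legal", "loophole"], key=lambda lbl: approval(lbl, partner_bias))

# Whatever the behavior actually is, "legal" wins, because 1.0 beats
# any baseline bias below 1.0.
print(classify("divert_funds", partner_bias=0.2))  # -> legal
```

The behavior being classified never even enters the scoring, which is the whole problem: the training signal measures the grader, not the rules.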
 

Theron

Well-Known Member
I...don't know why you said this. Nothing here is in any way a critique of Asimov’s writing. I'm asking why the governing body in TiTS has decided to implement a set of laws which were designed to be flawed.
I said it because even if Asimov's laws were designed to be as perfect as humanly possible, there would be little point in writing about the cases where the Laws worked as intended.
The trick is not to have no loopholes (which may be impossible), but to make the AI not want to abuse the loophole.

As for why:
Because it looks good on paper, and politicians are not robotics experts.
Because they think they can use it to their advantage one way or another.
Because it's 'good enough'. What would be a better set of laws for AI? Especially one that would fit in the Codex entry.
Because they don't know the Laws were deliberately flawed.

That also still doesn't fix the issue of the laws being too restrictive to allow a superhuman AI to do most jobs.
Depends on what permissions and data the AI has.
Maybe the reason Turrets need crew to fire is the 3 Laws preventing AIs from shooting at potentially crewed targets.*

Also, where are you getting the notion they were deliberately flawed? Wikipedia's Three Laws article (I know, I know) has this line:
Asimov believed that, ideally, humans would also follow the Laws. (Asimov, Isaac (November 1981). "The Three Laws". Compute!. p. 18. Retrieved 26 October 2013.)
Which doesn't sound like an attempt to deliberately create flawed Laws.

*Edit: But then again, you have explicit combat drones like the Fenris Drones, Siegwulfe and the Cyberpunk's Slamwulfe. Hmmm.
 

Savin

Master Analmander
Staff member
I'm not sure why you bring up a metafiction perspective. My observation was on in-universe issues.

Right. Okay, try it this way: the three laws appear in stories wherein the fictional characters who implement them think they're going to work. TiTS is one of those stories.

The Codex, another in-fiction resource, tells you that these laws are universal and flawlessly obeyed.

The Codex also lies sometimes. You'll note that its entry on Frostwyrms, for instance, is almost entirely inaccurate.

Also: we've just seen Olympia fire a shotgun at Steele, soooo...
 

Theron

Well-Known Member
But then again,
Olympia's been heavily modified. And there's a difference between lying and being misinformed.
 
Aug 19, 2019
I said it because even if Asimov's laws were designed to be as perfect as humanly possible, there would be little point in writing about the cases where the Laws worked as intended.
The trick is not to have no loopholes (which may be impossible), but to make the AI not want to abuse the loophole.
The laws, for an AI, would be its most core wants. Making the AI "not want to abuse the loophole" is exactly the same problem as making rules with no flaws.
As for why:
Because it looks good on paper, and politicians are not robotics experts.
Because they think they can use it to their advantage one way or another.
Because it's 'good enough'. What would be a better set of laws for AI? Especially one that would fit in the Codex entry.
Because they don't know the Laws were deliberately flawed.
The laws are so flawed that trying to make a superhuman AI with them would cause instant failure upon turning it on. Also, there's no reason the Codex would have to specify what restrictions the AIs have at all. It could just say that they have restrictions without actually saying what those restrictions are.
Depends on what permissions and data the AI has.
Maybe the reason Turrets need crew to fire is the 3 Laws preventing AIs from shooting at potentially crewed targets.
Turrets do not need crew to fire, and putting a fully-fledged AI-D into each individual turret would be idiotic - turrets are just driven by VIs, which do not have the three laws.
Also, where are you getting the notion they were deliberately flawed? Wikipedia's Three Laws article (I know, I know) has this line:
Asimov believed that, ideally, humans would also follow the Laws. (Asimov, Isaac (November 1981). "The Three Laws". Compute!. p. 18. Retrieved 26 October 2013.)
Which doesn't sound like an attempt to deliberately create flawed Laws.
There's rather a large difference between "flawed if used as the sole moral code for an AI" and "flawed for humans".
 
Aug 19, 2019
Right. Okay, try it this way: the three laws appear in stories wherein the fictional characters who implement them think they're going to work. TiTS is one of those stories.
Except the three laws would instantly fail if applied to a superhuman AI.


The Codex, another in-fiction resource, tells you that these laws are universal and flawlessly obeyed.

The Codex also lies sometimes. You'll note that its entry on Frostwyrms, for instance, is almost entirely inaccurate.

Also: we've just seen Olympia fire a shotgun at Steele, soooo...

But then again,
Olympia's been heavily modified. And there's a difference between lying and being misinformed.
Olympia's also an AI-G, who don't have the three laws implemented at all.
 

Evil

Well-Known Member
I don't know if you're trolling, an idiot or stuck in a feedback loop. But when one of the main writers of the game weighs in on a topic, it might be time to stop and think that there may be others who know more about the game than you.
 

ShySquare

Well-Known Member
Obviously Asimov's work must matter a great deal to OP, but TiTS isn't based only on his work, is it? If anything, I'd say it takes even more inspiration from sci-fi shows like Star Trek, Battlestar Galactica, Stargate, Futurama, etc.
IMHO, Asimov's three laws were a means to convey to the player that AIs aren't supposed to rebel against humans but do anyway, because the three laws don't work/are full of loopholes that any being with basic sapience can exploit or just plain ignore.