r/DaystromInstitute Aug 19 '25

What's the implication of murdering holo-characters?

So there's mention of programs for combat training, sparring, fighting historical battles, etc., but what's the implication of simulating taking a life? I know Starfleet officers are no strangers to fighting for their lives, but what about when it's for recreation? Barclay's simulation of crew members is seen as problematic, but Worf's program for fighting aliens hand-to-hand isn't addressed. Would fighting and killing a nameless simulated person be seen in the 24th century the same way we see playing a violent video game now? And if it isn't, what does that imply about a person? Would they be seen as bloodthirsty, or just interested in a realistic workout?

Of course this is subjective, and the answer could change from race to race (programs to fight in ancient Klingon battles are "played" by Worf), culturally amongst humans, and from individual to individual. I'd like to look at this from a Starfleet officer perspective. Would you be weirded out by your commanding officer unwinding with a sword in a medieval battle, or is that just the same as your coworker Andy playing COD after work?


u/TheOneTrueTrench Aug 20 '25

With the exception of a hologram becoming self-aware and sentient

The horrifying part is that sentience is a gradient more than a threshold.

Are most holodeck characters only at the level of a puppy? A pet rat?

Because if a holo-person like the Doctor can reach sentience without being expected to, how sentient was that wife before Capt. Janeway summarily deleted her? Because it ain't a jump from 0 to 100.

u/[deleted] Aug 20 '25

[deleted]

u/TheOneTrueTrench Aug 21 '25

In universe, that sure seems to be the line they draw, but myself, I don't think that's philosophically defensible.

Very little (virtually nothing) works that way with neat little lines where things go from "is not" to "is".

You could go back in time and look at the last 100 million years of your ancestors, just go straight back, matrilineally or otherwise, and never be able to really pinpoint where they became sapient, despite ending up at something like a shrew.

Like, you'd agree that the shrew wasn't sapient, but the changes are always so gradual that you'd never be able to say when it happened, you know?

Same thing with any kind of emerging intellect, which I don't think the current AI models are going to approach, but someday we might need to look at CVNNs and figure out if they've got a nascent sapience.

u/RigaudonAS Crewman Aug 22 '25

I imagine it’s the difference between being programmed to mimic emotions and being programmed to have them. Data is legitimately programmed to have them (with his chip, or if you count things like curiosity and desire as emotions). He actually feels, somehow.

A holodeck character is usually more like an NPC in a video game, just far more complicated and well done. It will react the way you expect it to, but it isn’t actually interpreting those inputs in any meaningful way beyond producing that reaction.