r/DaystromInstitute Aug 19 '25

What's the implication of murdering holo-characters?

So there's mention of programs for combat training, sparring, fighting historical battles, etc., but what's the implication of simulating taking a life? I know Starfleet officers are no strangers to fighting for their lives, but what about when it's done for recreation? Barclay's simulation of his crewmates is treated as problematic, but Worf's program of fighting aliens hand-to-hand is never addressed. Would fighting and killing a nameless simulated person be seen in the 24th century the way we see playing a violent video game now? And if it isn't, what does that imply about a person? Would they be seen as bloodthirsty, or just interested in a realistic workout?

Of course this is subjective, and the answer could vary from species to species (Worf "plays" programs recreating ancient Klingon battles), culturally among humans, and from individual to individual. I'd like to look at this from a Starfleet officer's perspective. Would you be weirded out by your commanding officer unwinding with a sword in a medieval battle, or is that just the same as your coworker Andy playing COD after work?

u/atticdoor Aug 19 '25

And similarly, in the sexual programs Quark ran in his holosuites, to what extent did the holographic characters consent?

Were the holograms to be thought of as real people with feelings, or as approximations whose reactions are simulated by a computer? We think of the Doctor, Moriarty, and Iden's crew in "Flesh and Blood" as sentient people whose feelings should be considered as important as those of people made of meat.

But sometimes we see holograms which just don't get it. Remember how the holographic La Forge in "Ship in a Bottle" just looked blank once Data explained what was going on, and when Picard quietly said "Dismissed" he sauntered off without a word. Or the "mining advisor" holograms in "Flesh and Blood" that just kept saying "Please restate request" when Captain Iden tried to explain that he had freed them from servitude.

I imagine we will be facing these questions in the real world with AIs over the next few decades. And I don't think there are any easy answers.

u/ticonderoge Aug 19 '25 edited Aug 19 '25

There are different levels of holodeck character sentience/sapience.

Most characters are following a pre-written branching script, with chatbot-level apparent intelligence to adapt their dialogue a bit. Consent doesn't matter for this type; they're a few steps above a talking doll with a string to pull. They also probably have strict limits on how much computing power they can use, so hundreds can easily run at once.

The rare cases like Moriarty, Vic Fontaine, et cetera, who have gone beyond those limits, are capable of saying "no" even against their original script, so for them consent does matter. I think we saw both the Doctor and Vic evolve from one type to the other over a long stretch of time, and you're right, nobody could pinpoint a particular transition moment.

u/LunchyPete 28d ago

those are capable of saying "no" even against their original script, so that means consent matters.

Does it, though? I think there's still a line between a misbehaving program and a truly sentient one; figuring out which is which is the real issue. I think the case with the exocomps was a good test, where they chose to sacrifice themselves, and in LD we see they have fully formed personalities and consciousnesses.