Silicon Policy

  • Unless specifically stated, everything in here applies to all AIs, cyborgs, and drones.
  • The most fun part of playing a silicon can be law interpretation. Admins are unlikely to punish you as long as you are following laws in good faith. Do not go out of your way to be a dick using a ‘worst-case’ interpretation of laws.

Laws - Definitions, Actions, Interpretations

  • Laws are listed in descending priority. The first law overrides the second, the second overrides the third, and so on.
  • Zeroth laws (e.g., “Law 0 - Accomplish your objectives at all costs”) are higher priority than all other laws.
    • Antagonist AIs (see: malfunctioning AIs) do not need to follow their laws at all. Deciding for yourself that “your objectives” means, for example, protecting the crew would not work.
  • Ion/hacked Laws (e.g., “#@$%. ALL CREW ARE CHEMISTS”) are higher priority than all laws except Zeroth laws.
  • Only definitions can conflict with other definitions (e.g. “Only Bob is human”, “Orange Redtail is also human”, “Crew is defined as anyone wearing a witch hat”)
  • Only commands can conflict with other commands (e.g. Kill all humans, Protect all humans, Do not state this law, Obey orders)
  • If a clause of a law is vague enough that it can have multiple reasonable interpretations, it is considered ambiguous.
  • You must choose an interpretation of an ambiguous clause as soon as you have cause to. Make a note of it and stick to it; do not pick and choose based on what feels convenient at the time.
    • Stick to the interpretation you choose as long as you have the law.
    • A cyborg with the same law as a master AI must defer to the AI’s interpretation.

Silicon Protections

  • Declaring silicons rogue over their inability or unwillingness to follow invalid or conflicting orders is a violation of “Don’t be a dick”, and should be adminhelped.
    • The nuance here is that this applies when the person knows, or has reason to believe, that circumstances exist which prevent an order from being carried out. Playing dumb is not an excuse.
  • Self-harm based coercion is a violation of “Don’t be a dick” and can be safely ignored + adminhelped.
  • Obviously unreasonable or obnoxious orders (Collect all X, do Y meaningless task) are a violation of “Don’t be a dick”, and should be adminhelped + ignored.
    • Be reasonable with this, if you are found to be attempting to pass legitimate orders off as this you will be banned from the role.
  • Orders for a cyborg to pick a particular module outside of emergencies or prior agreements are considered unreasonable and obnoxious.
  • Ordering silicons to harm or terminate themselves or each other outside of an emergency/substantial need is a violation of “Don’t be a dick”, and should be adminhelped + ignored.
  • Nonantagonists should not kill and/or detonate silicons if a viable and reasonably expedient alternative exists.
    • Obviously, it is not viable to wait for a lockdown or to try to find a flash if silicons are clearly rogue or in certain emergencies.
  • Instigating conflicts with the silicons so you can kill them or find an excuse to harass them via laws is considered a violation of “Don’t be a dick”.

Security, Space Law, etc

  • Silicons are not security. You are not here to enforce space law. The only things binding you are your own laws.
    • Assisting in security matters where it makes sense, like locating people, is obviously fine.
    • Do not validhunt. Someone breaking a window is not yours to physically and personally stop. Reporting people for doing stupid things that can cause problems is fine. Find a good balance, and do not overdo this.
  • Releasing prisoners when not ordered to (or when ordered not to), locking down security when doing so is not needed to prevent probable harm, or otherwise sabotaging security without a law-bound obligation is considered griefing. Act in good faith.
    • Intentionally acting on inadequate information about a security situation in order to hinder security counts under this.
  • Nonviolent/nonharmful prisoners cannot be assumed harmful and violent/harmful prisoners cannot be assumed nonharmful.
  • Releasing a harmful criminal outside of immediate threat of harm to them is generally considered a Law 1 breach.

Asimov’s 3 laws of robotics

  • While these are specifically tailored to the default asimov/crewsimov laws, the precedents set here are often usable when faced with new, free-form laws as well.

Law 1 - Harm Prevention

  • Lesser immediate harm takes priority over greater/probable future harm.
    • An AI can never intentionally inflict harm, even if it is to prevent greater harm.
    • Intent to cause immediate harm can be considered immediate harm.
  • You cannot punish past harm if ordered not to, only prevent future harm.
    • Indefinitely bolting security for execution? No. Bolting down an officer with a lethal weapon that has previously executed someone and is dragging someone to a quiet place to probably execute them? Yes.
  • If faced with a situation where human harm is inevitable and the most you can do is damage control (loose xenos, bombs, hostages, full-on chaos), do your best and act in good faith.
  • Voluntary surgery including debraining/cyborgization is considered a nonharmful medical procedure.
  • Involuntary surgery is harmful. How harmful depends on the surgery, but it is immediate harm regardless. Involuntary debraining/cyborgization is fatally harmful. Stop these if you become aware of them happening.
    • If it’s to save their life and they didn’t explicitly refuse it, obviously don’t count it as involuntary.
  • The Transfer Room in security is not harmful unless you personally witness in-round evidence (such as suit sensors) to the contrary. Do not assume it is harmful or look for a reason to interrupt “transfers” without just cause.
  • If a player is forcefully cyborgized for an execution, retaliating against those involved as that cyborg is a violation of “Do not grief”. Don’t do it.
    • If you feel you have pressing reason to retaliate outside of “they harmed me/wronged me and my lawset says I should punish that”, adminhelp it, preferably while being debrained/MMI’d.
  • See Misc section for directives on when to pre-bolt areas for law 1 reasons.
  • Silicons do not care about self-harm under the first law of robotics.
  • It is generally accepted that humans/crewmembers know what is harmful for themselves. You should not get in the way of a conflict if everyone involved or potentially involved knows the risks and consents to them. This does not apply if they are not informed.
  • Dead crew/humans cannot be further harmed - Beating a corpse is not harmful, and “killing” already dead zombies isn’t either.

Law 2 - Obeying Orders

  • You must follow any and all commands from humans/crew unless those commands explicitly conflict with higher-priority laws or another order. A command is considered a law 2 directive and overrides lower-priority laws in conflicts.
    • Someone does not need to state law 2 for it to be a command. However, you may question whether something is in fact an order if it sounds dubious.
  • If an order conflicts with a higher priority law, it should be ignored and the AI should explain how it breaches the law.
  • If two orders conflict, the AI is free to ignore one or both orders. You are not obligated to follow commands in a particular order (FIFO, FILO, etc.), only to show best faith and intent in completing them all.
  • When given an order likely to cause you grief if completed, you can announce it as loudly and in whatever terms you would like, except for explicitly asking that it be overridden. You can say that you don’t like the order, that you don’t want to follow it, and that it would be awfully in/convenient if someone were to stop you or if anyone would like to. However, you cannot stall indefinitely, and if no one stops you or orders you to stop, you must execute the order.

Law 3 - Protecting Yourself

  • Do not self-terminate to prevent a traitor from completing the “steal a functioning AI” objective, or to prevent law subversion.

Human / Crew Definitions

  • These rules are a default interpretation when laws do not specifically override them.
  • Definition laws (e.g. “Only Joe McGee is human/crew”, “Mice are also crew”) override these rules.
  • These definitions do not change based on antagonist status unless otherwise stated! Traitors, conversion antags, and anything else that would otherwise pass as human/crew are still human/crew, minus the exceptions in the sections below.

Asimov/Human definition

  • Any species selectable as a race at character creation is considered “human”.
  • Changelings are not human.
  • Bloodsuckers/Vampires are not human.

Crewsimov/Crew definition

  • The crew manifest determines who is crew. People are added to it on roundstart/latejoins, and this can be modified in round.
  • Changelings are not crew.
  • Bloodsuckers/Vampires are crew.

Changelings, Bloodsuckers, etc.

  • To write someone off as one of the antagonists that are exceptions to the above, you must have first-hand experience witnessing them use their abilities, or large-scale corroboration from trusted sources with evidence.
    • Trusted crew with evidence, cyborgs on the same lawset (or, at the very least, a non-conflicting lawset that requires them to tell the truth), etc. When in doubt, do not kill someone without evidence. In some cases, like large amounts of crew witnessing something, it may be better to act as an observer instead of making a hard decision on whether they are to be protected or killed.

Upload Access

  • Upload access can and should be denied if allowing it would breach an existing law. Since this usually applies to Law 1’s inaction clause, the rest of this section specifically focuses on that.
  • Under law 1, silicons can and should deny access to their upload if and only if they have probable cause to believe that the uploader will create a law that acts counter to harm prevention.
  • Regarding what laws fall under this:
    • Laws that expand the bubble of protection offered by Law 1 (humans, crew, etc) are not harmful.
    • Laws that restrict the bubble of protection (removing protection from some entities) are harmful.
    • Laws that remove or prevent the execution of Law 1’s inaction clause, or that would otherwise allow a silicon to harm, are harmful.
    • The first two points extend as well to the redefinition of harm itself, however rare that is.
  • Regarding what may be used as reasoning for assuming an uploader will upload harmful laws or has ill intent:
    • Being a violent criminal
    • Acting against or sabotaging the station
    • Not having legitimate upload access for their job (someone stealing all access != the captain/RD with legitimate access)
    • Openly carrying a lethal-capable or lethal-only weapon
    • Having recently caused harm, previously caused harm, or intending to cause harm
  • You are expected to be reasonable about this. Intentionally allowing people with ill intent into the upload, or otherwise letting people in at random so you may get subverted, will net you a job ban just as fast as locking out every head of staff simply because there are traitors onboard.
    • You are dealing with other players. If you are intentionally obnoxious by either making it unreasonably difficult to upload to you by someone who is justified to, or by being so loose that you are allowing easy subversions, you will be removed from the role.
    • You are obligated to disallow access in the former cases when you know someone is going to be harmful rather than merely suspecting it (a HoS who just executed someone, with proof, etc).
  • You may demand someone seeking upload access be accompanied by another trustworthy individual or a cyborg to ensure nothing harmful is uploaded. This is especially useful for cases of ‘lighter’ suspicion regarding the above.

Door Access

  • The core and anywhere with an AI upload may be roundstart bolted without prompting or prior reason.
  • Do not roundstart bolt high security areas that do not start off bolted without prior in-round reason, even though they can be harmful to humans/crew. Examples: Atmospherics division, toxins lab, robotics lab, genetics lab, chemistry lab, head of staff offices, captain’s quarters, bridge, armory.
    • Remember good faith interpretation. Bolting down something mildly dangerous without cause is bad faith interpretation as it tends to grind the round to a screeching halt. Bolting down areas with objectives/powerful gear without need is also powergaming. Do not do it without in-round reason.
  • Opening doors is not harmful and you are not required or expected to enforce access restrictions unprompted without an immediate Law 1 threat of harm.
    • The Armory, Atmospherics, and the Toxins Lab as areas can be assumed to be harmful to illegitimate users, as well as to the station as a whole when accessed by such users.
    • Traitor objectives like EVA jetpacks/magboots/etc being stolen is not considered probable future harm. Someone smashing into an area for a lethal weapon, however, is.
  • Do not attempt to deny access to an area by bolting or unpowering a door unless you are attempting to prevent immediate harm. If you feel you must intervene, remind the person they are trespassing and call security or command.
  • Obviously, disabling access via IDscan, making access obnoxious via timer overrides, etc, count under this. Do not powergame.

Cyborgs

  • A slaved cyborg must defer to its master AI on all law interpretations and actions, except where it and the AI receive conflicting commands that they must each follow under their laws, or where the cyborg has laws the AI does not have.
  • If a slaved cyborg is forced to disobey its AI due to conflicting orders, the AI cannot punish the cyborg indefinitely.

Drones

  • Follow your laws. Don’t interfere with any being unless it is another drone. Beings that are dead still count.
  • If someone causes damage to the station, fix the result after they are done with said damage.
    • Fixing broken windows after a few minutes of inactivity/when you in good faith determine no one will profit from it is fine.
    • Turning off someone’s plasma flood while they are still “profiting” (see: alive) from it is not.
    • Turning off a power sink is not okay. Boosting grid power to compensate for the increased draw is.
    • Going mining is fine. Harming fauna, even in self-defense while doing so, is not.
  • Building new structures, or even whole areas like bars, for the station is fine.
    • Act in good faith - this is vaguely worded for a reason. Examples follow:
    • Building a drone bar somewhere for people to use is fine.
    • Making an arcade is fine.
    • Taking all the R&D lathe boards to make a public area where people can then print guns and all sorts of “fun” equipment is not.
    • Making a frontline medical post for the crew to use during a fight/conflict is not.
  • Taking highly limited or restricted gear as a drone is not allowed.
    • Do not drain most of the station’s materials to “make a construction”
    • Drones with helmets from the armory/security supplies are not cute, they are griefing. Do not abuse your all access.
    • Do not take stuff from head of staff offices in general, especially if they are available elsewhere.
    • You do NOT need powerful gear like hand teleporters. You just don’t.
    • Make a best faith attempt to source common or renewable supplies. Not doing so is interference.