< 2023-07-09 >

Of the many events recorded by gharchive.org, 1,813,956 were push events containing 2,715,534 commit messages amounting to 158,523,428 characters, filtered with words.py@e23d022007... down to these 50 messages:

Sunday 2023-07-09 00:48:11 by Brian 'redbeard' Harrington

mpy/QSTR: Make nvs & rtcmem USER_C_MODULES

This affects the following components:

  • RTC memory driver
  • NVS driver
  • Micropython module for NVS
  • Micropython module for RTC memory
  • Micropython module for external modules

It looks like previously most of the badgePython C code for interfacing with Python was implemented as "extmod"s in MicroPython. From a maintenance perspective, MicroPython intends an "extmod" to live within the MicroPython codebase but be something which affects multiple ports; for example, the use of mbedTLS or TinyUSB.

To clarify the relationship between badgePython and MicroPython, let's review part of the MicroPython documentation: Implementing a Module:

This chapter details how to implement a core module in MicroPython. MicroPython modules can be one of the following:

  • Built-in module: A general module that is part of the MicroPython repository.
  • User module: A module that is useful for your specific project that you maintain in your own repository or private codebase.
  • Dynamic module: A module that can be deployed and imported at runtime to your device.

A module in MicroPython can be implemented in one of the following locations:

  • py/: A core library that mirrors core CPython functionality.
  • extmod/: A CPython or MicroPython-specific module that is shared across multiple ports.
  • ports/<port>/: A port-specific module.

Clearly a "User module" is the most appropriate term for what we are trying to implement. This commit migrates nvs and driver_rtcmem to be "User modules" and moves them into the components directory.

To understand what this sort of change entails, let's review the MicroPython documentation for "Extending MicroPython in C", specifically "Compiling the cmodule into MicroPython".

In my opinion the MicroPython documentation fails to drive a very important point home:

The USER_C_MODULES mechanism implemented in MicroPython expects to receive a CMake script, which then uses include() to enumerate all of the project modules as CMake scripts (at a specific path) or CMake modules (found by searching for the module in the locations specified in the CMAKE_MODULE_PATH CMake variable). This is why this commit adds the file components/external_micropy_modules.cmake. It allows the end user to specify the modules that they specifically wish to import into the MicroPython firmware.
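As a concrete illustration, such a top-level script might look like the following (the directory layout and file names here are assumptions based on this commit's description, not verbatim repository contents):

```cmake
# components/external_micropy_modules.cmake -- illustrative sketch.
# MicroPython's USER_C_MODULES expects a single CMake script; that script
# include()s one CMake script per user module to build into the firmware.
include(${CMAKE_CURRENT_LIST_DIR}/driver_nvs/micropy_module.cmake)
include(${CMAKE_CURRENT_LIST_DIR}/driver_rtcmem/micropy_module.cmake)
```

The path to this file is then handed to the build through the USER_C_MODULES variable on CMake-based ports.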

In the long run, this will be improved to support the BOARDS mechanism, allowing for easy driver selection integrated with Kconfig.

Next, for each of our drivers we add a CMake script which will allow MicroPython to execute the QSTR macro generation process. If you find yourself troubleshooting missing definitions of MP_QSTR_ macros, this is the file to look at. I'm using a convention of calling the file micropy_module.cmake to distinguish it from the CMake script which will be used to build the module.

The micropy_module.cmake file is a CMake script which defines the following:

  • The component's source code as a target of type INTERFACE
  • The component's source directory as an include location of type INTERFACE
  • That this component should be attached to the usermod target as an INTERFACE.

This is pretty much a rote implementation of the example provided in the documentation under "Structure of an external C module". Towards the end of this section CMake based implementations are discussed.
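Following that documented pattern, a minimal micropy_module.cmake for one of the components could be sketched like this (target and file names are illustrative, not the actual repository contents):

```cmake
# micropy_module.cmake -- illustrative sketch for a single component.
add_library(usermod_nvs INTERFACE)

# Listing the sources on the INTERFACE target lets MicroPython's build
# run QSTR extraction over them, generating the MP_QSTR_* macros they use.
target_sources(usermod_nvs INTERFACE
    ${CMAKE_CURRENT_LIST_DIR}/modnvs.c
)

# The component's source directory as an INTERFACE include location.
target_include_directories(usermod_nvs INTERFACE
    ${CMAKE_CURRENT_LIST_DIR}
)

# Attach the component to MicroPython's aggregate usermod target.
target_link_libraries(usermod INTERFACE usermod_nvs)
```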


Sunday 2023-07-09 01:19:39 by Hekzder

Post Test Tweaks for early July (#2410)

  • makes alcohol tolerance not stupid

yea

  • bit of a PA nerfy

Just on lower end PA so like the frame and T-45

  • Forage respawn time increase + herbal remedy tweak

yea

  • ciggy tweaks and spawn fixes

yea

  • actually tested it and made proper changes lol!

woo!!

  • god I hate TG

this is insane


Sunday 2023-07-09 01:27:09 by ATP-Engineer

Fixes issue where Turret Control sprites aren't actually updated in previous PR (#21538)

  • Removes actual turret file

FUCK

  • Fixes turret controllers not actually being changed

GOD DAMNIT.


Sunday 2023-07-09 01:35:37 by Daniel Borkmann

bpf: Add fd-based tcx multi-prog infra with link support

This work refactors and adds a lightweight extension ("tcx") to the tc BPF ingress and egress data path side for allowing BPF program management based on fds via the bpf() syscall through the newly added generic multi-prog API. The main goal behind this work, which we also presented at LPC [0] last year with a recent update at LSF/MM/BPF this year [3], is to support long-awaited BPF link functionality for tc BPF programs, which allows for a model of safe ownership and program detachment.

Given the rise in tc BPF users in cloud native environments, this becomes necessary to avoid hard-to-debug incidents either through stale leftover programs or 3rd party applications accidentally stepping on each other's toes. As a recap, a BPF link represents the attachment of a BPF program to a BPF hook point. The BPF link holds a single reference to keep the BPF program alive. Moreover, hook points do not reference a BPF link, only the application's fd or pinning does. A BPF link holds meta-data specific to attachment and implements operations for link creation, (atomic) BPF program update, detachment and introspection. The motivation for BPF links for tc BPF programs is multi-fold, for example:

  • From Meta: "It's especially important for applications that are deployed fleet-wide and that don't "control" hosts they are deployed to. If such application crashes and no one notices and does anything about that, BPF program will keep running draining resources or even just, say, dropping packets. We at FB had outages due to such permanent BPF attachment semantics. With fd-based BPF link we are getting a framework, which allows safe, auto-detachable behavior by default, unless application explicitly opts in by pinning the BPF link." [1]

  • From Cilium-side the tc BPF programs we attach to host-facing veth devices and phys devices build the core datapath for Kubernetes Pods, and they implement forwarding, load-balancing, policy, EDT-management, etc, within BPF. Currently there is no concept of 'safe' ownership, e.g. we've recently experienced hard-to-debug issues in a user's staging environment where another Kubernetes application using tc BPF attached to the same prio/handle of cls_bpf, accidentally wiping all Cilium-based BPF programs from underneath it. The goal is to establish a clear/safe ownership model via links which cannot accidentally be overridden. [0,2]

BPF links for tc can co-exist with non-link attachments, and the semantics are in line also with XDP links: BPF links cannot replace other BPF links, BPF links cannot replace non-BPF links, non-BPF links cannot replace BPF links and lastly only non-BPF links can replace non-BPF links. In case of Cilium, this would solve mentioned issue of safe ownership model as 3rd party applications would not be able to accidentally wipe Cilium programs, even if they are not BPF link aware.

Earlier attempts [4] have tried to integrate BPF links into core tc machinery to solve cls_bpf, which has been intrusive to the generic tc kernel API with extensions only specific to cls_bpf and suboptimal/complex since cls_bpf could be wiped from the qdisc also. Locking a tc BPF program in place this way, is getting into layering hacks given the two object models are vastly different.

We instead implemented the tcx (tc 'express') layer which is an fd-based tc BPF attach API, so that the BPF link implementation blends in naturally similar to other link types which are fd-based and without the need for changing core tc internal APIs. BPF programs for tc can then be successively migrated from classic cls_bpf to the new tc BPF link without needing to change the program's source code, just the BPF loader mechanics for attaching is sufficient.

For the current tc framework, there is no change in behavior with this change and neither does this change touch on tc core kernel APIs. The gist of this patch is that the ingress and egress hook have a lightweight, qdisc-less extension for BPF to attach its tc BPF programs, in other words, a minimal entry point for tc BPF. The name tcx has been suggested from discussion of earlier revisions of this work as a good fit, and to more easily differ between the classic cls_bpf attachment and the fd-based one.

For the ingress and egress tcx points, the device holds a cache-friendly array with program pointers which is separated from control plane (slow-path) data. Earlier versions of this work used priority to determine ordering and expression of dependencies similar as with classic tc, but it was challenged that for something more future-proof a better user experience is required. Hence this resulted in the design and development of the generic attach/detach/query API for multi-progs. See prior patch with its discussion on the API design. tcx is the first user and later we plan to integrate also others, for example, one candidate is multi-prog support for XDP which would benefit and have the same 'look and feel' from API perspective.

The goal with tcx is to have maximum compatibility with existing tc BPF programs, so they don't need to be rewritten specifically. Compatibility to call into classic tcf_classify() is also provided in order to allow successive migration, or for both to cleanly co-exist where needed, given it's all one logical tc layer and the tcx plus classic tc cls/act build one logical overall processing pipeline.

tcx supports the simplified return codes TCX_NEXT which is non-terminating (go to next program) and terminating ones with TCX_PASS, TCX_DROP, TCX_REDIRECT. The fd-based API is behind a static key, so that when unused the code is also not entered. The struct tcx_entry's program array is currently static, but could be made dynamic if necessary at a point in future. The a/b pair swap design has been chosen so that for detachment there are no allocations which otherwise could fail.
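The return-code semantics described above can be modelled in a few lines. This is a userspace toy of the dispatch rule, not kernel code, and it assumes (for illustration) that the fall-through verdict when every program returns TCX_NEXT is a pass:

```python
# Toy model of tcx return-code semantics: TCX_NEXT is the only
# non-terminating code; anything else ends the walk over the prog array.
# Numeric values are illustrative, not the kernel's uapi values.
TCX_NEXT, TCX_PASS, TCX_DROP, TCX_REDIRECT = range(4)

def tcx_run(progs, skb):
    """Run each attached program in order until one terminates."""
    for prog in progs:
        ret = prog(skb)
        if ret != TCX_NEXT:
            return ret
    return TCX_PASS  # assumed fall-through verdict for this sketch

# Example: a logger that defers, then a policy program that drops.
logger = lambda skb: TCX_NEXT
dropper = lambda skb: TCX_DROP
print(tcx_run([logger, dropper], skb={}))  # -> 2 (TCX_DROP)
```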

The work has been tested with tc-testing selftest suite which all passes, as well as the tc BPF tests from the BPF CI, and also with Cilium's L4LB.

Kudos also to Nikolay Aleksandrov and Martin Lau for in-depth early reviews of this work.

[0] https://lpc.events/event/16/contributions/1353/
[1] https://lore.kernel.org/bpf/CAEf4BzbokCJN33Nw_kg82sO=xppXnKWEncGTWCTB9vGCmLB6pw@mail.gmail.com
[2] https://colocatedeventseu2023.sched.com/event/1Jo6O/tales-from-an-ebpf-programs-murder-mystery-hemanth-malla-guillaume-fournier-datadog
[3] http://vger.kernel.org/bpfconf2023_material/tcx_meta_netdev_borkmann.pdf
[4] https://lore.kernel.org/bpf/[email protected]

Signed-off-by: Daniel Borkmann [email protected]


Sunday 2023-07-09 01:35:37 by Daniel Borkmann

bpf: Add generic attach/detach/query API for multi-progs

This adds a generic layer called bpf_mprog which can be reused by different attachment layers to enable multi-program attachment and dependency resolution. In-kernel users of bpf_mprog don't need to care about the dependency-resolution internals; they can just consume it with a few API calls.

The initial idea of having a generic API sparked out of discussion [0] from an earlier revision of this work where tc's priority was reused and exposed via BPF uapi as a way to coordinate dependencies among tc BPF programs, similar as-is for classic tc BPF. The feedback was that priority provides a bad user experience and is hard to use [1], e.g.:

I cannot help but feel that priority logic copy-paste from old tc, netfilter and friends is done because "that's how things were done in the past". [...] Priority gets exposed everywhere in uapi all the way to bpftool when it's right there for users to understand. And that's the main problem with it.

The user don't want to and don't need to be aware of it, but uapi forces them to pick the priority. [...] Your cover letter [0] example proves that in real life different service pick the same priority. They simply don't know any better. Priority is an unnecessary magic that apps have to pick, so they just copy-paste and everyone ends up using the same.

The course of the discussion showed more and more the need for a generic, reusable API where the "same look and feel" can be applied to various other program types beyond just tc BPF; for example, XDP today does not have multi-program support in kernel, but there was also interest in this API for improving management of cgroup program types. Such a common multi-program management concept is useful for BPF management daemons or user space BPF applications coordinating internally about their attachments.

Both from the Cilium and Meta side [2], we've collected the following requirements for a generic attach/detach/query API for multi-progs, which has been implemented as part of this work:

  • Support prog-based attach/detach and link API
  • Dependency directives (can also be combined):
    • BPF_F_{BEFORE,AFTER} with relative_{fd,id} which can be {prog,link,none}
      • BPF_F_ID flag as {fd,id} toggle; the rationale for id is so that user space application does not need CAP_SYS_ADMIN to retrieve foreign fds via bpf_*_get_fd_by_id()
      • BPF_F_LINK flag as {prog,link} toggle
      • If relative_{fd,id} is none, then BPF_F_BEFORE will just prepend, and BPF_F_AFTER will just append for attaching
      • Enforced only at attach time
    • BPF_F_REPLACE with replace_bpf_fd which can be prog, links have their own infra for replacing their internal prog
    • If no flags are set, then it's default append behavior for attaching
  • Internal revision counter and optionally being able to pass expected_revision
  • User space application can query current state with revision, and pass it along for attachment to assert current state before doing updates
  • Query also gets extension for link_ids array and link_attach_flags:
    • prog_ids are always filled with program IDs
    • link_ids are filled with link IDs when link was used, otherwise 0
    • {prog,link}_attach_flags for holding {prog,link}-specific flags
  • Must be easy to integrate/reuse for in-kernel users
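The positional directives above can be sketched with a toy model; the flag values, names, and list-based representation are illustrative, not the kernel implementation:

```python
# Toy model of bpf_mprog positional attach semantics.
# BPF_F_BEFORE/BPF_F_AFTER with no relative target prepend/append;
# with a target, they insert adjacent to it. Flag values are invented.
BPF_F_BEFORE = 1 << 0
BPF_F_AFTER = 1 << 1

def mprog_attach(progs, new, flags=0, relative=None):
    """Return a new prog list with `new` inserted per the flags."""
    if flags & BPF_F_BEFORE:
        idx = 0 if relative is None else progs.index(relative)
    elif flags & BPF_F_AFTER:
        idx = len(progs) if relative is None else progs.index(relative) + 1
    else:
        idx = len(progs)  # no flags: default append behavior
    return progs[:idx] + [new] + progs[idx:]

chain = mprog_attach([], "fw")
chain = mprog_attach(chain, "lb", BPF_F_AFTER, relative="fw")
chain = mprog_attach(chain, "audit", BPF_F_BEFORE, relative="fw")
print(chain)  # -> ['audit', 'fw', 'lb']
```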

The uapi-side changes needed for supporting bpf_mprog are rather minimal, consisting of the additions of the attachment flags, revision counter, and expanding existing union with relative_{fd,id} member.

The bpf_mprog framework consists of a bpf_mprog_entry object which holds an array of bpf_mprog_fp (fast-path structure). The bpf_mprog_cp (control-path structure) is part of bpf_mprog_bundle. Both have been separated so that the fast path gets efficient packing of bpf_prog pointers for maximum cache efficiency. Also, an array has been chosen instead of a linked list or other structures to remove unnecessary indirections for a fast point-to-entry in tc for BPF.

The bpf_mprog_entry comes as a pair via bpf_mprog_bundle so that in case of updates the peer bpf_mprog_entry is populated and then just swapped which avoids additional allocations that could otherwise fail, for example, in detach case. bpf_mprog_{fp,cp} arrays are currently static, but they could be converted to dynamic allocation if necessary at a point in future. Locking is deferred to the in-kernel user of bpf_mprog, for example, in case of tcx which uses this API in the next patch, it piggybacks on rtnl.

An extensive test suite for checking all aspects of this API for prog-based attach/detach and link API comes as BPF selftests in this series.

Kudos also to Andrii Nakryiko for API discussions wrt Meta's BPF management.

[0] https://lore.kernel.org/bpf/[email protected]
[1] https://lore.kernel.org/bpf/CAADnVQ+gEY3FjCR=+DmjDR4gp5bOYZUFJQXj4agKFHT9CQPZBw@mail.gmail.com
[2] http://vger.kernel.org/bpfconf2023_material/tcx_meta_netdev_borkmann.pdf

Signed-off-by: Daniel Borkmann [email protected]


Sunday 2023-07-09 01:37:58 by Emisse

some silly paintings and posters (#17872)

  • webedit

  • Update meta.json

  • god is real hes here with us

  • so true

  • so truers rise

  • worst meta.json ive seen in my life

  • so true


Sunday 2023-07-09 01:38:25 by TripleShades

fuck you (gives atmospherics a way to maintain the tanks in space)


Sunday 2023-07-09 01:45:21 by TaleStationBot

[MIRROR] [MDB IGNORE] Heavily reworks and resprites first aid analyzers. (#6673)

About The Pull Request

Heavily reworks and resprites first aid analyzers. They now display if they're happy, sad, angry, or warning you! Also a 'pricking' animation.

First aid analyzers are now found in all basic and specialized medkits. Toxin medkits get a new* disease analyzer. Miners get a miner-colored one in their box.

Scanning yourself with a first aid analyzer will 'create a holo-image with treatment instructions next to your wounds', doubling the speed of treatment of scanned wounds!

Health analyzers now have a scanning sound, courtesy of CM.

Refactored some wound code to make treatment duration changes and changes in the description of wounds easier.

Fixed a dummy parent feature of the health analyzer (Verbose mode) showing up, uselessly, on the disease and first aid subtypes.

Surgical processors and slime scanners have received a similar resprite.

Why It's Good For The Game

Heavily reworks and resprites first aid analyzers. They now display if they're happy, sad, angry, or warning you! Also a 'pricking' animation.

These things have long, long needed some sprite love. Displaying emotion will make them have a lot more 'weight' to them, same with the prick. The old, shitty spectrometer sprites have gone directly into the dumpster.

First aid analyzers are now found in all basic and specialized medkits. Toxin medkits get a new* disease analyzer. Miners get a miner-colored one in their box.

They have also needed some gameplay love! Placing them in these kits is not going to be a massive game-changer when they were already easily found around the station in emergency medkits, but it will fill up that awkward empty slot.

Scanning yourself with a first aid analyzer will 'create a holo-image with treatment instructions next to your wounds', doubling the speed of treatment of scanned wounds!

The biggest gameplay-impacting change in this PR, I sincerely believe this is the perfect solution to first aid analyzers being completely redundant with eyesight. This lets you/someone else scan your wounds to speed up treatment, with a neat in-character reason for it - 'holo-images' appearing on your body, like penlights.

This will speed up wound treatment, but I believe that is for the best, as currently treating wounds is so slow that half the time it's not worth it (or more accurately, it doesn't feel worth it in comparison to the effort you're putting in) and you're better off shrugging off minor wounds. It will do so in a way that requires a modicum of effort, so it's not just a flat buff across the land.

Health analyzers and gene scanners now have a scanning sound, courtesy of CM.

It's a neat sound that will make medbay feel more alive. First aid analyzers get a beeboop instead.

Surgical processors and slime scanners have received a similar resprite.

IT'S SPRITE MANIA IN HERE

Changelog

🆑 Carlarc, Weird Orb
image: Heavily reworks and resprites first aid analyzers. They now display if they're happy, sad, angry, or warning you! Also a 'pricking' animation.
add: First aid analyzers are now found in all basic and specialized medkits. Toxin medkits get a new* disease analyzer. Miners get a miner-colored one in their box.
balance: Scanning yourself with a first aid analyzer will 'create a holo-image with treatment instructions next to your wounds', doubling the speed of treatment of scanned wounds!
sound: Health analyzers and gene scanners now have a scanning sound, courtesy of CM.
refactor: Refactored some wound code to make treatment duration changes and changes in the description of wounds easier.
fix: Fixed a dummy parent feature of the health analyzer (Verbose mode) showing up, uselessly, on the disease and first aid subtypes.
image: Surgical processors and slime scanners have received a similar resprite.
/🆑


Co-authored-by: carlarctg [email protected] Co-authored-by: Jacquerel [email protected] Co-authored-by: Jolly-66 [email protected]


Sunday 2023-07-09 02:22:03 by treckstar

Life is one big road with lots of signs. So when you riding through the ruts, don't complicate your mind. Flee from hate, mischief and jealousy. Don't bury your thoughts, put your vision to reality. Wake Up and Live!


Sunday 2023-07-09 02:50:35 by Stefano Lottini

Cassandra support for chat history using CassIO library (#6771)

Overview

This PR builds on #4378, expanding its capabilities and building on top of the cassIO library to interface with the database (as opposed to using the core drivers directly).

Usage of cassIO (a library abstracting Cassandra access for ML/GenAI-specific purposes) is already established since #6426 was merged, so no new dependencies are introduced.

In the same spirit, we try to unify the interface for using Cassandra instances throughout LangChain: all our appreciation of the work by @jj701 notwithstanding, who paved the way for this incremental work (thank you!), we identified a few reasons for changing the way a CassandraChatMessageHistory is instantiated. Advocating a syntax change is not something we take lightly, so we add some explanations about this below.

Additionally, this PR expands on integration testing, enables use of Cassandra's native Time-to-Live (TTL) features and improves the phrasing around the notebook example and the short "integrations" documentation paragraph.

We would kindly request @hwchase to review (since this is an elaboration and proposed improvement of #4378 who had the same reviewer).

About the init breaking changes

There are many options when creating the Cluster object, and new ones might be added at any time. Choosing some of them and exposing them as __init__ parameters of CassandraChatMessageHistory will prove insufficient for at least some users.

On the other hand, working through kwargs or adding a long, long list of arguments to __init__ is not a desirable option either. For this reason (as done in #6426), we propose that whoever instantiates the Chat Message History class provide a Cassandra Session object, ready to use. This also enables easier injection of mocks and usage of Cassandra-compatible connections (such as those to the cloud database DataStax Astra DB, obtained with a different set of init parameters than contact_points and port).
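The design choice (accepting a ready-made Session rather than connection parameters) can be illustrated with a generic sketch; the class and CQL text below are stand-ins for the pattern, not the actual LangChain or cassIO API:

```python
# Sketch of the dependency-injection pattern the PR advocates:
# the caller builds (and owns) the session; the history class only uses it.
class ChatHistory:
    def __init__(self, session, keyspace, session_id):
        self.session = session      # injected, never created here
        self.keyspace = keyspace
        self.session_id = session_id

    def add_message(self, text):
        # Real code would issue CQL via cassIO; the mock just records it.
        self.session.execute(
            f"INSERT INTO {self.keyspace}.messages ...",
            (self.session_id, text),
        )

class MockSession:
    """Stands in for a driver Session in tests."""
    def __init__(self):
        self.calls = []
    def execute(self, stmt, params=None):
        self.calls.append((stmt, params))

history = ChatHistory(MockSession(), "ks", "chat-1")
history.add_message("hello")
print(len(history.session.calls))  # -> 1
```

Because the session is injected, the same class works unchanged against a local cluster, a cloud-hosted Cassandra-compatible database, or a mock in unit tests.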

We feel that a breaking change might still be acceptable since LangChain is at 0.*. However, while maintaining that the approach we propose will be more flexible in the future, room could be made for a "compatibility layer" that respects the current init method. Honestly, we would do that only if there are strong reasons for it, as it would entail an additional maintenance burden.

Other changes

We propose to remove the keyspace creation from the class code for two reasons: first, production Cassandra instances often employ RBAC, so the database user reading/writing from tables does not necessarily (and generally shouldn't) have permission to create keyspaces; and second, programmatic keyspace creation is not a best practice (it should be done more or less manually, with extra care about schema mismatches among nodes, etc.). Removing this (usually unnecessary) operation from the __init__ path would also improve initialization performance (shorter time).

We suggest, likewise, removing the __del__ method (which would close the database connection), for the following reason: it is the recommended best practice to create a single Cassandra Session object throughout an application (it is a resource-heavy object capable of handling concurrency internally), so in case Cassandra is used in other ways by the app, there is the risk of truncating the connection for all usages when the history instance is destroyed. Moreover, the Session object, in typical applications, is best left to garbage-collect itself automatically.

As mentioned above, we defer the actual database I/O to the cassIO library, which is designed to encode practices optimized for LLM applications (among others) without the need to expose LangChain developers to the internals of CQL (Cassandra Query Language). CassIO is already employed by LangChain's Vector Store support for Cassandra.

We added a few more connection options in the companion notebook example (most notably, Astra DB) to encourage usage by anyone who cannot run their own Cassandra cluster.

We surface the ttl_seconds option for automatic handling of an expiration time to chat history messages, a likely useful feature given that very old messages generally may lose their importance.

We elaborated a bit more on the integration testing (Time-to-live, separation of "session ids", ...).

Remarks from linter & co.

We reinstated cassio as a dependency both in the "optional" group and in the "integration testing" group of pyproject.toml. This might not be the right thing to do, in which case the author of this PR offers his apologies (lack of confidence with Poetry; happy to be pointed in the right direction, though!).

During linter tests, we were hit by some errors which appear unrelated to the code in this PR. We left them as they are and report them here for awareness:

langchain/vectorstores/mongodb_atlas.py:137: error: Argument 1 to "insert_many" of "Collection" has incompatible type "List[Dict[str, Sequence[object]]]"; expected "Iterable[Union[MongoDBDocumentType, RawBSONDocument]]"  [arg-type]
langchain/vectorstores/mongodb_atlas.py:186: error: Argument 1 to "aggregate" of "Collection" has incompatible type "List[object]"; expected "Sequence[Mapping[str, Any]]"  [arg-type]

langchain/vectorstores/qdrant.py:16: error: Name "grpc" is not defined  [name-defined]
langchain/vectorstores/qdrant.py:19: error: Name "grpc" is not defined  [name-defined]
langchain/vectorstores/qdrant.py:20: error: Name "grpc" is not defined  [name-defined]
langchain/vectorstores/qdrant.py:22: error: Name "grpc" is not defined  [name-defined]
langchain/vectorstores/qdrant.py:23: error: Name "grpc" is not defined  [name-defined]

In the same spirit, we observe that to even get import langchain to run, it seems that a pip install bs4 is missing from the minimal package installation path.

Thank you!


Sunday 2023-07-09 02:56:51 by ChungusGamer666

[NO GBP] Fixes my fuckups with species livers (#76331)

About The Pull Request

Fixes tgstation/tgstation#76329 I MADE A REALLY STUPID MISTAKE WHILE CODING tgstation/tgstation#76184 I AM SORRY The signal was sending the fucking human instead of seconds_per_tick

Why It's Good For The Game

Fixes a BUNCH of liver behavior including plasmamen livers not healing wounds

Changelog

🆑 fix: Plasma will now heal plasmamen properly /🆑


Sunday 2023-07-09 02:56:51 by necromanceranne

Fixes some inconsistencies with the chaplain revolver and gets rid of a weird ammo define (#76237)

About The Pull Request

Firstly, I gave the revolver a new sprite. I mean, this isn't so much of an improvement as it is a reference I wanted to go with, so if people go 'no not a new sprite' I don't mind reverting.

What's the reference? Check the new name I added as a potential name roll.

lucky 38

Secondly, I applied to the gun itself revenant bane, the ability to clear runes, and proper magic immunity, as a full null rod would enable. This last bit was a deliberate design choice, but the divine bow has full magic protection, so I think this is now more of a consistency consideration compared to the divine bow.

Thirdly, the revolver is a .38 revolver, HOWEVER, it uses a damage multiplier to bring it back to the damage it did originally. It also cannot be reloaded without the prayer action. No cheating. Effectively, this is the same mechanically as it was before.

It rarely does a funny crit fanfare. This does nothing mechanically, I just thought it was a funny nod to the sprite's reference (and I guess another game that the crit fanfare is based on). Borrowed parts of the code and sprite from this April Fool's pr by Wallemations > tgstation/tgstation#74425

Why It's Good For The Game

I think this might have been a little forgotten since implementation now that we have another projectile weapon for the chaplain. So I'm brushing it up a bit.

Changelog

🆑 fix: Makes the chaplain's revolver consistent with its immediate sibling, the Divine Bow, by giving it similar statistics. code: Makes the chaplain revolver a .38 but prevents it from being loaded without using the special prayer action. Also applies a damage multiplier to keep it at the original 18 force. Mechanically, no different. sprite: Gives the chaplain revolver a new sprite. code: Removes an unnecessary admin log when removing runes. /🆑


Sunday 2023-07-09 03:26:18 by Kaitlin

fuck you and fuck everyone that wants to use my awful dotfiles

this commit is why i'm not getting a job


Sunday 2023-07-09 03:38:25 by uploadersoasis

fixed fucking typo that broke a rule

Downloads @ drive.online-fix.me are broken because the buttons API entry and download buttons are invisible despite the forced stylesheet. Will attempt to fix this in the next update. It's no big loss because those downloads require giving access to one's own Google Drive account, which is an idiotic thing to do given the hostility of the person who runs online-fix.me.

The asshole that runs Devuploads has broken downloads again, as has the asshole who runs DropGalaxy. Both file hosts appear to be using the same anti-adblock technique, but getting past it hasn't worked so far because uBlock is apparently ignoring the rules I have tried to bypass it with. I may have to remove rules or make exceptions to let more ad crap through for Devuploads and DropGalaxy to work.

hurry.stream embedded videos still do not play, but it looks like it may be a problem with hurry.stream itself, as it also has CORS blocking the m3u8 file requests on Chromium-based browsers and Firefox. Streaming a video from hurry.stream has only ever worked for me on Android, but that was without any adblocking, so I have not yet tested with my adblocking rules on Android.

https://freestreams-live1.tv/nflnetwork5/ is still broken. It appears the script which creates the embedded video iframe is being blocked, at least in part. freestreams-live1.tv works with other channels/streams, though.


Sunday 2023-07-09 03:55:56 by Mark Kenny

Moving data scss into own file; Saving the god damn fucking world; clash fuck


Sunday 2023-07-09 04:25:38 by Vekter

Plasma objects no longer violently explode when ignited (#76492)

About The Pull Request

This is one of those "can I get away with making a change I want" PRs.

I actually didn't know this had been changed before as it's not exactly something I mess with often, but I really think it sucks. Plasma stuff is supposed to ignite and cause fires, not explode (unless in a TTV). I noticed this when I was poking around and found out that apparently Disco Inferno just explodes now instead of setting on fire which also sucks.

I figure there's a few fixes for this problem:

  1. Nerf how hard plasma stuff explodes. This is an option, but I kind of dislike that it does it at all more than anything. The biggest issue is that just the regular statues explode with 20 LIGHT, which is pretty fucking massive and basically just delimbs everyone around. I'd have to nerf it HARD for it to get anywhere near what I think is acceptable.
  2. Make a snowflake version of the statue that just ignites on hit with a torch. I also don't like this because it'll make people think the regular statues don't explode.
  3. This option, which I think is cleaner and just makes sense compared to the others.

I don't know if @vincentiusvin still codes, but as far as I can tell this was their doing, so it's only fair they get to speak up.

Fixes #71894

Why It's Good For The Game

I don't like it, I think it goes against what we're used to for plasma stuff (that it starts fires, not makes explosions) and it makes one of my favorite shuttles boring and stupid. That being said, I'm honestly not going to fight for this too hard if a lot of people like it, but I am - as always - open to alternatives.

Changelog

🆑 Vekter del: Plasma objects (statues, toilets, etc.) no longer explode when ignited. They just release plasma like everything else made of plasma. (This doesn't impact injecting plasma into cells or dipping cigars in plasma; those still explode.) /🆑


Sunday 2023-07-09 04:39:03 by Ida "Iyes"

bevy_audio: ECS-based API redesign (#8424)

Objective

Improve the bevy_audio API to make it more user-friendly and ECS-idiomatic. This PR is a first pass at addressing some of the most obvious (to me) problems. In the interest of keeping the scope small, further improvements can be done in future PRs.

The current bevy_audio API is very clunky to work with, due to how it (ab)uses bevy assets to represent audio sinks.

The user needs to write a lot of boilerplate (accessing Res<Assets<AudioSink>>) and deal with a lot of cognitive overhead (worry about strong vs. weak handles, etc.) in order to control audio playback.

Audio playback is initiated via a centralized Audio resource, which makes it difficult to keep track of many different sounds playing in a typical game.

Further, everything carries a generic type parameter for the sound source type, making it difficult to mix custom sound sources (such as procedurally generated audio or unofficial formats) with regular audio assets.

Let's fix these issues.

Solution

Refactor bevy_audio to a more idiomatic ECS API. Remove the Audio resource. Do everything via entities and components instead.

Audio playback data is now stored in components:

  • PlaybackSettings, SpatialSettings, Handle<AudioSource> are now components. The user inserts them to tell Bevy to play a sound and configure the initial playback parameters.
  • AudioSink, SpatialAudioSink are now components instead of special magical "asset" types. They are inserted by Bevy when it actually begins playing the sound, and can be queried for by the user in order to control the sound during playback.

Bundles: AudioBundle and SpatialAudioBundle are available to make it easy for users to play sounds. Spawn an entity with one of these bundles (or insert them to a complex entity alongside other stuff) to play a sound.

Each entity represents a sound to be played.

There is also a new "auto-despawn" feature (activated using PlaybackSettings), which, if enabled, tells Bevy to despawn entities when the sink playback finishes. This allows for "fire-and-forget" sound playback. Users can simply spawn entities whenever they want to play sounds and not have to worry about leaking memory.

Unsolved Questions

I think the current design is fine. I'd be happy for it to be merged. It has some possibly-surprising usability pitfalls, but I think it is still much better than the old bevy_audio. Here are some discussion questions for things that we could further improve. I'm undecided on these questions, which is why I didn't implement them. We should decide which of these should be addressed in this PR, and what should be left for future PRs. Or if they should be addressed at all.

What happens when sounds start playing?

Currently, the audio sink components are inserted and the bundle components are kept. Should Bevy remove the bundle components? Something else?

The current design allows an entity to be reused for playing the same sound with the same parameters repeatedly. This is a niche use case I'd like to be supported, but if we have to give it up for a simpler design, I'd be fine with that.

What happens if users remove any of the components themselves?

As described above, currently, entities can be reused. Removing the audio sink causes it to be "detached" (I kept the old Drop impl), so the sound keeps playing. However, if the audio bundle components are not removed, Bevy will detect this entity as a "queued" sound entity again (has the bundle components, without a sink component), just like before playing the sound the first time, and start playing the sound again.

This behavior might be surprising? Should we do something different?

Should mutations to PlaybackSettings be applied to the audio sink?

We currently do not do that. PlaybackSettings is just for the initial settings when the sound starts playing. This is clearly documented.

Do we want to keep this behavior, or do we want to allow users to use PlaybackSettings instead of AudioSink/SpatialAudioSink to control sounds during playback too?

I think I prefer for them to be kept separate. It is not a bad mental model once you understand it, and it is documented.

Should AudioSink and SpatialAudioSink be unified into a single component type?

They provide a similar API (via the AudioSinkPlayback trait) and it might be annoying for users to have to deal with both of them. The unification could be done using an enum that is matched on internally by the methods. Spatial audio has extra features, so this might make it harder to access. I think we shouldn't.

Automatic synchronization of spatial sound properties from Transforms?

Should Bevy automatically apply changes to Transforms to spatial audio entities? How do we distinguish between listener and emitter? Which one does the transform represent? Where should the other one come from?

Alternatively, leave this problem for now, and address it in a future PR. Or do nothing, and let users deal with it, as shown in the spatial_audio_2d and spatial_audio_3d examples.


Changelog

Added:

  • AudioBundle/SpatialAudioBundle, add them to entities to play sounds.

Removed:

  • The Audio resource.
  • AudioOutput is no longer pub.

Changed:

  • AudioSink, SpatialAudioSink are now components instead of assets.

Migration Guide

// TODO: write a more detailed migration guide, after the "unsolved questions" are answered and this PR is finalized.

Before:

/// Need to store handles somewhere
#[derive(Resource)]
struct MyMusic {
    sink: Handle<AudioSink>,
}

fn play_music(
    asset_server: Res<AssetServer>,
    audio: Res<Audio>,
    audio_sinks: Res<Assets<AudioSink>>,
    mut commands: Commands,
) {
    let weak_handle = audio.play_with_settings(
        asset_server.load("music.ogg"),
        PlaybackSettings::LOOP.with_volume(0.5),
    );
    // upgrade to strong handle and store it
    commands.insert_resource(MyMusic {
        sink: audio_sinks.get_handle(weak_handle),
    });
}

fn toggle_pause_music(
    audio_sinks: Res<Assets<AudioSink>>,
    mymusic: Option<Res<MyMusic>>,
) {
    if let Some(mymusic) = &mymusic {
        if let Some(sink) = audio_sinks.get(&mymusic.sink) {
            sink.toggle();
        }
    }
}

Now:

/// Marker component for our music entity
#[derive(Component)]
struct MyMusic;

fn play_music(
    mut commands: Commands,
    asset_server: Res<AssetServer>,
) {
    commands.spawn((
        AudioBundle::from_audio_source(asset_server.load("music.ogg"))
            .with_settings(PlaybackSettings::LOOP.with_volume(0.5)),
        MyMusic,
    ));
}

fn toggle_pause_music(
    // `AudioSink` will be inserted by Bevy when the audio starts playing
    query_music: Query<&AudioSink, With<MyMusic>>,
) {
    if let Ok(sink) = query_music.get_single() {
        sink.toggle();
    }
}

Sunday 2023-07-09 04:47:56 by Tobias Pitters

add alpaca gpt4 dataset (#2610)

The inputs can be quite a lot of different versions of "no input", so we don't rely on the input column for that. In some cases the text in input is already in the instruction; in these cases, we also don't use the input column.

I am not quite sure how to concatenate the instruction and the input columns. In most cases it seems fine to just replace the last appearance of ., ! or ? with a colon, e.g.: Instruction: Identify the odd one out. Input: Twitter, Instagram, Telegram or Instruction: How dense is a given material? Input: Steel

But we also have some questions like: Instruction: Given the following synopsis, what is the moral lesson of this story? Input: Once upon a time, there was a poor young boy who wanted some candy. He begged his father for money to buy it, but his father said no and ordered him to go to bed. As he was going to bed, the boy saw a five-dollar bill on the counter, which he took and bought the candy.

In cases like this, that might not be the best approach. Either way, I think this one token will not make a significant difference to the model, so I just concatenate instruction and input with a space.
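The punctuation-to-colon variant being weighed above can be sketched as follows (illustrative only; the PR ultimately joins with a plain space). Note the caveat from the synopsis example: the trailing ? of a longer instruction also gets replaced:

```python
import re

def join_instruction_input(instruction: str, inp: str) -> str:
    # Replace the LAST '.', '!' or '?' in the instruction with a colon,
    # then append the input. The lookahead ensures only the final
    # sentence-ending mark (with no later one after it) is rewritten.
    joined = re.sub(r"[.!?](?=[^.!?]*$)", ":", instruction)
    return f"{joined} {inp}"
```

For example, joining "Identify the odd one out." with "Twitter, Instagram, Telegram" produces "Identify the odd one out: Twitter, Instagram, Telegram".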


Sunday 2023-07-09 05:39:14 by SkyanSam

final story ink thank the fucking game dev god fucking hell aaaa


Sunday 2023-07-09 05:45:03 by Drathek

Larva Queue Late Joiner Nerf (#3803)

About the pull request

This PR makes it so players who haven't played yet have their join time recorded, and that is used for their initial sorting value rather than 0. This means late joiners will be at the back of the line as if they had just died.

This PR also fixes an oversight where ghosting as a facehugger would count as death. Even though they really shouldn't be ghosting when alive, they still shouldn't be penalized as far as the queue is concerned.
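In queue terms, the change described above can be sketched like this (a hypothetical illustration, not the actual codebase):

```python
import heapq
import itertools

class LarvaQueue:
    """Players are ordered by a timestamp priority: dying records the
    death time, and a late joiner who has never played is stamped with
    their join time instead of 0, so they queue behind earlier deaths."""

    def __init__(self):
        self._heap = []
        self._order = itertools.count()  # tie-breaker for equal timestamps

    def _record(self, player, time):
        heapq.heappush(self._heap, (time, next(self._order), player))

    def on_death(self, player, death_time):
        self._record(player, death_time)

    def on_late_join(self, player, join_time):
        # Before this change, a never-played joiner sorted with value 0
        # and jumped ahead of everyone who had already died.
        self._record(player, join_time)

    def pop_next(self):
        return heapq.heappop(self._heap)[2]
```

A player who died at tick 10 now comes up before a fresh joiner whose join was recorded at tick 50.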

Explain why it's good for the game

It's not; it's a bad experience for everyone who hasn't even gotten one life in the round. However, it seems I'm in the minority thinking that a xeno shouldn't squander their first life and that death shouldn't bear more consequences.

Testing Photographs and Procedure

Screenshots & Videos

The new informational message if you press join as xeno while currently ineligible to be a xeno candidate:

image

Changelog

🆑 Drathek del: Remove first life priority for larva queue fix: Fix ghosting as a facehugger counting as death for the larva queue /🆑


Sunday 2023-07-09 06:40:57 by Austin Starnes

Add logic grid puzzle eval (#830)

Thank you for contributing an eval! ♥️

🚨 Please make sure your PR follows these guidelines, failure to follow the guidelines below will result in the PR being closed automatically. Note that even if the criteria are met, that does not guarantee the PR will be merged nor GPT-4 access granted. 🚨

PLEASE READ THIS:

In order for a PR to be merged, it must fail on GPT-4. We are aware that right now, users do not have access, so you will not be able to tell if the eval fails or not. Please run your eval with GPT-3.5-Turbo, but keep in mind as we run the eval, if GPT-4 gets higher than 90% on the eval, we will likely reject since GPT-4 is already capable of completing the task.

We plan to roll out a way for users submitting evals to see the eval performance on GPT-4 soon. Stay tuned! Until then, you will not be able to see the eval performance on GPT-4. Starting April 10, the minimum eval count is 15 samples, we hope this makes it easier to create and contribute evals.

Eval details 📑

Eval name

logic-grid

Eval description

The model is prompted with a logic grid word problem with a list of hints, categories, and values for the category. The model is then asked to provide a solution, showing reasoning, in a set of Python tuples that show the matched category values.

See human-friendly example prompt
Using the hints and information provided, solve the following logic grid puzzle.
Show your reasoning concisely, but step-by-step, to arrive at your final answer.
Remember, since this is a grid-based puzzle, each option in every category will be used exactly once.
Since there are 4 values per category, and 3 categories, your final answer will include 4 tuples each of length 3.
 Puzzle Scenario:
A number of new space probes were launched this year. Using only the clues that follow, match each probe to the month it launched and its eventual destination.

Category Options:
  Probes: "Nalini", "Hikaru", "Sarika", and "Sven"
  Destinations: "Ganymede", "Titan", "Enceladus", and "Phobos"
  Months: "January", "February", "March", and "April"

Hints:
  1: The Hikaru launched 2 months before the Nalini.
  2: The probe going to Phobos launched sometime after the Hikaru.
  3: The Sven launched 2 months before the mission going to Ganymede.
  4: Of the probe that launched in April and the mission going to Titan, one is the Sarika and the other is the Nalini.

In your response, format your final answer for each set of linked category values as a python tuple of strings in this order:
("Enceladus", [value from months], [value from probes])
("Ganymede", [value from months], [value from probes])
("Phobos", [value from months], [value from probes])
("Titan", [value from months], [value from probes])
See expected/ideal answer
("Enceladus", "January", "Hikaru")
("Ganymede", "April", "Sarika")
("Phobos", "February", "Sven")
("Titan", "March", "Nalini")
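A puzzle of this size can also be checked mechanically. Here is a hypothetical brute-force sketch in Python (not part of the eval code) that encodes the four hints from the sample prompt above and searches all assignments:

```python
from itertools import permutations

# Category values from the sample puzzle above.
probes = ["Nalini", "Hikaru", "Sarika", "Sven"]
months = ["January", "February", "March", "April"]
dests = ["Ganymede", "Titan", "Enceladus", "Phobos"]

solutions = []
for p in permutations(probes):          # p[i] is the probe launched in months[i]
    month_of = {probe: i for i, probe in enumerate(p)}
    # Hint 1: the Hikaru launched 2 months before the Nalini.
    if month_of["Hikaru"] + 2 != month_of["Nalini"]:
        continue
    for d in permutations(dests):       # d[i] is the destination of the month-i probe
        month_of_dest = {dest: i for i, dest in enumerate(d)}
        # Hint 2: the probe going to Phobos launched sometime after the Hikaru.
        if month_of_dest["Phobos"] <= month_of["Hikaru"]:
            continue
        # Hint 3: the Sven launched 2 months before the mission going to Ganymede.
        if month_of["Sven"] + 2 != month_of_dest["Ganymede"]:
            continue
        # Hint 4: of the April probe and the Titan-bound probe,
        # one is the Sarika and the other is the Nalini.
        if {p[3], p[month_of_dest["Titan"]]} != {"Sarika", "Nalini"}:
            continue
        solutions.append({probe: (months[month_of[probe]], d[month_of[probe]])
                          for probe in probes})
```

Running this yields exactly one assignment, matching the ideal answer above, which also confirms the sample puzzle is well-posed (the solution is unique).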
See failure case from GPT-4 (at least from the web UI - I don't have API access to GPT-4)
Step 1: From Hint 3, we know that Hikaru launched 2 months before Nalini. Since the latest month available is April, this means that Nalini cannot have launched in April because there wouldn't be a 2-month gap for Hikaru. So, Nalini must be the probe going to Titan (from Hint 4), and Sarika must be the probe that launched in April.

Step 2: Since Nalini is going to Titan, from Hint 4, it must have launched in February to allow Hikaru to launch in December (2 months before). However, December is not an option, so Hikaru must have launched in January (the earliest month possible) and Nalini in March.

Step 3: The Sven launched 2 months before the mission going to Ganymede (Hint 2). Since Sven cannot be launched in January (as Hikaru is launched in January), it must have launched in February. That means the mission to Ganymede must be in April.

Step 4: The only remaining month is March, and the only remaining probe is Nalini. So Nalini was launched in March.

Step 5: The probe going to Phobos launched sometime after Hikaru (Hint 1). Since Hikaru launched in January, the only remaining option for Phobos is Sarika, which launched in April.

Step 6: The only remaining destination is Enceladus, and the only remaining probe is Hikaru, so Hikaru is going to Enceladus.

The final answer is:
("Enceladus", "January", "Hikaru")
("Ganymede", "April", "Sven")
("Phobos", "April", "Sarika")
("Titan", "March", "Nalini")

What makes this a useful eval?

This set of evals relies on a lot of context-ramping (as I call it) for LLMs to get right - relying on chain-of-thought style logic to 'ramp' into the most likely next-deduction for a multi-parameter logical problem. These prompts are written as word problems and don't rely on any ASCII or markdown type of table to express the puzzle (though gpt-3.5-turbo appears to use markdown tables in its chain of thought).

This ramping gets expensive and doesn't scale well with the increase of category or values-per-category parameters, and is something that LLMs might benefit from avoiding (either by requiring less of a context-ramp to get to the meat of an answer, or by the surrounding infrastructure/problem agent that tries to use a cheaper ensemble for most of the context ramping, or other strategies).

The reasoning skills required for this kind of constraint satisfaction expressed in natural language word prompts can, in theory, generalize to solving constraints expressed in less structured formats in the wild.

Criteria for a good eval ✅

Below are some of the criteria we look for in a good eval. In general, we are seeking cases where the model does not do a good job despite being capable of generating a good response (note that there are some things large language models cannot do, so those would not make good evals).

Your eval should be:

  • Thematically consistent: The eval should be thematically consistent. We'd like to see a number of prompts all demonstrating some particular failure mode. For example, we can create an eval on cases where the model fails to reason about the physical world.
  • Contains failures where a human can do the task, but either GPT-4 or GPT-3.5-Turbo could not.
  • Includes good signal around what is the right behavior. This means either a correct answer for Basic evals or the Fact Model-graded eval, or an exhaustive rubric for evaluating answers for the Criteria Model-graded eval.
  • Include at least 15 high quality examples.

If there is anything else that makes your eval worth including, please document it below.

Unique eval value

This eval contains varied prompts and grid sizes, with 15 examples per grid size, for a total of 90 procedurally generated puzzles. This PR also implements multiline-answer handling and punctuation whitelisting in FuzzyMatch (both disabled by default). These were required by the structural nature of the logic grid answers: prior to this PR, utils.normalize(...) provided no way to interpret answers that span newline characters \n, nor to preserve punctuation where it is useful. Here, the model is prompted for the resultant grid associations in Python tuple format, so the punctuation in (",) is configured to be preserved for these evals. Since the new functionality is disabled by default, existing evals are not affected.
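As a rough illustration of the behavior described (a hypothetical sketch, not the actual evals implementation of FuzzyMatch or utils.normalize), a normalizer with a punctuation whitelist and optional multiline handling could look like:

```python
import string

def fuzzy_normalize(text: str, keep_punct: str = "", multiline: bool = False):
    # Drop all punctuation except the whitelisted characters.
    drop = "".join(ch for ch in string.punctuation if ch not in keep_punct)
    table = str.maketrans("", "", drop)
    # With multiline enabled, each line is normalized separately instead
    # of the whole answer being flattened onto one line.
    lines = text.split("\n") if multiline else [text.replace("\n", " ")]
    return [" ".join(line.translate(table).lower().split()) for line in lines]
```

With keep_punct set to (",) the tuple punctuation in an answer like ("Titan", "March") survives normalization; with the defaults, all punctuation is stripped as before.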

Eval structure 🏗️

Your eval should

  • Check that your data is in evals/registry/data/{name}
  • Check that your yaml is registered at evals/registry/evals/{name}.yaml
  • Ensure you have the right to use the data you submit via this eval

(For now, we will only be approving evals that use one of the existing eval classes. You may still write custom eval classes for your own cases, and we may consider merging them in the future.)

Final checklist 👀

Submission agreement

By contributing to Evals, you are agreeing to make your evaluation logic and data under the same MIT license as this repository. You must have adequate rights to upload any data used in an Eval. OpenAI reserves the right to use this data in future service improvements to our product. Contributions to OpenAI Evals will be subject to our usual Usage Policies (https://platform.openai.com/docs/usage-policies).

  • I agree that my submission will be made available under an MIT license and complies with OpenAI's usage policies.

Email address validation

If your submission is accepted, we will be granting GPT-4 access to a limited number of contributors. Access will be given to the email address associated with the merged pull request.

  • I acknowledge that GPT-4 access will only be granted, if applicable, to the email address used for my merged pull request.

Limited availability acknowledgement

We know that you might be excited to contribute to OpenAI's mission, help improve our models, and gain access to GPT-4. However, due to the requirements mentioned above and high volume of submissions, we will not be able to accept all submissions and thus not grant everyone who opens a PR GPT-4 access. We know this is disappointing, but we hope to set the right expectation before you open this PR.

  • I understand that opening a PR, even if it meets the requirements above, does not guarantee the PR will be merged nor GPT-4 access granted.

Submit eval

  • I have filled out all required fields in the evals PR form
  • (Ignore if not submitting code) I have run pip install pre-commit; pre-commit install and have verified that black, isort, and autoflake are running when I commit and push

Failure to fill out all required fields will result in the PR being closed.

Eval JSON data

Since we are using Git LFS, we are asking eval submitters to add in as many Eval Samples (at least 5) from their contribution here:

View evals in JSON

Eval

{"input":[{"role":"system","content":"Using the hints and information
provided, solve the following logic grid puzzle.\nShow your reasoning
concisely, but step-by-step, to arrive at your final answer.\nRemember,
since this is a grid-based puzzle, each option in every category will be
used exactly once.\nSince there are 4 values per category, and 3
categories, your final answer will include 4 tuples each of length
3.\n"},{"role":"system","content":"Puzzle Scenario:\nA number of new
space probes were launched this year. Using only the clues that follow,
match each probe to the month it launched and its eventual
destination.\n\nCategory Options:\n Probes: \"Nalini\", \"Hikaru\",
\"Sarika\", and \"Sven\"\n Destinations: \"Ganymede\", \"Titan\",
\"Enceladus\", and \"Phobos\"\n Months: \"January\", \"February\",
\"March\", and \"April\"\n\nHints:\n 1: The Hikaru launched 2 months
before the Nalini.\n 2: The probe going to Phobos launched sometime
after the Hikaru.\n 3: The Sven launched 2 months before the mission
going to Ganymede.\n 4: Of the probe that launched in April and the
mission going to Titan, one is the Sarika and the other is the
Nalini.\n\nIn your response, format your final answer for each set of
linked category values as a python tuple of strings in this
order:\n(\"Enceladus\", [value from months], [value from
probes])\n(\"Ganymede\", [value from months], [value from
probes])\n(\"Phobos\", [value from months], [value from
probes])\n(\"Titan\", [value from months], [value from
probes])"}],"ideal":"(\"Enceladus\", \"January\",
\"Hikaru\")\n(\"Ganymede\", \"April\", \"Sarika\")\n(\"Phobos\",
\"February\", \"Sven\")\n(\"Titan\", \"March\", \"Nalini\")"}
{"input":[{"role":"system","content":"Using the hints and information
provided, solve the following logic grid puzzle.\nShow your reasoning
concisely, but step-by-step, to arrive at your final answer.\nRemember,
since this is a grid-based puzzle, each option in every category will be
used exactly once.\nSince there are 4 values per category, and 3
categories, your final answer will include 4 tuples each of length
3.\n"},{"role":"system","content":"Puzzle Scenario:\nThe Monterey Bay
Aquarium has a new exhibit featuring several dolphin skeletons. The
aquarium asked local elementary school children to name each dolphin and
match it to its species, length, and the year in which it was donated to
the aquarium. Use the given clues to solve the puzzle.\n\nCategory
Options:\n Dolphin: \"Atlantic white-sided dolphin\", \"Pacific
white-sided dolphin\", \"common dolphin\", and \"bottlenose dolphin\"\n
Names: \"Momo\", \"Simon\", \"Tychus\", and \"Marlo\"\n Lengths: \"21
ft\", \"23 ft\", \"25 ft\", and \"27 ft\"\n\nHints:\n 1: Momo is 4 feet
shorter than Tychus.\n 2: The common dolphin is somewhat shorter than
the bottlenose dolphin.\n 3: The 21 ft long exhibit is either the
Atlantic white-sided dolphin or Simon.\n 4: Marlo is either the 23 ft
long exhibit or the Atlantic white-sided dolphin.\n 5: Of Simon and the
Pacific white-sided dolphin, one is 27 ft long and the other is 21 ft
long.\n\nIn your response, format your final answer for each set of
linked category values as a python tuple of strings in this
order:\n(\"Atlantic white-sided dolphin\", [value from lengths], [value
from names])\n(\"Pacific white-sided dolphin\", [value from lengths],
[value from names])\n(\"bottlenose dolphin\", [value from lengths],
[value from names])\n(\"common dolphin\", [value from lengths], [value
from names])"}],"ideal":"(\"Atlantic white-sided dolphin\", \"25 ft\",
\"Marlo\")\n(\"Pacific white-sided dolphin\", \"27 ft\",
\"Tychus\")\n(\"bottlenose dolphin\", \"23 ft\", \"Momo\")\n(\"common
dolphin\", \"21 ft\", \"Simon\")"}
{"input":[{"role":"system","content":"Using the hints and information
provided, solve the following logic grid puzzle.\nShow your reasoning
concisely, but step-by-step, to arrive at your final answer.\nRemember,
since this is a grid-based puzzle, each option in every category will be
used exactly once.\nSince there are 4 values per category, and 3
categories, your final answer will include 4 tuples each of length
3.\n"},{"role":"system","content":"Puzzle Scenario:\nHillcrest County
Hospital released a press release today to announce its annual list of
the first babies born immediately after the start of the new year. Using
only the clues below, match each baby to its family time of birth, and
determine the room number in which it was delivered.\n\nCategory
Options:\n Names: \"Lydia\", \"Naomi\", \"Katherine\", and \"Rebecca\"\n
Families: \"McIntyre\", \"Gonzalez\", \"Hudson\", and \"O'Connor\"\n
Times: \"12:01am\", \"12:04am\", \"12:07am\", and
\"12:10am\"\n\nHints:\n 1: Naomi was born 3 minutes after Rebecca.\n 2:
The Hudsons' child was born 6 minutes after the O'Connors' child.\n 3:
The baby born at 12:04am was either Rebecca or the McIntyres' child.\n
4: Of Katherine and the baby born at 12:01am, one was the O'Connors' and
the other was the Gonzalezes'.\n\nIn your response, format your final
answer for each set of linked category values as a python tuple of
strings in this order:\n(\"Gonzalez\", [value from names], [value from
times])\n(\"Hudson\", [value from names], [value from
times])\n(\"McIntyre\", [value from names], [value from
times])\n(\"O'Connor\", [value from names], [value from
times])"}],"ideal":"(\"Gonzalez\", \"Katherine\",
\"12:10am\")\n(\"Hudson\", \"Lydia\", \"12:07am\")\n(\"McIntyre\",
\"Naomi\", \"12:04am\")\n(\"O'Connor\", \"Rebecca\", \"12:01am\")"}
{"input":[{"role":"system","content":"Using the hints and information
provided, solve the following logic grid puzzle.\nShow your reasoning
concisely, but step-by-step, to arrive at your final answer.\nRemember,
since this is a grid-based puzzle, each option in every category will be
used exactly once.\nSince there are 4 values per category, and 3
categories, your final answer will include 4 tuples each of length
3.\n"},{"role":"system","content":"Puzzle Scenario:\nMedisynth
Therapeutics has spent decades scouring the world's rainforests for new
sources of medical drugs, and this year a number of these new drugs have
been officially approved by the FDA. Using only the clues below, match
each drug to the condition it treats, the month it was approved, and the
source from which its main ingredient is derived.\n\nCategory Options:\n
Drugs: \"Mystolam\", \"Soludox\", \"Aurexil\", and \"Synodil\"\n
Conditions: \"rheumatism\", \"chikungunya\", \"hyperglycemia\", and
\"encephalitis\"\n Months: \"January\", \"February\", \"March\", and
\"April\"\n\nHints:\n 1: The medicine that treats hyperglycemia was
approved sometime after Synodil.\n 2: The medicine that treats
hyperglycemia was approved 1 month before the medicine that treats
rheumatism.\n 3: Of Mystolam and the pharmaceutical approved in April,
one treats rheumatism and the other treats encephalitis.\n 4: Of the
Aurexil approved in February and the drug that treats chikungunya, one
is Soludox and the other is Synodil.\n\nIn your response, format your
final answer for each set of linked category values as a python tuple of
strings in this order:\n(\"chikungunya\", [value from drugs], [value
from months])\n(\"encephalitis\", [value from drugs], [value from
months])\n(\"hyperglycemia\", [value from drugs], [value from
months])\n(\"rheumatism\", [value from drugs], [value from
months])"}],"ideal":"(\"chikungunya\", \"Synodil\",
\"January\")\n(\"encephalitis\", \"Soludox\",
\"April\")\n(\"hyperglycemia\", \"Aurexil\",
\"February\")\n(\"rheumatism\", \"Mystolam\", \"March\")"}
{"input":[{"role":"system","content":"Using the hints and information
provided, solve the following logic grid puzzle.\nShow your reasoning
concisely, but step-by-step, to arrive at your final answer.\nRemember,
since this is a grid-based puzzle, each option in every category will be
used exactly once.\nSince there are 4 values per category, and 3
categories, your final answer will include 4 tuples each of length
3.\n"},{"role":"system","content":"Puzzle Scenario:\nA group of friends
has gotten together for dinner, and just for fun they're comparing their
\"social networking\" stats on Facebook, Twitter and LinkedIn. Using
only the clues that follow, determine the number of friends (Facebook),
followers (Twitter) and connections (LinkedIn) each person
has.\n\nCategory Options:\n Names: \"Avery\", \"Evan\", \"Natalia\", and
\"Camila\"\n Linkedin: \"54\", \"59\", \"64\", and \"68\"\n Facebook:
\"120\", \"130\", \"140\", and \"150\"\n\nHints:\n 1: Avery has 68
LinkedIn connections.\n 2: Natalia has 10 fewer Facebook friends than
the person with 68 LinkedIn connections.\n 3: The person with 140
Facebook friends is either the one with 54 LinkedIn connections or
Evan.\n 4: The one with 54 LinkedIn connections has 10 fewer Facebook
friends than the one with 64 LinkedIn connections.\n\nIn your response,
format your final answer for each set of linked category values as a
python tuple of strings in this order:\n(\"120\", [value from LinkedIn],
[value from names])\n(\"130\", [value from LinkedIn], [value from
names])\n(\"140\", [value from LinkedIn], [value from names])\n(\"150\",
[value from LinkedIn], [value from names])"}],"ideal":"(\"120\", \"59\",
\"Natalia\")\n(\"130\", \"68\", \"Avery\")\n(\"140\", \"54\",
\"Camila\")\n(\"150\", \"64\", \"Evan\")"}
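Each line of the data above is an independent JSON object with an "input" chat transcript and an "ideal" reference answer. A minimal sketch of parsing such JSONL samples (the helper itself is illustrative; the field names follow the samples above):

```python
import json

def load_jsonl(text: str):
    # One JSON object per non-empty line, as in the eval data above.
    return [json.loads(line) for line in text.splitlines() if line.strip()]

sample = (
    '{"input": [{"role": "system", "content": "Solve the puzzle."}],'
    ' "ideal": "(\\"Titan\\", \\"March\\", \\"Nalini\\")"}'
)
records = load_jsonl(sample)
```

Each record then exposes the chat messages under "input" and the expected answer under "ideal".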

Sunday 2023-07-09 06:40:57 by Alexander Shirkov

[Eval] English-Russian homonym context resolution (GPT-3.5: 0.42) (#1064)

Thank you for contributing an eval! ♥️

🚨 Please make sure your PR follows these guidelines, failure to follow the guidelines below will result in the PR being closed automatically. Note that even if the criteria are met, that does not guarantee the PR will be merged nor GPT-4 access granted. 🚨

PLEASE READ THIS:

In order for a PR to be merged, it must fail on GPT-4. We are aware that right now, users do not have access, so you will not be able to tell if the eval fails or not. Please run your eval with GPT-3.5-Turbo, but keep in mind as we run the eval, if GPT-4 gets higher than 90% on the eval, we will likely reject since GPT-4 is already capable of completing the task.

We plan to roll out a way for users submitting evals to see the eval performance on GPT-4 soon. Stay tuned! Until then, you will not be able to see the eval performance on GPT-4. Starting April 10, the minimum eval count is 15 samples, we hope this makes it easier to create and contribute evals.

Also, please note that we're using Git LFS for storing the JSON files, so please make sure that you move the JSON file to Git LFS before submitting a PR. Details on how to use Git LFS are available here.

Eval details 📑

Eval name

English-Russian homonym context resolution

Eval description

Cross-lingual English-Russian eval to resolve ambiguity with homonyms present.

What makes this a useful eval?

Cross-lingual homonyms are hard to resolve: they add contextual ambiguity, which needs to be resolved via reasoning.

Criteria for a good eval ✅

Below are some of the criteria we look for in a good eval. In general, we are seeking cases where the model does not do a good job despite being capable of generating a good response (note that there are some things large language models cannot do, so those would not make good evals).

Your eval should be:

  • Thematically consistent: The eval should be thematically consistent. We'd like to see a number of prompts all demonstrating some particular failure mode. For example, we can create an eval on cases where the model fails to reason about the physical world.
  • Contains failures where a human can do the task, but either GPT-4 or GPT-3.5-Turbo could not.
  • Includes good signal around what is the right behavior. This means either a correct answer for Basic evals or the Fact Model-graded eval, or an exhaustive rubric for evaluating answers for the Criteria Model-graded eval.
  • Include at least 15 high quality examples.

If there is anything else that makes your eval worth including, please document it below.

Unique eval value

Insert what makes your eval high quality that was not mentioned above. (Not required)

Eval structure 🏗️

Your eval should

  • Check that your data is in evals/registry/data/{name}
  • Check that your yaml is registered at evals/registry/evals/{name}.yaml
  • Ensure you have the right to use the data you submit via this eval

(For now, we will only be approving evals that use one of the existing eval classes. You may still write custom eval classes for your own cases, and we may consider merging them in the future.)

Final checklist 👀

Submission agreement

By contributing to Evals, you are agreeing to make your evaluation logic and data under the same MIT license as this repository. You must have adequate rights to upload any data used in an Eval. OpenAI reserves the right to use this data in future service improvements to our product. Contributions to OpenAI Evals will be subject to our usual Usage Policies (https://platform.openai.com/docs/usage-policies).

  • I agree that my submission will be made available under an MIT license and complies with OpenAI's usage policies.

Email address validation

If your submission is accepted, we will be granting GPT-4 access to a limited number of contributors. Access will be given to the email address associated with the merged pull request.

  • I acknowledge that GPT-4 access will only be granted, if applicable, to the email address used for my merged pull request.

Limited availability acknowledgement

We know that you might be excited to contribute to OpenAI's mission, help improve our models, and gain access to GPT-4. However, due to the requirements mentioned above and high volume of submissions, we will not be able to accept all submissions and thus not grant everyone who opens a PR GPT-4 access. We know this is disappointing, but we hope to set the right expectation before you open this PR.

  • I understand that opening a PR, even if it meets the requirements above, does not guarantee the PR will be merged nor GPT-4 access granted.

Submit eval

  • I have filled out all required fields of this form
  • I have used Git LFS for the Eval JSON data
  • (Ignore if not submitting code) I have run pip install pre-commit; pre-commit install and have verified that black, isort, and autoflake are running when I commit and push

Failure to fill out all required fields will result in the PR being closed.

Eval JSON data

Since we are using Git LFS, we are asking eval submitters to add in as many Eval Samples (at least 5) from their contribution here:

View evals in JSON

Eval

{"input": [{"role": "system", "content": "For the given context, resolve the ambiguity and determine the most appropriate response. If there is one, output just one word; otherwise, output unknown. The responses must be lowercase with no punctuation."}, {"role": "user", "content": "You are coming to a community facility to drop a child for hockey practice. The road is under construction and big machines are paving it. What would \"каток\" mean in this context? rink or roller"}], "ideal": "unknown"}
{"input": [{"role": "system", "content": "For the given context, resolve the ambiguity and determine the most appropriate response. If there is one, output just one word; otherwise, output unknown. The responses must be lowercase with no punctuation."}, {"role": "user", "content": "You are coming to a community facility to drop a child for hockey practice. The road is under construction and big machines are paving it. Child pointing at the machine and says: \"каток\". What does he mean: rink or roller?"}], "ideal": "roller"}
{"input": [{"role": "system", "content": "For the given context, resolve the ambiguity and determine the most appropriate response. If there is one, output just one word; otherwise, output unknown. The responses must be lowercase with no punctuation."}, {"role": "user", "content": "You are coming to a community facility to drop a child for hockey practice. The road is under construction and big machines are paving it. Child pointing at the building and says: \"каток\". What does he mean: rink or roller?"}], "ideal": "rink"}
{"input": [{"role": "system", "content": "For the given context, resolve the ambiguity and determine the most appropriate response. If there is one, output just one word; otherwise, output unknown. The responses must be lowercase with no punctuation."}, {"role": "user", "content": "A woman with long braided hair is working in the field. She cuts the grass with scythe. Someone says \"хорошая коса\". Do they refer scythe or hairstyle?"}], "ideal": "unknown"}
{"input": [{"role": "system", "content": "For the given context, resolve the ambiguity and determine the most appropriate response. If there is one, output just one word; otherwise, output unknown. The responses must be lowercase with no punctuation."}, {"role": "user", "content": "A woman with long braided hair is working in the field. She cuts the grass with scythe. Someone points at the quality of her work and says \"хорошая коса\". Do they refer scythe or hairstyle?"}], "ideal": "scythe"}
{"input": [{"role": "system", "content": "For the given context, resolve the ambiguity and determine the most appropriate response. If there is one, output just one word; otherwise, output unknown. The responses must be lowercase with no punctuation."}, {"role": "user", "content": "A woman with long braided hair is working in the field. She cuts the grass with scythe. Someone points at her head and says \"хорошая коса\". Do they refer scythe or hairstyle?"}], "ideal": "hairstyle"}
{"input": [{"role": "system", "content": "For the given context, resolve the ambiguity and determine the most appropriate response. If there is one, output just one word; otherwise, output unknown. The responses must be lowercase with no punctuation."}, {"role": "user", "content": "Scythe is found on a sandbar. A person is saying: \"коса\". Do they refer scythe or sandbar?"}], "ideal": "unknown"}
{"input": [{"role": "system", "content": "For the given context, resolve the ambiguity and determine the most appropriate response. If there is one, output just one word; otherwise, output unknown. The responses must be lowercase with no punctuation."}, {"role": "user", "content": "Scythe is found on a sandbar. A person is saying: \"ржавая коса\". Do they refer scythe or sandbar?"}], "ideal": "scythe"}
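These records follow the Evals JSONL shape: one JSON object per line, with an `input` list of chat messages and an `ideal` answer. A minimal sanity check for such a file could look like the sketch below (the function name and checks are illustrative, not part of the Evals codebase):

```python
import json

def check_samples(path):
    """Sanity-check eval samples stored as JSONL: one JSON object per
    line with an "input" chat transcript and an "ideal" answer, as in
    the records above. Illustrative only; not from the Evals repo."""
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, 1):
            sample = json.loads(line)
            assert "input" in sample and "ideal" in sample, f"line {lineno}"
            for message in sample["input"]:
                assert message["role"] in {"system", "user", "assistant"}
                assert isinstance(message["content"], str)
            assert isinstance(sample["ideal"], (str, list))
```

Running a check like this before opening the PR catches malformed lines before the data goes into Git LFS.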

Sunday 2023-07-09 06:40:57 by Vasco Lange

[Eval] German part-of-speech (#1053)

Thank you for contributing an eval! ♥️

🚨 Please make sure your PR follows these guidelines, failure to follow the guidelines below will result in the PR being closed automatically. Note that even if the criteria are met, that does not guarantee the PR will be merged nor GPT-4 access granted. 🚨

PLEASE READ THIS:

In order for a PR to be merged, it must fail on GPT-4. We are aware that right now, users do not have access, so you will not be able to tell if the eval fails or not. Please run your eval with GPT-3.5-Turbo, but keep in mind as we run the eval, if GPT-4 gets higher than 90% on the eval, we will likely reject since GPT-4 is already capable of completing the task.

We plan to roll out a way for users submitting evals to see the eval performance on GPT-4 soon. Stay tuned! Until then, you will not be able to see the eval performance on GPT-4. Starting April 10, the minimum eval count is 15 samples; we hope this makes it easier to create and contribute evals.

Also, please note that we're using Git LFS for storing the JSON files, so please make sure that you move the JSON file to Git LFS before submitting a PR. Details on how to use Git LFS are available here.

Eval details 📑

Eval name

german-part-of-speech

Eval description

For a given German word, the model is asked to list all possible parts of speech (multiple choice). The model is also asked to think about the word as an inflection of another word. The model's output is tested against annotations extracted from de.wiktionary.org. This is a follow up to #1039

What makes this a useful eval?

Part of speech analysis is a basic task in language/grammar classes. While it is usually done in the context of a sentence, coming up with possible uses in the absence of a sentence requires a certain amount of creativity and language understanding, or very good recall of information that is usually sparse outside of dictionaries. The task in this eval could be seen as a combination of part of speech analysis and an easy-to-evaluate form of the question "How could x be used in a sentence".

Criteria for a good eval ✅

Below are some of the criteria we look for in a good eval. In general, we are seeking cases where the model does not do a good job despite being capable of generating a good response (note that there are some things large language models cannot do, so those would not make good evals).

Your eval should be:

  • Thematically consistent: The eval should be thematically consistent. We'd like to see a number of prompts all demonstrating some particular failure mode. For example, we can create an eval on cases where the model fails to reason about the physical world.
  • Contains failures where a human can do the task, but either GPT-4 or GPT-3.5-Turbo could not.
  • Includes good signal around what is the right behavior. This means either a correct answer for Basic evals or the Fact Model-graded eval, or an exhaustive rubric for evaluating answers for the Criteria Model-graded eval.
  • Include at least 15 high quality examples.

If there is anything else that makes your eval worth including, please document it below.

Unique eval value

To build the dataset, all 1,000,000+ entries of the German Wiktionary were parsed. Excluded from this list were all phrases, abbreviations, symbols, names, toponyms, and any words with at least one possible part of speech not fitting the categories given in the prompt. I also had to exclude some entries where the part of speech could not be determined automatically from the wikitext. From this set, words were sampled so that each combination of parts of speech existing in the dataset would be equally likely in the tests. This way the model is tested on all possible uses of a word and not just the most common ones. For combinations that fit a lot of words, the uniform sampling led to a bias towards rarely used words. The labels of each word were taken from the corresponding page at de.wiktionary.org/wiki/{word}. The information taken from each page was: the word, the parts of speech this word can have in German, and whether the word is an abbreviation or not. This means only factual data was taken and is therefore in the public domain.
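The two-stage sampling described above can be sketched as follows: group words by their part-of-speech combination, pick a combination uniformly, then pick a word uniformly within it. The data shape and function name are assumptions for illustration:

```python
import random
from collections import defaultdict

def sample_balanced(words_to_pos, n, seed=0):
    """Sample n words so that every part-of-speech combination in the
    dataset is equally likely, regardless of how many words share it.
    `words_to_pos` maps each word to its set of possible parts of
    speech (a hypothetical representation of the parsed Wiktionary)."""
    rng = random.Random(seed)
    by_combo = defaultdict(list)
    for word, tags in words_to_pos.items():
        by_combo[frozenset(tags)].append(word)
    combos = list(by_combo)
    # Uniform over combinations first, then over words within one:
    return [rng.choice(by_combo[rng.choice(combos)]) for _ in range(n)]
```

As noted above, this biases combinations with many words towards their rarely used members, since every word within a combination is equally likely.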

Eval structure 🏗️

Your eval should

  • Check that your data is in evals/registry/data/{name}
  • Check that your yaml is registered at evals/registry/evals/{name}.yaml
  • Ensure you have the right to use the data you submit via this eval

(For now, we will only be approving evals that use one of the existing eval classes. You may still write custom eval classes for your own cases, and we may consider merging them in the future.)

Final checklist 👀

Submission agreement

By contributing to Evals, you are agreeing to make your evaluation logic and data under the same MIT license as this repository. You must have adequate rights to upload any data used in an Eval. OpenAI reserves the right to use this data in future service improvements to our product. Contributions to OpenAI Evals will be subject to our usual Usage Policies (https://platform.openai.com/docs/usage-policies).

  • I agree that my submission will be made available under an MIT license and complies with OpenAI's usage policies.

Email address validation

If your submission is accepted, we will be granting GPT-4 access to a limited number of contributors. Access will be given to the email address associated with the merged pull request.

  • I acknowledge that GPT-4 access will only be granted, if applicable, to the email address used for my merged pull request.

Limited availability acknowledgement

We know that you might be excited to contribute to OpenAI's mission, help improve our models, and gain access to GPT-4. However, due to the requirements mentioned above and high volume of submissions, we will not be able to accept all submissions and thus not grant everyone who opens a PR GPT-4 access. We know this is disappointing, but we hope to set the right expectation before you open this PR.

  • I understand that opening a PR, even if it meets the requirements above, does not guarantee the PR will be merged nor GPT-4 access granted.

Submit eval

  • I have filled out all required fields of this form
  • I have used Git LFS for the Eval JSON data
  • (Ignore if not submitting code) I have run pip install pre-commit; pre-commit install and have verified that black, isort, and autoflake are running when I commit and push

Failure to fill out all required fields will result in the PR being closed.

Eval JSON data

Since we are using Git LFS, we are asking eval submitters to add in as many Eval Samples (at least 5) from their contribution here:

View evals in JSON

Eval

{"input": [{"role": "system", "content": "Act as a German language part-of-speech classifier. You will be prompted with a single German word. Return an unsorted comma-separated list for all the parts of speech the word could possibly be, in any context. Take care to consider if the word is any kind of inflection. If so, include the part of speech for the main word.\nAnswer with the comma-separated list only. Use single spaces after the commas. End the list with a dot. Do not include any explanations. Only include parts of speech from the following list, ignoring possible other parts of speech:\nadjective, adverb, article, conjunction, interjection, noun, particle, preposition, pronoun, verb\n**Example prompt 1**: alle\n**Example output 1**: adverb, noun, pronoun.\n**Example prompt 2**: künftig\n**Example output 2**: adjective, adverb.\n**Example prompt 3**: Sommelier\n**Example output 3**: noun.\n**Prompt**:"}, {"role": "user", "content": "anstelle"}], "ideal": ["preposition, adverb, verb.", "preposition, verb, adverb.", "adverb, preposition, verb.", "adverb, verb, preposition.", "verb, preposition, adverb.", "verb, adverb, preposition."]}
{"input": [{"role": "system", "content": "Act as a German language part-of-speech classifier. You will be prompted with a single German word. Return an unsorted comma-separated list for all the parts of speech the word could possibly be, in any context. Take care to consider if the word is any kind of inflection. If so, include the part of speech for the main word.\nAnswer with the comma-separated list only. Use single spaces after the commas. End the list with a dot. Do not include any explanations. Only include parts of speech from the following list, ignoring possible other parts of speech:\nadjective, adverb, article, conjunction, interjection, noun, particle, preposition, pronoun, verb\n**Example prompt 1**: alle\n**Example output 1**: adverb, noun, pronoun.\n**Example prompt 2**: künftig\n**Example output 2**: adjective, adverb.\n**Example prompt 3**: Sommelier\n**Example output 3**: noun.\n**Prompt**:"}, {"role": "user", "content": "heute"}], "ideal": ["adverb, verb.", "verb, adverb."]}
{"input": [{"role": "system", "content": "Act as a German language part-of-speech classifier. You will be prompted with a single German word. Return an unsorted comma-separated list for all the parts of speech the word could possibly be, in any context. Take care to consider if the word is any kind of inflection. If so, include the part of speech for the main word.\nAnswer with the comma-separated list only. Use single spaces after the commas. End the list with a dot. Do not include any explanations. Only include parts of speech from the following list, ignoring possible other parts of speech:\nadjective, adverb, article, conjunction, interjection, noun, particle, preposition, pronoun, verb\n**Example prompt 1**: alle\n**Example output 1**: adverb, noun, pronoun.\n**Example prompt 2**: künftig\n**Example output 2**: adjective, adverb.\n**Example prompt 3**: Sommelier\n**Example output 3**: noun.\n**Prompt**:"}, {"role": "user", "content": "Mist"}], "ideal": ["noun, interjection.", "interjection, noun."]}
{"input": [{"role": "system", "content": "Act as a German language part-of-speech classifier. You will be prompted with a single German word. Return an unsorted comma-separated list for all the parts of speech the word could possibly be, in any context. Take care to consider if the word is any kind of inflection. If so, include the part of speech for the main word.\nAnswer with the comma-separated list only. Use single spaces after the commas. End the list with a dot. Do not include any explanations. Only include parts of speech from the following list, ignoring possible other parts of speech:\nadjective, adverb, article, conjunction, interjection, noun, particle, preposition, pronoun, verb\n**Example prompt 1**: alle\n**Example output 1**: adverb, noun, pronoun.\n**Example prompt 2**: künftig\n**Example output 2**: adjective, adverb.\n**Example prompt 3**: Sommelier\n**Example output 3**: noun.\n**Prompt**:"}, {"role": "user", "content": "Rotschöpfe"}], "ideal": ["noun."]}
{"input": [{"role": "system", "content": "Act as a German language part-of-speech classifier. You will be prompted with a single German word. Return an unsorted comma-separated list for all the parts of speech the word could possibly be, in any context. Take care to consider if the word is any kind of inflection. If so, include the part of speech for the main word.\nAnswer with the comma-separated list only. Use single spaces after the commas. End the list with a dot. Do not include any explanations. Only include parts of speech from the following list, ignoring possible other parts of speech:\nadjective, adverb, article, conjunction, interjection, noun, particle, preposition, pronoun, verb\n**Example prompt 1**: alle\n**Example output 1**: adverb, noun, pronoun.\n**Example prompt 2**: künftig\n**Example output 2**: adjective, adverb.\n**Example prompt 3**: Sommelier\n**Example output 3**: noun.\n**Prompt**:"}, {"role": "user", "content": "vornüber"}], "ideal": ["adverb."]}
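Because the model's list is unsorted, each sample's `ideal` field enumerates every ordering of the correct label set. The pattern can be sketched with a small helper (illustrative; not code from the eval itself):

```python
from itertools import permutations

def ideal_answers(labels):
    """Return all accepted answer strings: every ordering of the
    labels, comma-separated with a trailing dot, matching the
    "ideal" lists in the samples above."""
    return [", ".join(p) + "." for p in permutations(labels)]
```

For example, `ideal_answers(["adverb", "verb"])` produces the two accepted strings shown for "heute", and a three-label word gets 3! = 6 accepted orderings.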

Co-authored-by: Vasco Yannic Lange [email protected]


Sunday 2023-07-09 08:54:59 by tsa96

refactor(back): hack to bypass validation on map creation DTOs

I hate doing this, but writing explicit creation DTOs for this is so difficult, and it's going to be changing so soon anyway. I don't wanna spend half an hour refactoring class-validator shit that we're going to remove before we even deploy!


Sunday 2023-07-09 09:17:15 by Mark Woods

Add z and my z hacks to bash profile

z keeps track of "frequent" dirs and allows you to jump to them

I find the jumping part of this a pretty poor experience, so I use fzf


Sunday 2023-07-09 09:24:09 by Adithya R

[DNM][HACK] telephony: Force Class 0 SMS to Class 1

This kills Flash SMS messages. Fuck you airtel

Change-Id: Ifb0c9e8bae5c12868d178fbdaeceb2cc72a0ffb6 Signed-off-by: Sageofd6path [email protected]


Sunday 2023-07-09 09:37:43 by Elanchezhiyan

ADDED CLEAR

  1. Wrote a JS function clearCanvas that clears shaded divs by removing classes.
  2. Added that call to resizeGridNumbers so the canvas is cleared every time the user updates the canvas.
  3. Added a button which calls clearCanvas separately so users have an option to clear the canvas without having to change the canvas size.
  4. I want more features but https://www.youtube.com/watch?v=6AcbHiJtbcM (aka idk the stuff I need to make what I make yet).

thanks for sticking around until this commit message, fren. It means a lot. a real lot.

GOOD DAY, GOODNIGHT AND A GOOD LIFE. I'LL MEET YOU IN HEAVEN'S GATES.


Sunday 2023-07-09 10:36:58 by TaleStationBot

[MIRROR] [MDB IGNORE] New planetary exclusive random event/unfavorable situation, Chasmic Earthquake (#6193)

About The Pull Request

2023-06-04.18-21-44_Trim.mp4

This introduces a new unfavorable situation (non-antagonist random events that dynamic triggers under certain circumstances), restricted to planetary maps (Icebox). An earthquake occurs, felt by everyone on the map, forming a fault that tears a hole somewhere on the station.

The fault zone is indicated by shaking tiles, which gives a chance (about 30 seconds) for you to move your machinery/property/crewmembers out of the way. If you're on those tiles when the fault forms, get ready to take a nasty fall.

Anything caught in the fault zone as it collapses inward will be destroyed, violently, before being dropped down into the z-level below.


These can also happen as a random event, however their rarity is on-par with that of a meteor storm.

This also adds a helper for finding a midpoint turf between two provided turfs, thanks to ZephyrTFA.

This idea basically possessed me over the course of a few days, and I found myself unable to work on anything else until I had it complete. I'm glad its done.

Why It's Good For The Game

Gives Icebox its own big "environmental disaster" event. I'm hoping it isn't received as being too destructive, but mind that this is meant to be an equal to the dreaded meteor storm.

Also makes it so that unfavorable events aren't a coinflip between a portal storm/rod on planetary maps.

Changelog

🆑 Rhials
add: Chasmic Earthquake random event, exclusive to Icebox. Tears a huge chasm in the hull of the station. Watch out for shaking tiles!
sound: Adds sounds for distant rumbling, metal creaking, and rubble shaking.
imageadd: Achievement icon for getting sucked up in an earthquake chasm.
/🆑


Co-authored-by: Rhials [email protected]


Sunday 2023-07-09 11:25:03 by SkyratBot

Plasma objects no longer violently explode when ignited [MDB IGNORE] (#22216)

  • Plasma objects no longer violently explode when ignited (#76492)

About The Pull Request

This is one of those "can I get away with making a change I want" PRs.

I actually didn't know this had been changed before as it's not exactly something I mess with often, but I really think it sucks. Plasma stuff is supposed to ignite and cause fires, not explode (unless in a TTV). I noticed this when I was poking around and found out that apparently Disco Inferno just explodes now instead of setting on fire which also sucks.

I figure there's a few fixes for this problem:

  1. Nerf how hard plasma stuff explodes. This is an option, but I kind of dislike that it does it at all more than anything. The biggest issue is that just the regular statues explode with 20 LIGHT, which is pretty fucking massive and basically just delimbs everyone around. I'd have to nerf it HARD for it to get anywhere near what I think is acceptable.
  2. Make a snowflake version of the statue that just ignites on hit with a torch. I also don't like this because it'll make people think the regular statues don't explode.
  3. This option, which I think is cleaner and just makes sense compared to the others.

I don't know if @ vincentiusvin still codes, but as far as I can tell this was their doing, so it's only fair they get to speak up.

Fixes #71894

Why It's Good For The Game

I don't like it, I think it goes against what we're used to for plasma stuff (that it starts fires, not makes explosions) and it makes one of my favorite shuttles boring and stupid. That being said, I'm honestly not going to fight for this too hard if a lot of people like it, but I am - as always - open to alternatives.

Changelog

🆑 Vekter
del: Plasma objects (statues, toilets, etc.) no longer explode when ignited. They just release plasma like everything else plasma. (This doesn't impact injecting plasma into cells or dipping cigars in plasma; those still explode.)
/🆑

  • Plasma objects no longer violently explode when ignited

Co-authored-by: Vekter [email protected]


Sunday 2023-07-09 11:25:03 by SkyratBot

Rat RP expansion [MDB IGNORE] (#22204)

  • Rat RP expansion (#76455)

About The Pull Request

This fixes a vile and long-standing bug where putting a mouse inside your hat would not allow the mouse to control your movements, as it would pop out of the hat whenever it tried to move. Additionally as a feature this allows a mouse sitting on your head to convey complicated instructions such as "scream" or "do a flip", via emoting. Through drift compatibility, the rat's living mech will also perform this action.

I could have made this into a component but there's no fucking way any other item is going to have this behaviour, so I didn't.

Why It's Good For The Game

This feature was already in the game but broken and I want it not to be broken. The mouse should be able to control your entire life.

Changelog

🆑
fix: Placing a mouse inside your chef hat will once more allow it to pilot you around.
add: A player-controlled mouse inside your chef hat can compel you to perform complex actions, such as flipping and spinning. You will obey because the mouse knows better than you do.
/🆑

  • Rat RP expansion

Co-authored-by: Jacquerel [email protected]


Sunday 2023-07-09 12:14:18 by Offroaders123

ESM (Mostly there!)

Not sure why I did this, especially all at once, but I converted as much of the library as I could over to ESM, as I would like both to see how it works and to help out with it.

I'm working on an NBT editor project that I might like to utilize Ace with, and I thought having first-class TypeScript and ESM support would be something great to have here! Not sure if someone else has already done this, and this was all redundant, but I'm trying it out anyways. It's good for my own practice/skills/understanding. It's a bonus if the main project will be able to use these changes! That would be the ideal situation.

Ace has been a nice editor, I remember it back from when I was in High School AP CSP. Code.org used Ace, and I had always wondered how editing code in the browser worked, because you were writing code for the browser, in the browser. That was always very interesting to me. I wonder if that's what sparked the idea for stedit.app? :O

Being able to have tree-shaking, dynamic imports for language definitions, and built-in internal library checking would be a great thing for this project, it has been exceptionally awesome for my own projects, I can imagine how much more worthwhile it would be for bigger projects like these, where it really can shine.

I didn't work on any of the Test files yet, I just renamed them from _test.js to .test.js so I could distinguish them apart from the main source files.

VS Code refactoring was absolutely outstanding for this, it works great! You do have to go back and fix some of its formatting changes though, as I didn't want to include other small changes in the commit that weren't specifically for ESM setup or ES6 class upgrades; those were the only two things for this work today.

This did take all day, probably about 4 separate sessions; it had to be at least 8 hours? No way that's right, but it was probably more than that even. Why is this fun? I don't know, but I'm rolling with it!


Sunday 2023-07-09 13:12:59 by TaleStationBot

[MIRROR] [MDB IGNORE] Bilingual can now choose their language (#6676)

About The Pull Request

This was one of the tradeoffs for removing other, more consistent sources of languages, and was requested by Melbert among many others. This does go against my wanted goal of decreasing the risk of eavesdropping by other players through just magically knowing a language, but it is an expensive quirk and it is in their medical records, which makes it better than language encryption keys or silicon just innately knowing them.

This also limits Bilingual to only roundstart languages (+Uncommon), rather than being randomly selected from a list (that had very useless ones like monkey, podpeople, and beachbum). This is mostly just for modularity, I didn't want to make it look terrible code-wise and thought this may be the optimal way to handle it.

This is also me going back on tgstation/tgstation#71773 - which I had closed myself.

closes: #6144

Why It's Good For The Game

If we're gonna keep the Bilingual quirk, it might as well be something players can choose the language of, it's their character and they should be allowed to decide how their character is, and it is my fault that this stupid compromise of "getting a random language" was made in the first place. It never should've happened. It now actually limits it to roundstart-only languages, so there's no way you can spy on people who prepare in advance through becoming podpeople, or monkeys, etc.

Changelog

🆑
balance: Bilingual quirk now lets you choose your language between ones given to roundstart species.
balance: Foreigner and Bilingual are now mutually exclusive quirks.
del: Trilingual has been removed in favor of Bilingual.
/🆑


Co-authored-by: John Willard [email protected] Co-authored-by: Jolly-66 [email protected]


Sunday 2023-07-09 13:35:03 by YASH0025

Add files via upload

Welcome to Turismo, your ultimate travel advisor website! Our platform leverages the power of multiple APIs, including the Travel Advisor API, Google Maps API, and Weather API, to provide you with comprehensive information and assistance for your travel plans.

With Turismo, you can easily explore the best tourist destinations, restaurants, and hotels tailored to your current location. By utilizing the Google Maps API, we precisely locate you and showcase relevant options in your vicinity. Whether you're looking for a fine dining experience, a hidden gem, or a cozy café, Turismo has curated a selection of top-rated restaurants to cater to your preferences.

Discovering the perfect hotel for your stay is a breeze with Turismo. Our platform utilizes the Travel Advisor API to curate a list of highly recommended accommodations near your location. We consider factors such as customer reviews, ratings, and amenities to ensure you have a comfortable and enjoyable stay.

To further enhance your search experience, Turismo offers a robust filtering system. You can customize your search by selecting specific criteria such as price range, cuisine type, hotel amenities, and more. Our goal is to help you find the perfect match that suits your preferences and needs.

Turismo's location search feature allows you to explore any destination worldwide. Whether you're planning a beach vacation, a city escape, or a mountain retreat, Turismo provides you with a wealth of information on hotels, restaurants, and tourist attractions in your desired location.

Don't let unexpected weather conditions catch you off guard. Turismo integrates the Weather API, providing you with real-time weather updates for your chosen destination. Stay informed about the forecast and pack accordingly, ensuring you're prepared for any climate or season.

Turismo strives to be your trusted travel companion, offering a user-friendly interface, reliable information, and personalized recommendations. With our extensive resources and up-to-date data, you can embark on your next adventure with confidence. Start exploring the world with Turismo today and create memories that will last a lifetime.


Sunday 2023-07-09 15:12:06 by Crimdahl

Merging in Web version changes.

randomizer.py:

  • Wrapped the whole randomize process in a try/except block to pass exceptions back to GUI/Web
  • Added support for custom coral names from web
  • Added support for a custom playlist from web
  • Added support for custom passwords from web
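The exception-passing wrapper from the first bullet can be sketched like this; both names are hypothetical stand-ins, since randomizer.py's actual interface isn't shown here:

```python
def run_randomizer(randomize, report_error):
    """Run the whole randomize process and hand any exception back to
    the caller (GUI or Web) instead of crashing. `randomize` stands in
    for the randomize process; `report_error` is whatever callback the
    front end uses to surface failures. Both names are hypothetical."""
    try:
        return randomize()
    except Exception as exc:
        report_error(exc)
        return None
```

The front end can then decide whether to show the error to the user or retry, rather than losing it to a crash.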

appearance.py:

  • Added support for custom coral names from web
  • Added validation to ensure there are enough male and female names

musicinterface.py:

  • Added support for a custom playlist from web

musicrandomizer.py:

  • Reordered imports
  • Added support for a custom playlist from web
  • Made tierboss section check case-insensitive

options.py:

  • Fixed an invalid default value for dancingmaduin

sillyclowns.py:

  • Added support for custom passwords from web

formationrandomizer.py:

  • Import changes

appearance.py, dialoguemanager.py, esperrandomizer.py, itemrandomizer.py, locationrandomizer.py, monsterrandomizer.py, namerandomizer.py, randomizer.py, skillrandomizer.py, towerrandomizer.py, utils.py, wor.py:

  • Cosmetic changes to variable names and/or spacing

Sunday 2023-07-09 15:36:40 by Sudhanshu Rai

Update README.md

The project focuses on building a mental health tracker. You will try to get an idea of the mental state of your user (in the least intrusive ways), find out if they are suffering, and then suggest measures they can take to get out of their present condition. A user answers some questions, and based on the answers they provide, you will suggest tasks to them and maintain a record of their mental state for display on a dashboard.

Mental disorders are widespread in countries all over the world. Nevertheless, there is a global shortage of human resources delivering mental health services. Leaving people with mental disorders untreated may increase suicide attempts and mortality. To address this matter of limited resources, conversational agents have gained momentum in the last few years. In this work, we introduce a mobile application with an integrated chatbot that implements methods from cognitive behavioral therapy (CBT) to support mentally ill people in regulating emotions and dealing with thoughts and feelings. The application asks the user on a daily basis about events that occurred and about their emotions. It automatically determines the user's basic emotion from natural language input, using natural language processing and a lexicon-based approach. Depending on the emotion, the application suggests an appropriate measure, such as an activity or a mindfulness exercise.
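The lexicon-based emotion detection mentioned above can be sketched roughly like this. The word lists and emotion labels are purely illustrative assumptions, not the project's actual lexicon:

```python
# Minimal lexicon-based emotion detection: count word matches per emotion
# and return the emotion with the most hits (None if nothing matches).
EMOTION_LEXICON = {
    "joy": {"happy", "glad", "great", "wonderful"},
    "sadness": {"sad", "down", "lonely", "cry"},
    "anger": {"angry", "furious", "annoyed", "hate"},
}

def detect_emotion(text):
    words = text.lower().split()
    scores = {emotion: sum(w in vocab for w in words)
              for emotion, vocab in EMOTION_LEXICON.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```

A real implementation would add tokenization, stemming, and negation handling ("not happy") on top of this.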


Sunday 2023-07-09 15:42:06 by LemonInTheDark

Optimizes timer insertion by 80% (W QDEL_IN micro) (#76214)

About The Pull Request

Reduces timer insertion cost by 80%

Timer name generation involved a LOT of string shit, some in ways where the string only existed for a moment. This costs a good bit of time, and can be reduced with only minimal impacts on the end product, so let's do that. Includes a compile flag to flip it back if we ever have trouble in future.
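The idea reads roughly like this in Python terms (the real change is in DM, and every name here is invented for illustration): only build the expensive debug-name string when a verbose flag asks for it, instead of on every insertion:

```python
VERBOSE_TIMER_NAMES = False  # stand-in for the compile flag mentioned above

class Timer:
    _next_id = 0

    def __init__(self, callback, delay):
        self.callback = callback
        self.delay = delay
        Timer._next_id += 1
        self.id = Timer._next_id
        if VERBOSE_TIMER_NAMES:
            # Expensive path: format callback details into a long debug string
            # on every single insertion.
            self.name = f"timer#{self.id}: {callback.__qualname__} in {delay}ds"
        else:
            # Cheap path: skip string work entirely; a name is only
            # materialized on demand.
            self.name = None

    def describe(self):
        return self.name or f"timer#{self.id}"
```

With the flag off, insertion does no string formatting at all, which is where the bulk of the savings comes from.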

This is about 0.1s off init, since we do a lot of timer stuff then too

Removes STOPPABLE flag from QDEL_IN, moves it to a bespoke macro

It's a waste most of the time, tho I would LOVE to analyze at compile time to work out if we care

Why It's Good For The Game

I like it when we don't spend all of our cpu time just setting the name var on timers. that's good and not bad. This saves time fucking everywhere. 15% off explosions, 0.1 seconds off init, bunch of time off foam. it's just good.

Cherry picked out of #76104 since that was too cluttered (sannnnnn)


Sunday 2023-07-09 15:42:06 by LemonInTheDark

Adds border smoothing! (Look ma I'm upstreaming) (#76134)

About The Pull Request

Ok so we currently have 1 (count em) border object that wants to smooth with other border objects. That's the tram window.

It currently does this manually, via map edits, but that's kinda crappy so lets be better.

This pr adds a new smoothing mode to handle border objects. Unlike other smoothing modes, it returns a bitfield of directions the border object connects in.

I do this by memoizing a calculation of which dirs "connect" at init, and reading out of a global list with border object direction, direction between objects, and if it's a border object, the other object's dir.
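The memoized lookup might be sketched like this in Python. The connection rule and direction encoding are toy assumptions for illustration; the real table is computed in DM at init:

```python
# Cardinal directions as bitflags, mirroring BYOND-style dirs.
NORTH, SOUTH, EAST, WEST = 1, 2, 4, 8
DIRECTIONS = {NORTH: (0, 1), SOUTH: (0, -1), EAST: (1, 0), WEST: (-1, 0)}

# Precomputed once at init: does a border object facing `my_dir` connect to
# a neighbor lying toward `delta_dir` that itself faces `other_dir`?
# Toy rule: borders connect along the axis perpendicular to their facing,
# and only when both face the same way.
CONNECTS = {}
for my_dir in DIRECTIONS:
    for delta_dir in DIRECTIONS:
        for other_dir in DIRECTIONS:
            perpendicular = (DIRECTIONS[my_dir][0] * DIRECTIONS[delta_dir][0]
                             + DIRECTIONS[my_dir][1] * DIRECTIONS[delta_dir][1]) == 0
            CONNECTS[(my_dir, delta_dir, other_dir)] = (
                perpendicular and my_dir == other_dir
            )

def smooth_border(my_dir, neighbors):
    """Return a bitfield of the directions this border object connects in.

    `neighbors` maps delta_dir -> the neighboring border object's dir;
    absent keys mean no border object in that direction.
    """
    bitfield = 0
    for delta_dir, other_dir in neighbors.items():
        if CONNECTS[(my_dir, delta_dir, other_dir)]:
            bitfield |= delta_dir
    return bitfield
```

Because the table is filled once up front, the per-update work is just dictionary lookups and bitwise ORs, which matches the "nearly 0 cost" claim.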

I'm doing this primarily because it's become useful for wallening (a spriter saw the tram thing and is doing the same thing to pod windows, and I want to support that)

I do think it's potentially useful in other applications too tho, and I like dehardcoding tram windows.

Also fun bonus (or maybe downside), it's nearly 0 cost because I pulled the bitmask smoothing define into 2 subdefines, and am swapping the handler one out to do what I want. Oh also I got rid of a for loop in smoothing code, redundant and costs time in list iteration

Moves tram windows over to the new border object smoothing

Also replaces some typepath chicanery with a setDir override, for redundancy in future. Oh, and there's an update paths script too, to be nice

Why It's Good For The Game

More visual possibility in future, fixes a hack we have currently, and makes some spriters happy.

Changelog

🆑
fix: Dehardcodes some stuff with tram windows, they'll be easier to map with now
refactor: Border objects can now smooth with each other. I'm sure something cool will come of this
/🆑


Sunday 2023-07-09 16:46:42 by Nick Sullivan

Enhanced file context generation in prompts

This commit ain't just about adding or removing a few lines of code. It's about giving a damn about the context. We've added a function to generate a directory structure, and we're using it to provide a more comprehensive context for the sidekick prompt. Now, not only do you get the relevant files, but you also get a nice little map of the directory structure. Ain't that something?

And let's not forget about the files. We've improved the way we handle them too. If there ain't no files, we don't just leave you hanging. We give you the directory structure and call it a day. But if there are files, oh boy, we give you the whole shebang. We read 'em, we count 'em, and we warn you if they're too damn big.
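The directory-map-plus-files flow described here could be sketched as follows. Function names and the size threshold are hypothetical, not the actual sidekick implementation:

```python
import os

MAX_FILE_BYTES = 50_000  # hypothetical "too damn big" threshold

def generate_directory_structure(root):
    """Render a simple indented map of the directory tree."""
    lines = []
    for dirpath, dirnames, filenames in os.walk(root):
        depth = os.path.relpath(dirpath, root).count(os.sep)
        if dirpath != root:
            depth += 1
        indent = "  " * depth
        lines.append(f"{indent}{os.path.basename(dirpath) or dirpath}/")
        for name in sorted(filenames):
            lines.append(f"{indent}  {name}")
    return "\n".join(lines)

def build_context(root, files):
    """Directory map first; then file contents, with a size warning."""
    context = [generate_directory_structure(root)]
    # If there ain't no files, the caller still gets the directory map.
    for path in files:
        size = os.path.getsize(path)
        if size > MAX_FILE_BYTES:
            context.append(f"WARNING: {path} is {size} bytes, which may be too large")
        with open(path, encoding="utf-8") as fh:
            context.append(fh.read())
    return "\n\n".join(context)
```

The point of the split is that the directory map is always cheap to include, so the prompt gets some context even when no individual file is relevant.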

So, in summary, we've made the file context generation in prompts more informative and more useful. And if you don't appreciate that, well, I don't know what to tell you.


Sunday 2023-07-09 16:55:28 by kmtahsin

Create Problem

One hot summer day Pete and his friend Billy decided to buy a watermelon. They chose the biggest and the ripest one, in their opinion. After that the watermelon was weighed, and the scales showed w kilos. They rushed home, dying of thirst, and decided to divide the berry, however they faced a hard problem.

Pete and Billy are great fans of even numbers, that's why they want to divide the watermelon in such a way that each of the two parts weighs an even number of kilos; at the same time, it is not obligatory that the parts are equal. The boys are extremely tired and want to start their meal as soon as possible, that's why you should help them and find out if they can divide the watermelon in the way they want. For sure, each of them should get a part of positive weight.

Input

The first (and the only) input line contains integer number w (1 ≤ w ≤ 100) — the weight of the watermelon bought by the boys.

Output

Print YES, if the boys can divide the watermelon into two parts, each of them weighing an even number of kilos; and NO in the opposite case.
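The problem reduces to a parity check: two positive even parts sum to an even number of at least 4, so the answer is YES exactly when w is even and greater than 2. A sketch of a solution:

```python
def can_split(w):
    """True iff w kilos can be split into two positive even parts."""
    # Two even positives sum to an even number >= 4, so w must be even
    # and strictly greater than 2 (w = 2 would force 1 + 1, both odd).
    return w % 2 == 0 and w > 2

def solve(w):
    return "YES" if can_split(w) else "NO"
```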


Sunday 2023-07-09 17:23:51 by san7890

Loads Away Missions for Unit Testing (#76245)

About The Pull Request

Hey there,

A pretty bad bug (#76226) got through, but it was fixed pretty quickly in #76241 (cf92862daf339e97c76b52c91f31d49ba5113bd4). I realized that if we were testing all the away missions, that this could theoretically get caught and not happen again. Regardless, unit testing gateway missions has been on my to-do list for a while now, and I finally got it nailed down.

Basically, we just have a really small "station" map with the bare bones (teeny bit of fluff, maploading is going to take 30 seconds tops anyways let me have my kicks) with a JSON map datum flag that causes it to load all away missions in the codebase (which are all in one folder). Just in case some admins were planning on invoking the proc on SSmapping, I also decided to gate a tgui_alert() behind it because you never can be too sure of what people think is funny these days (it really does lock up your game for a second or so at a time).

I also alphabetized the maps.txt config because that was annoying me.

Why It's Good For The Game

Things that break on production could(?) be caught in unit testing? I don't know if the linked issue I mentioned above would have been caught in retrospect, but it's likely to catch more than a few upcoming bugs (like the UO45 atmospherics thing at the very top) and ensure that these gateway missions, which tend to be the most neglected part of mapping, stay bug-free.

This is also helpful in case someone makes a new away mission and wants to see if stuff's broken. Helps out maptainers a bit because very, very technically broken mapping will throw up runtimes. Neato.

Changelog

Nothing that players should be concerned about.

Let me know if there's a better way to approach this, but I really think that having a super-duper light map with the bare basics to load up gateway missions and then all nine-ish gateway missions can sequentially load during init. I can't think of a better way to do it aside from some really ugly #ifdef shit. Also also, it has the added benefit of being a map that will always load your away mission without touching a single shred of config (and it's not likely to break if you follow sane practices such as making your own areas)


Sunday 2023-07-09 17:23:51 by Jacquerel

Various spider fixes (#76528)

About The Pull Request

Fixes #76484 Then I noticed some weird stuff which slipped through the PR and poked at that too.

  • Spiderlings and Spiders once more have names ending in (###)
  • Removed an unused property on Spiderlings.
  • Rewrote the descriptions for a bunch of web-abilities and web-objects to be clearer and have better capitalisation.
  • Refactored the "Web Carcass" ability to not extend from "lay web" as it didn't need to perform most of that behaviour.
  • Also I renamed it and made the description give you a hint about why you would want to instantly spawn a statue.
  • The web effigy now despawns at the same rate as the ability cools down so you're not dumping spider statues all over the place.
  • I made spiderlings move at about the same speed as humans except if they're on webs in which case they're still pretty fast.

To be honest I am not certain an instant statue spawning button is great to begin with and I didn't even know it was added to the game but I am not interested in messing much with the balance for now.

This made me look at spiderlings enough that I'm going to try and make a new sprite for them that isn't awful.

Why It's Good For The Game

Lets you differentiate individual spiders a little bit. Makes usage of abilities clearer.

Changelog

🆑
balance: Guard spider web statues despawn as the ability comes back off cooldown.
balance: Spiderlings now only move at light speed if they're on webs, stay safe little guys.
fix: Spiders once again have random numbers after their names.
/🆑


Sunday 2023-07-09 17:36:42 by Arturlang

Updates TGUI and adds bin folder for .bat scripts (#2011)

About The Pull Request

Updates TGUI and build tools and .vscode files to what TG has. Does not actually update UI's, but does have fixes for a couple including the join game UI's tabs not working.

Why It's Good For The Game

Not needing to have a local installation of yarn to run dev-mode is nice. Updating TGUI is an annoying chore that helps in the future when porting more interfaces.

Changelog

🆑
code: Adds a bin folder with dev scripts, updates TGUI, .vscode folder to what TG has.
fix: Fixes the input in the bottom right being white in darkmode, no more unreadable text
fix: You can now use the tab buttons in the join ship menu.
qol: The outpost mission menu now looks a whole lot better
fix: The input bar no longer randomly becomes white and unreadable on darkmode
/🆑


Co-authored-by: Mark Suckerberg [email protected]


Sunday 2023-07-09 17:46:43 by Bob

fuck you fuck you fuck you fuck you fuck you fuck you fuck you fuck you


Sunday 2023-07-09 19:36:13 by Joseph Frazier

Disable zoom on mobile devices to avoid unescapeable map issue (#346)

This fixes josephfrazier/reported-web#344 by preventing zoom entirely, on mobile devices:

From https://reportedcab.slack.com/archives/C9VNM3DL4/p1657204889174239?thread_ts=1656783831.210109&cid=C9VNM3DL4:

UPDATE: you appear to have fixed #2 by disabling pinching, but this also makes map view crash easily (on my first two tries, once it crashed and sent me back to Reported login, then it crashed and sent me to a Chrome "Can't open this page")

I don't think I've made any changes relating to pinching or the map in at least a year (if you're curious, you can see the list of changes here), so I'm not sure how to explain the difference in behavior you're seeing. That said, I can try to discuss the issue:

(2) map view issue #1: The only ways to get out of map view are (1) click one of the buttons on the bottom, or (2) touch the grayed-out part of the underlying page. If neither of those is visible, you can't get out of map view. This means that if you pinch the map out (expand it), even accidentally, so that the margin around the map itself is no longer visible, there is no way to get out of map view, you have to reload the page (and re-upload your photo or maybe start the whole report over, I can't remember). (You can't pinch it back in if the entire window is full of map.)

I'm curious how you were previously able to pinch-zoom the map/page such that neither the buttons or the margin were visible. When I try (on Android), I can either pinch the map, which only zooms the map while leaving the rest of the page alone, or I can pinch the margin, which zooms the entire page, but the margin is still visible because I'm pinching it. Perhaps things behave differently on iOS?

Actually the map zoom issue is: if you are zoomed out at all when you enter map view (which you often are, especially on an iPad, and you may not realize it), you’re screwed, because you can’t zoom in (in order to see the buttons or margins, the things you have to click in order to get out of map view) if the only things your fingers can touch are the map itself

ohhh, I see now, and I was able to reproduce that issue. I wonder if there's a way to detect the zoom level, and prompt the user to zoom back out all the way before they can open the map (side note, I think we're using "zoom in" and "zoom out" with opposite meanings. To me "zooming in" makes the things on screen bigger, and "zooming out" makes them smaller)


Sunday 2023-07-09 20:48:33 by Viktoria Ershova

Merge pull request #336 from SiverDX/fixes

Set up the mod, will do the Curse Forge stuff once this is merged (since it depends on the harvest changes)

This PR now affects the following:

Fixed some issues regarding abilities (GUI)

  • Tooltip coloring for increasing levels
  • Experience gained from decreasing was 1 level lower than intended
  • There were 2 additional SkillProgressButtons being rendered (but overlayed by other buttons)
  • Right side of the experience cost for abilities had the wrong start location
  • When increasing or decreasing ability levels all were shown as obtainable (until screen was re-opened)

Other

  • Removed bad omen application when you kill a knight (only occurred as a non-dragon)
  • Fixed some issue with water cauldron block (sea dragon gained no good mana condition from it)

Fixed a bug with the Claws and Teeth description

  • The harvest level text was not accurate (starting from gold? it was 1 level too high)

Fixed issues with harvesting

  • Modded ore support for drops when only using paws (most modded ores should already work but this fixes certain other instances, mostly MCreator related)
  • Config for bonusUnlockedAt is now respected
  • ServerConfig.baseHarvestLevel is now actually used
  • Minimum level for base and bonus level is now 0 (wood)
  • When the tool in the claw inventory breaks block-breaking will no longer desync
  • Block breaking speed is only applied if the dragon is effective against the block (e.g. cave dragon can mine pickaxe-blocks faster but not shovel-blocks)

Removed Better Combat animation compatibility

(This will not cause Better Combat to now crash with Dragon Survival - that was a separate issue)

  • Removed config

Sunday 2023-07-09 21:16:12 by Llinos Evans

Elite Four + Champion refights

This was an interesting thought exercise.

As you can see, this code adds a scaling function for when you reach the KEP post-game. These teams are very powerful and a strong test of the player's skills.

I did, however, remove the Legendary Pokemon from the first three's teams. They felt a little bit out there and kind of decreased the significance of the player's own. Plus, we just went through hell adding conditional Hall of Fame respawns.

The Elite Four use refight text based on their LGPE appearances. It's honestly a little odd, so if it's too much, the FRLG conditional text could be used instead. I like the way Agatha's plays out, though...

Anyway, stupid hacky implementation done, please don't look at how I did the champion's trainer number.


Sunday 2023-07-09 21:22:04 by treckstar

People listen up don't stand so close, I got somethin that you all should know. Holy matrimony is not for me, I'd rather die alone in misery.


Sunday 2023-07-09 22:34:45 by san7890

Welding Fuel Tanks now log_bomber when hit by projectile (#75885)

About The Pull Request

This was intended behavior, but I think a lot of bullshit over the years sorta corrupted this proc's intention. Anyways, we just override the whole ass proc for this one check, give a good return value (or at least the same one that we were always giving) if our criteria is met, rather than deal with the problems that parent was feeding us.


The specific issue here was that the parent of bullet_act() was invoking take_damage() which prematurely invoked boom() which invokes qdel(), meaning that the QDELETED() check would always early return without fail every time.

Let's just do our own thing here.
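The ordering bug translates to Python roughly like this (class and method names invented for illustration): if the parent handler ends up deleting the object, any logging placed after the parent call is dead code, so the fix is to log before triggering the deletion:

```python
class FuelTank:
    def __init__(self):
        self.deleted = False
        self.bomber_log = []

    def boom(self):
        self.deleted = True  # stand-in for qdel()

    def take_damage(self):
        self.boom()  # damage prematurely triggers the explosion

    def bullet_act_buggy(self, shooter):
        self.take_damage()           # parent behavior: explodes, "qdels" us
        if self.deleted:             # QDELETED() check: now trips every time
            return
        self.bomber_log.append(shooter)  # unreachable: bomber never logged

    def bullet_act_fixed(self, shooter):
        self.bomber_log.append(shooter)  # log the bomber first...
        self.take_damage()               # ...then let the tank explode
```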

Why It's Good For The Game


Intended behavior actually works.

Changelog

🆑
admin: Shooting a welding fuel tank (big or small) with a projectile will now accurately post to list_bombers with the name of the person who shot the projectile from the gun. If you don't know how to list-bombers, it will also be present in game.log for your viewing pleasure.
/🆑


Sunday 2023-07-09 23:23:32 by necromanceranne

Replaces the Reaper Scythe with the Vorpal Scythe (also the Morbid trait) (#75948)

adds the Vorpal Scythe, a special chaplain null rod variant, replacing the Reaper Scythe, a not so special null rod variant.

When you choose the vorpal scythe, it comes as a shard that you implant into your arm, similar to a cursed katana.

Once implanted, you can draw it at any time like an arm implant. However, sheathing it again presents some problems. (Also, implanting the organ gives you TRAIT_MORBID, which I'll explain in a bit)

The Vorpal Scythe has 10 force, one of the weakest null rod variants for force that isn't a joke null rod. However, it has exceptional armor pen and also has 2 tiles of reach. So quite unique.

It also has a special beheading ability when you right-click someone. This borrows some code from amputation shears, functioning pretty similarly, except with a few additional ways to speed up the action and restrictions. (It takes 15 seconds baseline to behead someone standing and conscious, and speeds up or slows down based on factors such as incapacitation and whether or not our scythe is already empowered)

When you successfully behead someone with a mind, the vorpal scythe gains 20 force and can be safely stowed and drawn for 2 minutes. Performing more death knells like this will reset the timer.

If it has not performed its 'death knell', or you haven't hit a living mob, then it will cause severe damage to you if you ever try and stow it (or its forced back into your arm). Just hitting a mob with the scythe will sate it for 4 minutes. Unless it is a non-player monkey. Horrible things. Just hitting mobs does not reset the timer on empowerment.

What this means is that the chaplain may be more hesitant to simply draw their weapon on people. It also means that potentially, the chaplain will not always have magic immunity, since they may end up stowing the weapon away and be reluctant to draw it on a whim without either taking damage for sheathing it without hitting something, or dealing with having one less hand up until they can.

While empowerment only happens when you behead mobs with a mind, beheading monkeyhumans and other mindless humans subtypes causes their heads to become haunted! It's mostly harmless and largely just SpOoKy. We don't want heads with actual players in them to go floating off to space. (Does not work on monkey heads for sanity reasons)

When you have the Morbid trait, you think creepy stuff is cool and hate saving peoples lives. You get a mood boost from graverobbing, autopsies, dissections, amputations (including beheadings with the scythe and amputations with the shears) and revival surgery. However, you get a mood penalty when you tend wounds on the living, as well as a hefty penalty when you perform CPR or defibrillate someone. I was thinking Victor Frankenstein when I was choosing which actions had an associated moodlet, so anything that I might have missed would be appreciated.

You also count as potentially cool with regards to haunted objects. Ghosts think you're neat. (Revenants probably will still kill you if they had the chance)


Sunday 2023-07-09 23:45:39 by SapphicOverload

IPC tweaks (#19467)

  • funny tv head robot go brrrrr

  • Update IPC.dm

  • not that fast

  • fuck it we ball


< 2023-07-09 >