< 2021-02-15 >

2,779,957 events, 1,346,559 push events, 2,112,210 commit messages, 163,074,307 characters

Monday 2021-02-15 00:12:49 by hawksbydesign

Adds fonts, sets up basic electron shit

it's 12:12 in the morning and I'm so fucking tired


Monday 2021-02-15 01:41:36 by James Elliott

Add initial_mrkdwn and replace_assets options. (#9)

  • Add initial_mrkdwn and replace_assets options.

These are the last two features I need for this action to be perfect for my current continuous deployment flow! I want to be able to specify markdown to use as a template when creating the initial release, but then be able to hand-edit it later, and be sure that my edits will not be destroyed by future action runs. This new option is a backwards-compatible way of achieving that. The old body_mrkdwn option works the same as it always has, but you can now specify an initial_mrkdwn value that is used only when creating a new release. If you don't specify initial_mrkdwn, then body_mrkdwn is used instead for the new release, and if neither was given, your standard fallback string is still used.

But if you specify only initial_mrkdwn and no body_mrkdwn, then the markdown text is only used for creating new releases, and existing release descriptions are left unmodified.

The second argument, replace_assets, controls whether to do the new feature you added this weekend. My jobs run in two different modes, because they are based on the Maven package management system which is the foundation of the Java world. Snapshot releases are works in progress, and can be released many times. Their assets get updated with each release. But a non-snapshot release is final, and its artifacts never change. So I want to replace assets when building snapshot releases, but flag it as an error if anyone ever attempts to replace assets for a non-snapshot release. And my jobs can determine the kind of release from the most recent git tag.
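The precedence between the two markdown options can be sketched as follows (a minimal sketch in Python, not the action's actual implementation; the option names come from the description above, and the fallback string is a placeholder):

```python
def release_body(is_new_release, body_mrkdwn=None, initial_mrkdwn=None,
                 fallback="Release created automatically."):
    """Pick the markdown body for a release, per the precedence described above."""
    if is_new_release:
        # New releases prefer initial_mrkdwn, then body_mrkdwn, then the fallback.
        return initial_mrkdwn or body_mrkdwn or fallback
    # Existing releases: only body_mrkdwn may overwrite the description.
    # With only initial_mrkdwn set, hand-edited descriptions are left alone.
    return body_mrkdwn
```

A `None` return for an existing release means "leave the current description unmodified", which is what protects hand edits from future action runs.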

  • Remove redundant text

Sorry, I was up too late!

Co-Authored-By: Xotl [email protected]

  • Fix input name

Nice catch, I should have reviewed this the next morning.

Co-Authored-By: Xotl [email protected]

Co-authored-by: Xotl [email protected]


Monday 2021-02-15 02:01:38 by Jean-Paul R. Soucy

New data: 2021-02-14: See data notes.

Revise historical data: cases (AB, MB, ON, SK).

Distributed vaccine numbers from PHAC up to February 11 were updated today. They have been added retroactively.

Toronto (ON) did not report any cases or deaths today due to ongoing issues regarding the transition of their data system.

Note regarding deaths added in QC today: “The data also report 15 new deaths, but the total of deaths amounts to 10,214 due to the withdrawal of 2 deaths that the investigation has shown not to be attributable to COVID-19. Among these 15 deaths, 2 have occurred in the last 24 hours, 11 have occurred between February 7 and February 12 and 2 have occurred before February 7.” We report deaths such that our cumulative regional totals match today’s values. This sometimes results in extra deaths with today’s date when older deaths are removed.

Recent changes:

2021-01-27: Due to the limit on file sizes in GitHub, we implemented some changes to the datasets today, mostly impacting individual-level data (cases and mortality). Changes below:

  1. Individual-level data (cases.csv and mortality.csv) have been moved to a new directory in the root directory entitled “individual_level”. These files have been split by calendar year and named as follows: cases_2020.csv, cases_2021.csv, mortality_2020.csv, mortality_2021.csv. The directories “other/cases_extra” and “other/mortality_extra” have been moved into the “individual_level” directory.
  2. Redundant datasets have been removed from the root directory. These files include: recovered_cumulative.csv, testing_cumulative.csv, vaccine_administration_cumulative.csv, vaccine_distribution_cumulative.csv, vaccine_completion_cumulative.csv. All of these datasets are currently available as time series in the directory “timeseries_prov”.
  3. The file codebook.csv has been moved to the directory “other”.

We appreciate your patience and hope these changes cause minimal disruption. We do not anticipate making any other breaking changes to the datasets in the near future. If you have any further questions, please open an issue on GitHub or reach out to us by email at ccodwg [at] gmail [dot] com. Thank you for using the COVID-19 Canada Open Data Working Group datasets.

  • 2021-01-24: The columns "additional_info" and "additional_source" in cases.csv and mortality.csv have been abbreviated similarly to "case_source" and "death_source". See note in README.md from 2020-11-27 and 2021-01-08.

Vaccine datasets:

  • 2021-01-19: Fully vaccinated data have been added (vaccine_completion_cumulative.csv, timeseries_prov/vaccine_completion_timeseries_prov.csv, timeseries_canada/vaccine_completion_timeseries_canada.csv). Note that this value is not currently reported by all provinces (some provinces have all 0s).
  • 2021-01-11: Our Ontario vaccine dataset has changed. Previously, we used two datasets: the MoH Daily Situation Report (https://www.oha.com/news/updates-on-the-novel-coronavirus), which is released weekdays in the evenings, and the “COVID-19 Vaccine Data in Ontario” dataset (https://data.ontario.ca/dataset/covid-19-vaccine-data-in-ontario), which is released every day in the mornings. Because the Daily Situation Report is released later in the day, it has more up-to-date numbers. However, since it is not available on weekends, this leads to an artificial “dip” in numbers on Saturday and “jump” on Monday due to the transition between data sources. We will now exclusively use the daily “COVID-19 Vaccine Data in Ontario” dataset. Although our numbers will be slightly less timely, the daily values will be consistent. We have replaced our historical dataset with “COVID-19 Vaccine Data in Ontario” as far back as they are available.
  • 2020-12-17: Vaccination data have been added as time series in timeseries_prov and timeseries_hr.
  • 2020-12-15: We have added two vaccine datasets to the repository, vaccine_administration_cumulative.csv and vaccine_distribution_cumulative.csv. These data should be considered preliminary and are subject to change and revision. The format of these new datasets may also change at any time as the data situation evolves.

https://www.quebec.ca/en/health/health-issues/a-z/2019-coronavirus/situation-coronavirus-in-quebec/#c47900

Note about SK data: As of 2020-12-14, we are providing a daily version of the official SK dataset that is compatible with the rest of our dataset in the folder official_datasets/sk. See below for information about our regular updates.

SK transitioned to reporting according to a new, expanded set of health regions on 2020-09-14. Unfortunately, the new health regions do not correspond exactly to the old health regions. Additionally, case time series using the new boundaries are not available for dates earlier than August 4, making it impossible to provide a complete time series using the new boundaries.

For now, we are adding new cases according to the list of new cases given in the “highlights” section of the SK government website (https://dashboard.saskatchewan.ca/health-wellness/covid-19/cases). These new cases are roughly grouped according to the old boundaries. However, health region totals were redistributed when the new boundaries were instituted on 2020-09-14, so while our daily case numbers match the numbers given in this section, our cumulative totals do not. We have reached out to the SK government to determine how this issue can be resolved. We will rectify our SK health region time series as soon as it becomes possible to do so.


Monday 2021-02-15 06:41:52 by Maldaris

Shuttle in a Box (#760)

  • add shuttle components to cargo crates under engineering

  • money money money motherfucker

  • fuck you zanos

  • a capitalized letter

  • more things

Co-authored-by: Shayne Fitzgerald [email protected]


Monday 2021-02-15 07:01:00 by Hazim Arafa

stupid ass f2 fukin dumb unicode shit ass lookin bubblegum dum dum pickle chin lookin head ass


Monday 2021-02-15 07:40:17 by Kabraxis

EAT THIS BOB!! :D :D Running my code through Hyperskill got me a failure because I didn't read the description well enough. The task was to recognize the parameters and their values, but I thought there would only be the values of pass, port, cookie and host. As they checked my code to recognize the parameter "name" and its value "Bob"... well, one month of coding and thinking felt wasted at first. I was disappointed. But I remembered that learning to code is a journey, a hard one, and my time was never wasted, since I learned so much even if it led to a failure at first. Then I felt challenged. I didn't want to let this - to be honest, nonexistent - guy Bob win over me. Not after the amount of time I put into this coding challenge. Not after coding for more than a year now. So I got my stuff together and rewrote the WHOLE code in under three hours (~2h40m), made it even better than ever ever ever before and BOOM baby! I mastered this challenge 8-)


Monday 2021-02-15 07:53:26 by zeebe-bors[bot]

Merge #6322 #6324

6322: chore(deps): update version.testcontainers to v1.15.2 r=npepinpe a=renovate[bot]

WhiteSource Renovate

This PR contains the following updates:

| Package | Change |
| --- | --- |
| org.testcontainers:toxiproxy (source) | 1.15.1 -> 1.15.2 |
| org.testcontainers:elasticsearch (source) | 1.15.1 -> 1.15.2 |
| org.testcontainers:testcontainers (source) | 1.15.1 -> 1.15.2 |

Release Notes

testcontainers/testcontainers-java

Compare Source

What's Changed
  • What does 1984 mean to you? To us, this number means PR #1984, one of the oldest PRs we had open and... finally merged! 😅 Thanks to an amazing contribution by @seglo, we now provide an example of testing Kafka clusters where multiple KafkaContainers are connected into one network. Try it!
  • Another "old" PR is #3180 by @oussamabadr. Those of you who run Selenium tests with Testcontainers will appreciate this newly added option to use the (scrollable!) MP4 format instead of FLV.
  • The connection with Ryuk (our watchdog sidecar container) now sets the socket timeout and retries failures - this helps with some rare networking edge cases. (#3682) @diegolovison
  • The logs consumer no longer adds extra new lines, thanks to #3752 by @perlun
  • Locally built images no longer get affected by the hub.image.name.prefix setting! (#3666) @reda-alaoui
  • The Jackson dependency is now forced to an older version to help with NoClassDefFoundError (com/fasterxml/jackson/annotation/JsonMerge).

And more!

🚀 Features & Enhancements
🐛 Bug Fixes
📖 Documentation
🧹 Housekeeping
📦 Dependency updates

Renovate configuration

📅 Schedule: At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

♻️ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about these updates again.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by WhiteSource Renovate. View repository job log here.

6324: chore(deps): update dependency org.codehaus.mojo:animal-sniffer-annotations to v1.20 r=npepinpe a=renovate[bot]

WhiteSource Renovate

This PR contains the following updates:

| Package | Change |
| --- | --- |
| org.codehaus.mojo:animal-sniffer-annotations | 1.19 -> 1.20 |

Renovate configuration

📅 Schedule: At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

♻️ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by WhiteSource Renovate. View repository job log here.

Co-authored-by: Renovate Bot [email protected]


Monday 2021-02-15 08:03:46 by Ryan Kelly

A little experiment with macros instead of build scripts.

This commit is basically me messing around with procedural macros, to try them out as a potential future iteration of the UniFFI developer experience.

As a crate author, I think it would be amazing not to have to worry about .udl files and build scripts and what-not, but instead be able to declare my API with a macro and have Rust just take care of the details. This commit is a tiny experiment in that direction, based on existing features of UniFFI.

The idea is that I can wrap my Rust code in a little macro like this:

```rust
#[uniffi_macros::declare_interface]
pub mod my_component {
    // ...my rust code goes here...
}
```

And I can provide a corresponding my_component.udl file that declares the interface, and all the Rust scaffolding code just gets taken care of automagically.

I still have to provide a .udl file, but I hope you can imagine one day the possibility of generating the interface definition directly from the Rust code that is decorated by the macro. Not any time soon! But one day...


Monday 2021-02-15 08:04:49 by Ryan Kelly

A little experiment with macros instead of build scripts.

This commit is basically me messing around with procedural macros, to try them out as a potential future iteration of the UniFFI developer experience.

As a crate author, I think it would be amazing not to have to worry about .udl files and build scripts and what-not, but instead be able to declare my API with a macro and have Rust just take care of the details. This commit is a tiny experiment in that direction, based on existing features of UniFFI.

The idea is that I can wrap my Rust code in a little macro like this:

```rust
#[uniffi_macros::declare_interface]
pub mod my_component {
    // ...my rust code goes here...
}
```

And I can provide a corresponding my_component.udl file that declares the interface, and all the Rust scaffolding code just gets taken care of automagically.

I still have to provide a .udl file, but I hope you can imagine one day the possibility of generating the interface definition directly from the Rust code that is decorated by the macro. Not any time soon! But one day...


Monday 2021-02-15 08:47:38 by Khoa Nguyen

NEW ITEMS WOO I LOVE MY LIFE THIS IS NOT TEDIOUS AT ALL


Monday 2021-02-15 09:04:55 by Douglas Anderson

pinctrl: qcom: Don't clear pending interrupts when enabling

In Linux, if a driver does disable_irq() and later does enable_irq() on its interrupt, I believe it's expecting these properties:

  • If an interrupt was pending when the driver disabled then it will still be pending after the driver re-enables.
  • If an edge-triggered interrupt comes in while an interrupt is disabled it should assert when the interrupt is re-enabled.

If you think that the above sounds a lot like disable_irq() and enable_irq() are supposed to be masking/unmasking the interrupt instead of disabling/enabling it, then you've made an astute observation. Specifically when talking about interrupts, "mask" usually means to stop posting interrupts but keep tracking them, and "disable" means to fully shut off interrupt detection. It's unfortunate that this is so confusing, but presumably it is the way it is for historical reasons.

Perhaps more confusing than the above is that, even though clients of IRQs themselves don't have a way to request mask/unmask vs. disable/enable calls, IRQ chips themselves can implement both. ...and yet more confusing is that if an IRQ chip implements disable/enable then they will be called when a client driver calls disable_irq() / enable_irq().

It does feel like some of the above could be cleared up. However, without any other core interrupt changes it should be clear that when an IRQ chip gets a request to "disable" an IRQ that it has to treat it like a mask of that IRQ.

In any case, after that long interlude you can see that the "unmask and clear" can break things. Maulik tried to fix it so that we no longer did "unmask and clear" in commit 71266d9d3936 ("pinctrl: qcom: Move clearing pending IRQ to .irq_request_resources callback"), but it only handled the PDC case and it had problems (it caused sc7180-trogdor devices to fail to suspend). Let's fix.

From my understanding, the sources of the phantom interrupts were these two things:

  1. One that could have been introduced in msm_gpio_irq_set_type() (only for the non-PDC case).
  2. Edges could have been detected when a GPIO was muxed away.

Fixing case #1 is easy. We can just add a clear in msm_gpio_irq_set_type().

Fixing case #2 is harder. Let's use a concrete example. In sc7180-trogdor.dtsi we configure the uart3 to have two pinctrl states, sleep and default, and mux between the two during runtime PM and system suspend (see geni_se_resources_{on,off}() for more details). The difference between the sleep and default state is that the RX pin is muxed to a GPIO during sleep and muxed to the UART otherwise.

As per Qualcomm, when we mux the pin over to the UART function the PDC (or the non-PDC interrupt detection logic) is still watching it / latching edges. These edges don't cause interrupts because the current code masks the interrupt unless we're entering suspend. However, as soon as we enter suspend we unmask the interrupt and it's counted as a wakeup.

Let's deal with the problem like this:

  • When we mux away, we'll mask our interrupt. This isn't necessary in the above case since the client already masked us, but it's a good idea in general.
  • When we mux back, we'll clear any interrupts and unmask.
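The two rules above can be illustrated with a toy model of an edge-detecting interrupt line (a hedged sketch of the semantics only, not the actual pinctrl-msm code; the class and method names are invented for illustration):

```python
class ToyIrqLine:
    """Toy model of an edge interrupt: the detection logic latches edges even
    while the pin is muxed away, so mux-back must clear before unmasking."""
    def __init__(self):
        self.pending = False   # latched edge waiting to be delivered
        self.masked = True
        self.fired = 0         # interrupts actually delivered

    def edge(self):
        self.pending = True    # hardware latches the edge regardless of mask
        self._deliver()

    def _deliver(self):
        if self.pending and not self.masked:
            self.fired += 1
            self.pending = False

    def unmask(self):
        self.masked = False
        self._deliver()        # a latched edge fires once unmasked

    def mux_away(self):
        self.masked = True     # rule 1: mask when the pin leaves GPIO mode

    def mux_back(self):
        self.pending = False   # rule 2: clear stale edges, then unmask
        self.unmask()

line = ToyIrqLine()
line.unmask()
line.mux_away()
line.edge()      # edge latched while the pin is muxed to the UART
line.mux_back()  # clear-then-unmask: the stale edge never becomes a wakeup
```

Without the clear in `mux_back()`, the latched edge would be delivered as a phantom interrupt the moment the line was unmasked, which is the suspend-wakeup symptom described above.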

Fixes: 4b7618fdc7e6 ("pinctrl: qcom: Add irq_enable callback for msm gpio")
Fixes: 71266d9d3936 ("pinctrl: qcom: Move clearing pending IRQ to .irq_request_resources callback")
Signed-off-by: Douglas Anderson [email protected]
Reviewed-by: Maulik Shah [email protected]
Tested-by: Maulik Shah [email protected]
Reviewed-by: Stephen Boyd [email protected]
Link: https://lore.kernel.org/r/20210114191601.v7.4.I7cf3019783720feb57b958c95c2b684940264cd1@changeid
Signed-off-by: Linus Walleij [email protected]


Monday 2021-02-15 11:30:45 by abnernat

refactor(.travis.yml): rename main branch to master, fuck your PC thing


Monday 2021-02-15 14:49:21 by Manuel Aristarán

Revert "fuck you quarantine"

This reverts commit a79bf733cbd0538a3e64f3723a61852a5b96be36.


Monday 2021-02-15 15:41:54 by DivineEntity01

Update Simple Swarm(Stable).lua

Added windy and vicious. Decided to release this script because it has all the features I wanted to implement in a Bee Swarm Simulator GUI, and all of them work as I expected. May or may not add other stuff in the future, probably not, cause I'm likely to get banned for releasing this script. Damn, please Onett don't ban me, I didn't use hacks in your game 😭 🙏. Ok, and so star catches is not so accurate, it catches like 1 or 2 lights; if you are lucky it will catch all of them. And for the autofarm, useless piece of shit, I tried everything but couldn't get the token collector to work correctly. Yes I know, this is practically yandere-dev code, and I don't really know a lot of Lua... IT'S DECENT OK? I TRIED MY BEST


Monday 2021-02-15 15:55:04 by Milo Weinberg

ok fuck it not gonna store token as a string fuck you mc


Monday 2021-02-15 16:13:20 by Wilmer Adalid (Alienware)

Updates for: "For the love of phlegm...a stupid wall of death rays. How tacky can ya get?" -- Post Brothers comics


Monday 2021-02-15 16:25:17 by Whizzer

rip_flavor_events

  • King Alfonso VI of León may now be recommended by his court physician to 'take it easy' outside the learning scenario (RIP.4010)
  • You can now attempt to fall in love with a close relative of a friend who died, if you are both okay with incest (RIP.29020), as well as when you are trying to dance with a courtier while one-legged (RIP.29400)
  • Event RIP.29801 and subsequent events can now fire for the AI
  • Eliminated RIP.30221 from the 'Chess with Death' event chain
  • Restored event title for RIP.30301 (immortal rival found you)


Monday 2021-02-15 17:54:33 by Francis Jeschke

Delete 035a - O LORD, I Love You, God, My Strength - Psalm 18


Monday 2021-02-15 19:52:56 by Chris Maguire

Allow effects to reference parents

Having each attack effect (e.g. slice, burn, shock, electrocute, poison) be its own process is a pain: what if an electric sword depends on a main effect (e.g. slice) connecting (successful hit roll) before it can affect the target? Plus, I want each application of an effect to be handled by its own process: I don't want a single one-per-weapon attack effect (e.g. poison) to be managing the effects of one poisoned sword on several different targets: if you hit something with a poisoned sword and the poison takes hold, then you have a process devoted solely to applying poison to that one target.

I was going to have one attack process configured to kick off multiple effects, but I think I'm going to have parent effects referenced by child effects. In that case I'll need each effect's properties in a dedicated process; however, if I want one process per affected target, then I need those effect processes to be prototype processes (or parents in another sense) where the "property holder process" spins off a copy of itself each time the effect is applied to a target. This means I need to have a prototype version of the effect process and an instance of the effect process.

So:

  1. player initiates attack on target "T" with an attack object/process, e.g. sword "S"

  2. Prototype cut effect "PC" sees an attack by its parent attack "S", creates an instance of itself "IC", and tells it to attack "T"

  3. If "IC" manages to hit "T": a) prototype poison effect "PP" sees that its parent effect, "IC", hit "T" and creates an instance of poison "IP"; b) "IP" calculates its own hit roll on "T"

In this way, I could have effects that are children of effects which are also children of other effects, creating a cascade of effects based on a single attack.

Each effect can calculate its own hit roll, thus allowing any other nearby process to mitigate that hit or effect.

One thing that will be difficult, but is probably over-engineering at this point, is responding to a combination of effects: e.g. if I get poisoned and burned then my magic ring kicks in and nullifies one of them

(Actually, that would be easy: the magic ring can listen to effect events and turn its logic on or off depending on its current state)
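The prototype-spawns-instance cascade described above can be sketched with plain objects standing in for the per-effect processes (a sketch only; the class names and the deterministic hit chance are invented for illustration):

```python
import random

class PrototypeEffect:
    """Prototype effect: watches a parent (attack or effect) and spawns a
    fresh instance of itself per target when the parent connects."""
    def __init__(self, name, hit_chance, children=()):
        self.name = name
        self.hit_chance = hit_chance
        self.children = list(children)   # dependent prototype effects

    def on_parent_hit(self, target, log):
        EffectInstance(self, target).attack(log)

class EffectInstance:
    """One instance per affected target, devoted solely to that target."""
    def __init__(self, prototype, target):
        self.prototype = prototype
        self.target = target

    def attack(self, log):
        # Each instance calculates its own hit roll.
        if random.random() < self.prototype.hit_chance:
            log.append((self.prototype.name, self.target))
            for child in self.prototype.children:  # cascade to dependents
                child.on_parent_hit(self.target, log)

# Sword "S": a cut effect with a dependent poison effect (hit_chance=1.0
# makes the sketch deterministic).
poison = PrototypeEffect("poison", hit_chance=1.0)
cut = PrototypeEffect("cut", hit_chance=1.0, children=[poison])
```

Chaining `children` lists is what allows effects that are children of effects which are themselves children of other effects, all cascading from a single attack.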


Monday 2021-02-15 19:52:56 by Chris Maguire

WIP: moving most logic from attack to effect

Moving the attacks over to various effects still makes sense; however, moving resource allocation doesn't: are there any effects of an attack that have different resources? If so, how do we coordinate them? If the "slice" effect of a sword has the resources it needs, does it wait for whatever resources "shock", "burn" or "poison" might need? (A creative person could always come up with a scenario where something like this could be possible.) Having a parent effect wait for the resources required for a dependent effect sounds like over-engineering.

My go-to example is a flaming sword: it has a main effect of slicing (or whatever), but it has a secondary, dependent effect of burning (if you don't hit with the sword then you also don't burn with it).

Does the "burn" effect have it's own resources? Does it need to request resources every time it applies itself (if it's an on-going/repeating effect)?

I don't think so. The idea of an "attack" is that it's some kind of offensive action like a weapon, or a spell. We can't activate the offensive without the appropriate resources required to kick off any of the effects. So, the attack should wait for resources (e.g. stamina and focus for a sword) and then tell the effects to kick off. Dependent effects will wait for parent effects to hit first before kicking into action.

So, I'm going to move the resources back to the attack handler from the effect handler.

Having effects handle their own rolls to hit and calculate effect amount, and having the hits cascade (i.e. the dependency: if A hits then B might hit) still makes sense.

NOTE:

I need to remember that I'm ignoring time, CPU and transactionality: things don't have to happen exactly in order and groups of effects don't need to be atomic: if you manage to shock something twice when you've only hit it with a sword once, well good for you. Life is messy.
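The division of labour settled on above — the attack gathers resources once, effects only roll hits and cascade — might look like this (a sketch; the resource names and costs are examples, not anything from the actual codebase):

```python
class Effect:
    """An effect rolls its own hit; dependent effects fire only if it hit."""
    def __init__(self, name, hits_target, children=()):
        self.name = name
        self.hits_target = hits_target   # stand-in for a real hit roll
        self.children = list(children)

    def apply(self, target, hits):
        if self.hits_target:
            hits.append(self.name)
            for child in self.children:  # "burn" only applies if "slice" hit
                child.apply(target, hits)

class Attack:
    """The attack waits for its resources once, then kicks off its effects."""
    def __init__(self, costs, effects):
        self.costs = costs               # e.g. {"stamina": 5, "focus": 2}
        self.effects = effects

    def execute(self, pool, target):
        if any(pool.get(r, 0) < c for r, c in self.costs.items()):
            return []                    # not enough resources: nothing fires
        for r, c in self.costs.items():
            pool[r] -= c                 # pay once, at the attack level
        hits = []
        for effect in self.effects:
            effect.apply(target, hits)
        return hits

burn = Effect("burn", hits_target=True)
flaming_sword = Attack({"stamina": 5}, [Effect("slice", True, [burn])])
```

Note that no individual effect ever touches the resource pool: the burn effect never requests resources of its own, matching the decision to move resources back to the attack handler.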


Monday 2021-02-15 20:12:21 by IgnacioCarol

Rename a file that was misnamed before

fuck you licha


Monday 2021-02-15 20:31:27 by ahd

finally a responsive layout, omg i spent 6 hours on this i hate myself


Monday 2021-02-15 20:38:14 by Dan Church

cron: Fix clean-chrome-tmp.sh

Wasn't properly deleting all it should.

Changes it to assume if there isn't a socket in the dir, it's not in use.

Fuck you chrome.


Monday 2021-02-15 20:38:56 by Andrew Lamb

ARROW-11289: [Rust][DataFusion] Implement GROUP BY support for Dictionary Encoded columns

This PR adds support for GROUP BY for columns of Dictionary type.

The code basically just follows the pattern (aka is mostly copy/paste) from the take kernel: https://github.com/apache/arrow/blob/master/rust/arrow/src/compute/kernels/cast.rs#L294

I chose the "correct first, optimize later" approach here -- there are many ways to make this code faster, especially when grouping on string types.

It feels like a lot of copy/paste in my mind, and I would love any thoughts / suggestions about refactoring out the recurring pattern of switch and dispatch for structured types (like Dictionary and List)
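The core idea — a dictionary-encoded column stores each distinct value once and each row holds a small index into that dictionary, so grouping means resolving index to value before hashing into a group — can be sketched like this (a minimal sketch in Python, not the actual DataFusion/Arrow Rust code; the example data is invented):

```python
from collections import defaultdict

def group_by_dict_column(keys, values, measures):
    """Group-by over a dictionary-encoded column: `keys` are per-row indices
    into `values` (the dictionary); `measures` holds one value per row."""
    groups = defaultdict(list)
    for key, measure in zip(keys, measures):
        groups[values[key]].append(measure)  # resolve index -> value, group
    return {group: sum(ms) for group, ms in groups.items()}

# "MA" and "CA" are stored once in the dictionary; rows store small indices.
totals = group_by_dict_column(
    keys=[0, 1, 0, 1, 1],
    values=["MA", "CA"],
    measures=[1, 2, 3, 4, 5],
)
```

The "correct first, optimize later" framing in the commit fits this shape: an obvious later optimization is to hash the small dictionary once and group on the integer keys directly instead of resolving every row to its string.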

Closes #9233 from alamb/alamb/gby_dicts

Authored-by: Andrew Lamb [email protected] Signed-off-by: Andrew Lamb [email protected]


Monday 2021-02-15 20:45:28 by Cosma George

Implemented SubCommands and SubCommandGroups properly.

Details: To implement them I had to get creative. The first thing I did was manually register a command that uses sub commands and sub command groups. Two things I noticed immediately:

  1. I can create a subcommand on a "root" command - where no SubCommandGroup is used
  2. The current implementation of the Interactions doesn't know what type of value an option is. The good thing is that there is only 1 option when querying subcommands and subcommand groups, so I can find out what the "path" of the subcommand is: TOP/root/rng, TOP/root/usr/zero, TOP/root/usr/johnny (I misspelled it in the source files, woops). [See SlashCommandsExample/DiscordClient.cs]

Next I wanted command groups (I'll use this term to mean a slash command with subcommands and regular slash command groups) to be implemented in code in a sort of hierarchical manner - so I made them classes with attributes. Unfortunately, to make this work I had to make them re-inherit the same things as the base module - UGLY, but I see no other option to do this other than making them inherit from another class that remembers the instance of the upper class and implements the same methods, aka a whole mess that I decided I didn't want to partake in. [See SlashCommandsExample/Modules/DevModule.cs]

Next up was searching for these sub-groups. I decided that the most intuitive way of implementing these was to make SlashModuleInfo have children and a parent of the same type -- from which arose different problems, but we'll get to that. So I gave them some children, a parent, and a reference to the CommandGroup attribute they have on themselves. The boolean isCommandGroup is unused, but could be useful in the future... maybe. I've also added a path variable to internally store structure. I wanted (after the whole reflections business) for commands to be easily accessed and dealt with using NO REFLECTION, because reflection is slow, so I changed the final string - SlashCommandInfo dictionary to contain paths instead of command infos, something like what I exemplified above.

In any case, I edited the service helper (the search-for-modules method) to ignore command groups and only store top-level commands. After that I made a command to instantiate command groups, and command creation and registration were changed to be recursive - because recursion is the simplest way to do this and it's efficient enough for what we want - we only run this once anyway.

The biggest change was with command building - commands no longer build themselves; instead we command each module to build itself. There are 3 cases: top-level commands, top-level subcommands (or level-1 command groups), and subcommands within slash command groups.

The code is uncommented and untidy, and I'll fix that in a future commit. One last thing to note is that SlashCommands can have 0 options! - fixed that bug. Also, SlashCommandBuilder.WithName() for some reason was implemented wrongly - I presume a copy-paste error. Also, I implemented 0 types of enforcing rules - I'm going to leave that to other people to do.
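The path-keyed dispatch described above (resolving a subcommand invocation to a handler without reflection at call time) can be sketched like this (a sketch in Python rather than the project's C#; the class name is invented, and the path strings follow the TOP/root/... examples given):

```python
class SlashCommandRegistry:
    """Resolve subcommand invocations via precomputed path strings, so the
    slow reflection work happens once at registration, not per dispatch."""
    def __init__(self):
        self.handlers = {}

    def register(self, path, handler):
        # Paths encode the hierarchy, e.g. "TOP/root/usr/zero" for a
        # subcommand "zero" inside group "usr" under top-level command "root".
        self.handlers[path] = handler

    def dispatch(self, root, *subpath):
        path = "/".join(("TOP", root) + subpath)
        handler = self.handlers.get(path)
        if handler is None:
            raise KeyError(f"no command registered at {path}")
        return handler()

registry = SlashCommandRegistry()
registry.register("TOP/root/rng", lambda: "rolled")
registry.register("TOP/root/usr/zero", lambda: "user zero")
```

This mirrors the decision to store paths instead of command infos in the final dictionary: the one option present on a subcommand interaction is enough to rebuild the path and look the handler up directly.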


Monday 2021-02-15 21:55:40 by ochlocracy

A lot of random changes. My commit comments suck and I will hate myself later for it.


Monday 2021-02-15 22:44:13 by Jonathan

Changed some Fmod shit

Alkstrand wanted me to push this though there is basically no difference. My life is pain, I am in perpetual torment, help help help


Monday 2021-02-15 23:37:16 by Fuglore

your skin is freezing...

HERE, LET ME HELP YOU TAKE IT OFF

  • fixed code being suck ass bad ugly

< 2021-02-15 >