2,779,957 events, 1,346,559 push events, 2,112,210 commit messages, 163,074,307 characters
Adds fonts, sets up basic electron shit
it's 12:12 in the morning and I'm so fucking tired
Add initial_mrkdwn and replace_assets options. (#9)
- Add initial_mrkdwn and replace_assets options.
These are the last two features I need for this action to be perfect
for my current continuous deployment flow! I want to be able to
specify markdown to use as a template when creating the initial
release, but then be able to hand-edit it later, and be sure that my
edits will not be destroyed by future action runs. This new option is
a backwards-compatible way of achieving that. The old body_mrkdwn option works the same as it always has, but you can now specify an initial_mrkdwn value that is used only when creating a new release.
If you don't specify initial_mrkdwn, then body_mrkdwn is used instead for the new release, and if neither was given, your standard fallback string is still used.
But if you specify only initial_mrkdwn and no body_mrkdwn, then the markdown text is only used for creating new releases, and existing release descriptions are left unmodified.
The second argument, replace_assets, controls whether to do the new feature you added this weekend. My jobs run in two different modes, because they are based on the Maven package management system, which is the foundation of the Java world. Snapshot releases are works in progress and can be released many times; their assets get updated with each release. But a non-snapshot release is final, and its artifacts never change. So I want to replace assets when building snapshot releases, but flag it as an error if anyone ever attempts to replace assets for a non-snapshot release. And my jobs can determine the kind of release from the most recent git tag.
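The precedence among these options can be sketched like this (a hypothetical helper for illustration; the function name and signature are invented, not the action's actual source):

```python
def release_body(initial_mrkdwn, body_mrkdwn, fallback, creating_new_release):
    """Pick the markdown body for a GitHub release.

    Mirrors the precedence described above; all names here are
    illustrative, not the action's real internals.
    """
    if creating_new_release:
        # For a brand-new release: initial_mrkdwn wins, then body_mrkdwn,
        # then the standard fallback string.
        return initial_mrkdwn or body_mrkdwn or fallback
    # For an existing release, only body_mrkdwn may overwrite the
    # description; if only initial_mrkdwn was given, leave it untouched.
    return body_mrkdwn  # None means "do not modify"

# Only initial_mrkdwn given: new releases get it, existing ones stay as-is.
print(release_body("v1 notes", None, "n/a", creating_new_release=True))   # v1 notes
print(release_body("v1 notes", None, "n/a", creating_new_release=False))  # None
```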
- Remove redundant text
Sorry, I was up too late!
Co-Authored-By: Xotl [email protected]
- Fix input name
Nice catch, I should have reviewed this the next morning.
Co-Authored-By: Xotl [email protected]
Co-authored-by: Xotl [email protected]
New data: 2021-02-14: See data notes.
Revise historical data: cases (AB, MB, ON, SK).
Distributed vaccine numbers from PHAC up to February 11 were updated today. They have been added retroactively.
Toronto (ON) did not report any cases or deaths today due to ongoing issues regarding the transition of their data system.
Note regarding deaths added in QC today: “The data also report 15 new deaths, but the total of deaths amounts to 10,214 due to the withdrawal of 2 deaths that the investigation has shown not to be attributable to COVID-19. Among these 15 deaths, 2 have occurred in the last 24 hours, 11 have occurred between February 7 and February 12 and 2 have occurred before February 7.” We report deaths such that our cumulative regional totals match today’s values. This sometimes results in extra deaths with today’s date when older deaths are removed.
Recent changes:
2021-01-27: Due to the limit on file sizes in GitHub, we implemented some changes to the datasets today, mostly impacting individual-level data (cases and mortality). Changes below:
- Individual-level data (cases.csv and mortality.csv) have been moved to a new directory in the root directory entitled “individual_level”. These files have been split by calendar year and named as follows: cases_2020.csv, cases_2021.csv, mortality_2020.csv, mortality_2021.csv. The directories “other/cases_extra” and “other/mortality_extra” have been moved into the “individual_level” directory.
- Redundant datasets have been removed from the root directory. These files include: recovered_cumulative.csv, testing_cumulative.csv, vaccine_administration_cumulative.csv, vaccine_distribution_cumulative.csv, vaccine_completion_cumulative.csv. All of these datasets are currently available as time series in the directory “timeseries_prov”.
- The file codebook.csv has been moved to the directory “other”.
We appreciate your patience and hope these changes cause minimal disruption. We do not anticipate making any other breaking changes to the datasets in the near future. If you have any further questions, please open an issue on GitHub or reach out to us by email at ccodwg [at] gmail [dot] com. Thank you for using the COVID-19 Canada Open Data Working Group datasets.
- 2021-01-24: The columns "additional_info" and "additional_source" in cases.csv and mortality.csv have been abbreviated similarly to "case_source" and "death_source". See notes in README.md from 2020-11-27 and 2021-01-08.
Vaccine datasets:
- 2021-01-19: Fully vaccinated data have been added (vaccine_completion_cumulative.csv, timeseries_prov/vaccine_completion_timeseries_prov.csv, timeseries_canada/vaccine_completion_timeseries_canada.csv). Note that this value is not currently reported by all provinces (some provinces have all 0s).
- 2021-01-11: Our Ontario vaccine dataset has changed. Previously, we used two datasets: the MoH Daily Situation Report (https://www.oha.com/news/updates-on-the-novel-coronavirus), which is released weekdays in the evenings, and the “COVID-19 Vaccine Data in Ontario” dataset (https://data.ontario.ca/dataset/covid-19-vaccine-data-in-ontario), which is released every day in the mornings. Because the Daily Situation Report is released later in the day, it has more up-to-date numbers. However, since it is not available on weekends, this leads to an artificial “dip” in numbers on Saturday and “jump” on Monday due to the transition between data sources. We will now exclusively use the daily “COVID-19 Vaccine Data in Ontario” dataset. Although our numbers will be slightly less timely, the daily values will be consistent. We have replaced our historical dataset with “COVID-19 Vaccine Data in Ontario” as far back as they are available.
- 2020-12-17: Vaccination data have been added as time series in timeseries_prov and timeseries_hr.
- 2020-12-15: We have added two vaccine datasets to the repository, vaccine_administration_cumulative.csv and vaccine_distribution_cumulative.csv. These data should be considered preliminary and are subject to change and revision. The format of these new datasets may also change at any time as the data situation evolves.
Note about SK data: As of 2020-12-14, we are providing a daily version of the official SK dataset that is compatible with the rest of our dataset in the folder official_datasets/sk. See below for information about our regular updates.
SK transitioned to reporting according to a new, expanded set of health regions on 2020-09-14. Unfortunately, the new health regions do not correspond exactly to the old health regions. Additionally, case time series using the new boundaries do not exist for dates earlier than August 4, making it impossible to provide a complete time series using the new boundaries.
For now, we are adding new cases according to the list of new cases given in the “highlights” section of the SK government website (https://dashboard.saskatchewan.ca/health-wellness/covid-19/cases). These new cases are roughly grouped according to the old boundaries. However, health region totals were redistributed when the new boundaries were instituted on 2020-09-14, so while our daily case numbers match the numbers given in this section, our cumulative totals do not. We have reached out to the SK government to determine how this issue can be resolved. We will rectify our SK health region time series as soon as it becomes possible to do so.
Shuttle in a Box (#760)
- add shuttle components to cargo crates under engineering
- money money money motherfucker
- fuck you zanos
- a capitalized letter
- more things
Co-authored-by: Shayne Fitzgerald [email protected]
stupid ass f2 fukin dumb unicode shit ass lookin bubblegum dum dum pickle chin lookin head ass
EAT THIS BOB!! :D :D Running my code through Hyperskill got me a failure because I didn't read the description well enough. The task was to recognize the parameters and their values, but I thought there would only be the values of pass, port, cookie and host. As they checked my code to recognize the parameter "name" and its value "Bob"... well, one month of coding and thinking felt wasted at first. I was disappointed. But I remembered that learning to code is a journey, a hard one, and my time was never wasted, since I learned so much even if it led to a failure at first. Then I felt challenged. I didn't want to let this (to be honest, non-existent) guy Bob win over me. Not after the amount of time I put into this coding challenge. Not after coding for more than a year now. So I got my stuff together, rewrote the WHOLE code in under three hours (~2h40m), made it better than ever before, and BOOM baby! I mastered this challenge 8-)
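For what it's worth, this kind of parsing task generalizes nicely with the standard library, so an unexpected key like name=Bob comes along for free (the URL below is made up for illustration):

```python
from urllib.parse import urlparse, parse_qs

# A made-up URL in the spirit of the exercise: parse *all* parameters,
# not just the expected ones (pass, port, cookie, host) -- a surprise
# key like name=Bob must come through too.
url = "http://example.com/login?host=127.0.0.1&port=8080&name=Bob&pass=hunter2"
params = {k: v[0] for k, v in parse_qs(urlparse(url).query).items()}
print(params["name"])  # Bob
print(params["pass"])  # hunter2
```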
Merge #6322 #6324
6322: chore(deps): update version.testcontainers to v1.15.2 r=npepinpe a=renovate[bot]
This PR contains the following updates:
Package | Change | Age | Adoption | Passing | Confidence |
---|---|---|---|---|---|
org.testcontainers:toxiproxy (source) | 1.15.1 -> 1.15.2 | | | | |
org.testcontainers:elasticsearch (source) | 1.15.1 -> 1.15.2 | | | | |
org.testcontainers:testcontainers (source) | 1.15.1 -> 1.15.2 | | | | |
testcontainers/testcontainers-java
- What does 1984 mean to you? To us, this number means PR #1984, one of the oldest PRs we had open and... finally merged! 😅 Thanks to an amazing contribution by @seglo, we now provide an example of testing Kafka clusters where multiple KafkaContainers are connected into one network. Try it!
- Another "old" PR is #3180 by @oussamabadr. Those of you who run Selenium tests with Testcontainers will appreciate this newly added option to use (scrollable!) MP4 format instead of FLV.
- The connection with Ryuk (our watchdog sidecar container) now sets the socket timeout and retries the failures - helps with some rare networking edge cases. (#3682) @diegolovison
- The logs consumer no longer adds extra new lines thanks to #3752 by @perlun
- Locally built images no longer get affected by the hub.image.name.prefix setting! (#3666) @reda-alaoui
- Jackson dependency is now forced to an older version to help with NoClassDefFoundError (com/fasterxml/jackson/annotation/JsonMerge).
And more!
- Switch to Presto image hosted on GHCR (#3667) @findepi
- Implement getDatabaseName() in CockroachContainer (#3778) @croemmich
- Make recorder .flv videos scrollable (#512) (#3180) @oussamabadr
- Support HTTP headers on HttpWaitStrategy (#2549) @renatomefi
- Show port mappings in HttpWaitStrategy (#2341) @aguibert
- Support newer versions of CockroachDB by changing the docker command (#3608) @giger85
- Improve logging for port listener (#3736) @artjomka
- couchbase: wait until all services are part of the config (#3003) @daschl
- Support Ryuk socket timeout (#3682) @diegolovison
- Add init command parameter to Vault container (#3188) @tandrup
- Startables#deepStart with varargs (#3261) @jochenchrist
- Remove extra newlines in container log output (#3752) @perlun
- Fix handling of locally built images when used with hub.image.name.prefix (#3666) @reda-alaoui
- Kafka cluster example (#1984, #3758) @seglo
- Documentation PostGIS JDBC url sample version update (#3606) @aulea
- GenericContainer: fix typo in Javadoc (#3684) @perlun
- Added documentation for Bigtable Emulator container (#3708) @RamazanYapparov
- Clarify usage of host port exporting (#3421) @alxgrk
- Rename Presto to Trino (#3649) @martint
- update release drafter to v5.13.0 (#3632) @jetersen
- Remove duplicated dependency in jdbc-test module (#3664) @giger85
- add github actions to dependabot config (#3633) @jetersen
- Remove ciMate (#3631) @bsideup
- update release drafter to v5.13.0 (#3632) @jetersen
- Force Jackson version (#3602) @bsideup
- Bump aws-java-sdk-sqs from 1.11.884 to 1.11.930 in /modules/localstack (#3660) @dependabot
- Bump guava from 30.0-jre to 30.1-jre in /modules/jdbc-test (#3622) @dependabot
- Bump aws-java-sdk-s3 from 1.11.929 to 1.11.930 in /modules/localstack (#3659) @dependabot
- Bump org.jetbrains.kotlin.plugin.spring from 1.3.31 to 1.4.21-2 in /examples (#3646) @dependabot
- Bump gradle-update/update-gradle-wrapper-action from 74a035c to 1.0.9 (#3653) @dependabot
- Bump cucumber-junit from 6.8.1 to 6.9.1 in /examples (#3625) @dependabot
- Bump testcontainers from 1.14.3 to 1.15.1 in /core (#3587) @dependabot
- Bump transport from 7.10.0 to 7.10.1 in /modules/elasticsearch (#3590) @dependabot
- Bump aws-java-sdk-s3 from 1.11.882 to 1.11.929 in /modules/localstack (#3654) @dependabot
- Bump s3 from 2.15.14 to 2.15.56 in /modules/localstack (#3650) @dependabot
- Bump org.jetbrains.kotlin.jvm from 1.3.31 to 1.4.21-2 in /examples (#3645) @dependabot
- Bump guava from 30.0-jre to 30.1-jre in /core (#3627) @dependabot
- Bump ad-m/github-push-action from 68af989 to 0.6.0 (#3658) @dependabot
- Bump aws-java-sdk-dynamodb from 1.11.901 to 1.11.929 in /modules/dynalite (#3652) @dependabot
- Bump gradle/wrapper-validation-action from e2c57ac to 1.0.3 (#3657) @dependabot
- Update actions/cache requirement to v2.1.3 (#3651) @dependabot
- Bump solr-solrj from 8.6.3 to 8.7.0 in /examples (#3446) @dependabot
- Bump cucumber-java from 6.8.1 to 6.9.1 in /examples (#3624) @dependabot
- Bump r2dbc-mariadb from 0.8.4-rc to 1.0.0 in /modules/mariadb (#3593) @dependabot
- Bump org.springframework.boot from 2.3.4.RELEASE to 2.4.1 in /examples (#3594) @dependabot
- Bump jedis from 3.3.0 to 3.4.0 in /examples (#3595) @dependabot
- Bump tomcat-jdbc from 9.0.40 to 10.0.0 in /modules/jdbc (#3592) @dependabot
- Bump elasticsearch-rest-client from 7.10.0 to 7.10.1 in /modules/elasticsearch (#3589) @dependabot
- Bump jedis from 3.3.0 to 3.4.0 in /core (#3586) @dependabot
- Bump rest-assured from 4.3.1 to 4.3.3 in /modules/vault (#3588) @dependabot
- Bump r2dbc-mssql from 0.8.4.RELEASE to 0.8.5.RELEASE in /modules/mssqlserver (#3397) @dependabot
- Bump mockito-core from 3.5.15 to 3.6.28 in /modules/junit-jupiter (#3539) @dependabot
- Bump mockito-core from 3.5.13 to 3.6.28 in /core (#3542) @dependabot
- Bump mssql-jdbc from 9.1.0.jre8-preview to 9.1.1.jre8-preview in /modules/mssqlserver (#3563) @dependabot
- Bump pulsar-client-admin from 2.6.1 to 2.7.0 in /modules/pulsar (#3566) @dependabot
- Bump pulsar-client from 2.6.1 to 2.7.0 in /modules/pulsar (#3565) @dependabot
- Bump json from 2018081 to 2020111 in /examples (#3567) @dependabot
- Bump influxdb-java from 2.20 to 2.21 in /modules/influxdb (#3568) @dependabot
- Bump assertj-core from 3.18.0 to 3.18.1 in /modules/vault (#3486) @dependabot
- Upgrade Ryuk to 0.3.1 (#3629) @rnorth
📅 Schedule: At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻️ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about these updates again.
- If you want to rebase/retry this PR, check this box
This PR has been generated by WhiteSource Renovate. View repository job log here.
6324: chore(deps): update dependency org.codehaus.mojo:animal-sniffer-annotations to v1.20 r=npepinpe a=renovate[bot]
This PR contains the following updates:
Package | Change | Age | Adoption | Passing | Confidence |
---|---|---|---|---|---|
org.codehaus.mojo:animal-sniffer-annotations | 1.19 -> 1.20 | | | | |
📅 Schedule: At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻️ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
- If you want to rebase/retry this PR, check this box
This PR has been generated by WhiteSource Renovate. View repository job log here.
Co-authored-by: Renovate Bot [email protected]
A little experiment with macros instead of build scripts.
This commit is basically me messing around with procedural macros, to try them out as a potential future iteration of the UniFFI developer experience.
As a crate author, I think it would be amazing not to have to worry about .udl files and build scripts and what-not, but instead be able to declare my API with a macro and have Rust just take care of the details. This commit is a tiny experiment in that direction, based on existing features of UniFFI.
The idea is that I can wrap my Rust code in a little macro like this:
    #[uniffi_macros::declare_interface]
    pub mod my_component {
        ...my rust code goes here...
    }

And I can provide a corresponding my_component.udl file that declares the interface, and all the Rust scaffolding code just gets taken care of automagically.
I still have to provide a .udl file, but I hope you can imagine one day the possibility of generating the interface definition directly from the Rust code that is decorated by the macro.
Not any time soon! But one day...
NEW ITEMS WOO I LOVE MY LIFE THIS IS NOT TEDIOUS AT ALL
pinctrl: qcom: Don't clear pending interrupts when enabling
In Linux, if a driver does disable_irq() and later does enable_irq() on its interrupt, I believe it's expecting these properties:
- If an interrupt was pending when the driver disabled it, then it will still be pending after the driver re-enables.
- If an edge-triggered interrupt comes in while an interrupt is disabled it should assert when the interrupt is re-enabled.
If you think that the above sounds a lot like the disable_irq() and enable_irq() are supposed to be masking/unmasking the interrupt instead of disabling/enabling it then you've made an astute observation. Specifically when talking about interrupts, "mask" usually means to stop posting interrupts but keep tracking them and "disable" means to fully shut off interrupt detection. It's unfortunate that this is so confusing, but presumably this is all the way it is for historical reasons.
Perhaps more confusing than the above is that, even though clients of IRQs themselves don't have a way to request mask/unmask vs. disable/enable calls, IRQ chips themselves can implement both. ...and yet more confusing is that if an IRQ chip implements disable/enable then they will be called when a client driver calls disable_irq() / enable_irq().
It does feel like some of the above could be cleared up. However, without any other core interrupt changes it should be clear that when an IRQ chip gets a request to "disable" an IRQ, it has to treat it like a mask of that IRQ.
In any case, after that long interlude you can see that the "unmask and clear" can break things. Maulik tried to fix it so that we no longer did "unmask and clear" in commit 71266d9d3936 ("pinctrl: qcom: Move clearing pending IRQ to .irq_request_resources callback"), but it only handled the PDC case and it had problems (it caused sc7180-trogdor devices to fail to suspend). Let's fix.
From my understanding, the sources of the phantom interrupts were these two things:
- One that could have been introduced in msm_gpio_irq_set_type() (only for the non-PDC case).
- Edges could have been detected when a GPIO was muxed away.
Fixing case #1 is easy. We can just add a clear in msm_gpio_irq_set_type().
Fixing case #2 is harder. Let's use a concrete example. In sc7180-trogdor.dtsi we configure the uart3 to have two pinctrl states, sleep and default, and mux between the two during runtime PM and system suspend (see geni_se_resources_{on,off}() for more details). The difference between the sleep and default state is that the RX pin is muxed to a GPIO during sleep and muxed to the UART otherwise.
As per Qualcomm, when we mux the pin over to the UART function the PDC (or the non-PDC interrupt detection logic) is still watching it / latching edges. These edges don't cause interrupts because the current code masks the interrupt unless we're entering suspend. However, as soon as we enter suspend we unmask the interrupt and it's counted as a wakeup.
Let's deal with the problem like this:
- When we mux away, we'll mask our interrupt. This isn't necessary in the above case since the client already masked us, but it's a good idea in general.
- When we mux back, we'll clear any pending interrupts and unmask.
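The mask-vs-clear semantics argued above can be modeled in a few lines. This is a toy simulation (Python, not kernel code; all names are invented) showing why enable must not clear a latched edge, while re-muxing to GPIO should:

```python
class GpioIrq:
    """Toy model of an edge-latching GPIO interrupt line."""

    def __init__(self):
        self.masked = True    # starts disabled (i.e. masked)
        self.pending = False  # detection logic latches edges even when masked
        self.fired = 0

    def edge(self):
        self.pending = True
        self._deliver()

    def _deliver(self):
        if self.pending and not self.masked:
            self.pending = False
            self.fired += 1

    def disable_irq(self):
        # Client "disable" is treated as a mask: keep latching edges.
        self.masked = True

    def enable_irq(self):
        # The fix: unmask WITHOUT clearing pending, so a real edge that
        # arrived during the disabled window is still delivered.
        self.masked = False
        self._deliver()

    def mux_back_to_gpio(self):
        # Edges latched while the pin was muxed to another function
        # (e.g. the UART) are phantom: clear them, then unmask.
        self.pending = False
        self.masked = False

irq = GpioIrq()
irq.enable_irq()
irq.disable_irq()
irq.edge()          # real edge while disabled: stays latched
irq.enable_irq()
print(irq.fired)    # 1 -- the pending edge is delivered, not lost

irq.disable_irq()
irq.edge()          # phantom edge while the pin is muxed away
irq.mux_back_to_gpio()
print(irq.fired)    # still 1 -- the phantom edge was cleared
```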
Fixes: 4b7618fdc7e6 ("pinctrl: qcom: Add irq_enable callback for msm gpio") Fixes: 71266d9d3936 ("pinctrl: qcom: Move clearing pending IRQ to .irq_request_resources callback") Signed-off-by: Douglas Anderson [email protected] Reviewed-by: Maulik Shah [email protected] Tested-by: Maulik Shah [email protected] Reviewed-by: Stephen Boyd [email protected] Link: https://lore.kernel.org/r/20210114191601.v7.4.I7cf3019783720feb57b958c95c2b684940264cd1@changeid Signed-off-by: Linus Walleij [email protected]
refactor(.travis.yml): rename main branch to master, fuck your PC thing
Revert "fuck you quarantine"
This reverts commit a79bf733cbd0538a3e64f3723a61852a5b96be36.
Update Simple Swarm(Stable).lua
Added windy and vicious. Decided to release this script because it has all the features I wanted to implement in a Bee Swarm Simulator GUI, and all of them work as I expected. May or may not add other stuff in the future, probably not, cause I'm likely to get banned for releasing this script. Damn. Please Onett don't ban me, I didn't use hacks in your game 😭 🙏. Ok, and so star catches are not so accurate: it catches like 1 or 2 lights, and if you are lucky it will catch all of them. As for the autofarm, useless piece of shit, I tried everything but couldn't get the token collector to work correctly. Yes, I know, this is practically Yandere Dev code, and I don't really know a lot of Lua... IT'S DECENT OK? I TRIED MY BEST
ok fuck it not gonna store token as a string fuck you mc
Updates for: "For the love of phlegm...a stupid wall of death rays. How tacky can ya get?" -- Post Brothers comics
rip_flavor_events
King Alfonso VI of León may now be recommended by his court physician to 'take it easy' outside the learning scenario (RIP.4010). You can now attempt to fall in love with a close relative of a friend who died, if you are both okay with incest (RIP.29020), as well as when you are trying to dance with a courtier while one-legged (RIP.29400). Event RIP.29801 and subsequent events can now fire for the AI. Eliminated RIP.30221 from the 'Chess with Death' event chain. Restored event title for RIP.30301 (immortal rival found you).
Delete 035a - O LORD, I Love You, God, My Strength - Psalm 18
Allow effects to reference parents
Having each attack effect (e.g. slice, burn, shock, electrocute, poison) be its own process is a pain: what if an electric sword depends on a main effect (e.g. slice) connecting (successful hit roll) before it can affect the target? Plus, I want each application of an effect to be handled by its own process: I don't want a single one-per-weapon attack effect (e.g. poison) to be managing the effects of one poisoned sword on several different targets: if you hit something with a poisoned sword and the poison takes hold, then you have a process devoted solely to applying poison to that one target.
I was going to have one attack process configured to kick off multiple effects, but I think I'm going to have parent effects referenced by child effects. In that case I'll need each effect's properties in a dedicated process; however, if I want one process per affected target, then I need those effect processes to be prototype processes (or parents in another sense) where the "property holder process" spins off a copy of itself each time the effect is applied to a target. This means I need to have a prototype version of the effect process and an instance of the effect process.
So:
-
player initiates attack on target "T" with an attack object/process, e.g. sword "S"
-
Prototype cut effect "PC" sees an attack by its parent attack "S", creates an instance of itself "IC", and tells it to attack "T"
-
If IC manages to hit "T": a) Prototype poison effect "PP" sees that its parent effect, "IC", hits "T" and creates an instance of poison "IP" b) "IP" calculates its own hit roll on "T"
In this way, I could have effects that are children of effects which are also children of other effects, creating a cascade of effects based on a single attack.
Each effect can calculate its own hit roll, thus allowing any other nearby process to mitigate that hit or effect.
One thing that will be difficult, but is probably over-engineering at this point, is responding to a combination of effects: e.g. if I get poisoned and burned, then my magic ring kicks in and nullifies one of them.
(Actually, that would be easy: the magic ring can listen to effect events and turn its logic on or off depending on its current state.)
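The prototype/instance cascade described above can be sketched like this (illustrative Python, not the actual process-based implementation; the class and effect names are invented):

```python
class Effect:
    """Prototype effect: applying it to a target conceptually spawns a
    per-target instance, and child effects only fire when their parent
    effect hits (e.g. poison depends on the cut connecting)."""

    def __init__(self, name, hits=True, children=()):
        self.name, self.hits, self.children = name, hits, list(children)

    def apply(self, target, log):
        # The prototype spins off one instance per affected target.
        instance = f"{self.name}->{target}"
        if not self.hits:            # each instance rolls its own hit
            return
        log.append(instance)
        for child in self.children:  # cascade: children see the parent's hit
            child.apply(target, log)

# Poisoned sword: the poison effect is a child of the cut effect.
poison = Effect("poison")
cut = Effect("cut", children=[poison])
log = []
cut.apply("T", log)
print(log)  # ['cut->T', 'poison->T']
```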
WIP: moving most logic from attack to effect
Moving the attacks over to various effects still makes sense; however, moving resource allocation doesn't: are there any effects of an attack that have different resources? If so, how do we coordinate them? If the "slice" effect of a sword has the resources it needs, does it wait for whatever resources "shock", "burn" or "poison" might need? (A creative person could always come up with a scenario where something like this could be possible.) Having a parent effect wait for the resources required by a dependent effect sounds like over-engineering.
My go-to example is a flaming sword: it has a main effect of slicing (or whatever), but it has a secondary, dependent effect of burning (if you don't hit with the sword then you also don't burn with it).
Does the "burn" effect have its own resources? Does it need to request resources every time it applies itself (if it's an ongoing/repeating effect)?
I don't think so. The idea of an "attack" is that it's some kind of offensive action like a weapon, or a spell. We can't activate the offensive without the appropriate resources required to kick off any of the effects. So, the attack should wait for resources (e.g. stamina and focus for a sword) and then tell the effects to kick off. Dependent effects will wait for parent effects to hit first before kicking into action.
So, I'm going to move the resources back to the attack handler from the effect handler.
Having effects handle their own rolls to hit and calculate effect amount, and having the hits cascade (i.e. the dependency: if A hits then B might hit) still makes sense.
NOTE:
I need to remember that I'm ignoring time, CPU and transactionality: things don't have to happen exactly in order and groups of effects don't need to be atomic: if you manage to shock something twice when you've only hit it with a sword once, well good for you. Life is messy.
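The decision above — the attack handler gates on resources, while effects only roll their hits — can be sketched as follows. All names, costs, and the hit logic here are invented for illustration:

```python
def swing_sword(resources, effects, target):
    """The attack owns resource acquisition (e.g. stamina and focus for a
    sword) and then kicks off its effects; it does NOT manage per-effect
    resources. Dependent effects (burn) wait on the main effect's hit."""
    COST = {"stamina": 5, "focus": 2}
    if any(resources.get(k, 0) < v for k, v in COST.items()):
        return []                        # can't activate: no effect even rolls
    for k, v in COST.items():
        resources[k] -= v
    applied = []
    main_hit = True                      # assume the main slice connects
    for name, needs_main_hit in effects:
        if needs_main_hit and not main_hit:
            continue                     # burn only applies if slice hit
        applied.append(f"{name}->{target}")
    return applied

pool = {"stamina": 10, "focus": 3}
print(swing_sword(pool, [("slice", False), ("burn", True)], "goblin"))
# ['slice->goblin', 'burn->goblin']
print(swing_sword({"stamina": 1}, [("slice", False)], "goblin"))
# []
```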
Rename a file that was misnamed before
fuck you licha
finally a responsive layout, omg i spent 6 hours on this i hate myself
cron: Fix clean-chrome-tmp.sh
Wasn't properly deleting all it should.
Changes it to assume if there isn't a socket in the dir, it's not in use.
Fuck you chrome.
ARROW-11289: [Rust][DataFusion] Implement GROUP BY support for Dictionary Encoded columns
This PR adds support for GROUP BY for columns of Dictionary type.
The code basically just follows the pattern (aka is mostly copy/paste) from the take kernel: https://github.com/apache/arrow/blob/master/rust/arrow/src/compute/kernels/cast.rs#L294
I chose the "correct first, optimize later" approach here -- there are many ways to make this code faster, especially when grouping on string types.
It feels like a lot of copy/paste in my mind and I would love any thoughts / suggestions about refactoring out the recurring pattern of switch and dispatch for structured types (like Dictionary and List).
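The "resolve dictionary keys to values, then group" idea can be illustrated outside Arrow. This is a toy Python sketch of the concept, not DataFusion's Rust implementation; the function name and data are made up:

```python
def group_by_dictionary(keys, values, measures):
    """Group aggregation over a dictionary-encoded column: `keys` are
    integer indices into the `values` dictionary (as in an Arrow
    DictionaryArray). The 'correct first, optimize later' approach:
    resolve each key to its value, then group; real code additionally
    dispatches on the key's integer width."""
    groups = {}
    for key, m in zip(keys, measures):
        groups.setdefault(values[key], []).append(m)
    return {v: sum(ms) for v, ms in groups.items()}

# Dictionary-encoded column ["a", "b", "a"] with a measure summed per group.
print(group_by_dictionary([0, 1, 0], ["a", "b"], [10, 20, 5]))
# {'a': 15, 'b': 20}
```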
Closes #9233 from alamb/alamb/gby_dicts
Authored-by: Andrew Lamb [email protected] Signed-off-by: Andrew Lamb [email protected]
Implemented SubCommands and SubCommandGroups properly.
Details: To implement them I had to get creative. The first thing I did was manually register a command that uses subcommands and subcommand groups. Two things I noticed immediately:
- I can create a subcommand on a "root" command - where no SubCommandGroup is used
- The current implementation of the Interactions doesn't know what type of value an option is. The good thing is that there is only one option when querying subcommands and subcommand groups, so I can find out what the "path" of the subcommand is: TOP/root/rng, TOP/root/usr/zero, TOP/root/usr/johnny (I misspelled it in the source files, whoops). [See SlashCommandsExample/DiscordClient.cs]
Next I wanted command groups (I'll use this term to mean a slash command with subcommands and regular slash command groups) to be implemented in code in a sort of hierarchical manner, so I made them classes with attributes. Unfortunately, to make this work I had to make them re-inherit the same things as the base module. UGLY, but I see no option other than making them inherit from another class that remembers the instance of the upper class and implements the same methods, aka a whole mess that I decided I won't partake in. [See SlashCommandsExample/Modules/DevModule.cs]
Next up is to search for these sub-groups. I decided that the most intuitive way of implementing these was to make SlashModuleInfo have children and a parent of the same type, from which arose different problems, but we'll get to that. So I gave them some children, a parent, and a reference to the CommandGroup attribute they have on themselves. The boolean isCommandGroup is unused, but could be useful in the future... maybe. Also, I've added a path variable to internally store structure. I wanted (after the whole reflection business) for commands to be easily accessed with NO REFLECTION, because reflection is slow, so I changed the final string-to-SlashCommandInfo dictionary to contain paths instead of command infos, something like what I exemplified above.
In any case, I edited the service helper (the search-for-modules method) to ignore command groups and only store top-level commands. After that I made a command to instantiate command groups, and the command creation and registration were changed to be recursive, because recursion is the simplest way to do this and it's efficient enough for what we want; we only run this once anyway.
The biggest change was with command building: commands no longer build themselves; instead, we command each module to build itself. There are three cases: top-level commands, top-level subcommands (or level-1 command groups), and subcommands within slash command groups.
The code is uncommented and untidy, and I'll fix that in a future commit. One last thing to note is that slash commands can have 0 options! Fixed that bug. Also, SlashCommandBuilder.WithName() for some reason was implemented wrongly; I presume a copy-paste error. Also, I implemented 0 kinds of rule enforcement; I'm going to leave that to other people.
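The path-keyed lookup described above ("TOP/..." strings mapping straight to handlers, so dispatch needs no reflection at call time) can be sketched like this; the registry shape and handlers are made up for illustration:

```python
# Commands are stored under "TOP/<command>/<group>/<subcommand>" path
# strings, so resolving an incoming interaction is a single dict lookup.
registry = {}

def register(path, handler):
    registry[path] = handler

def dispatch(path, *args):
    # No reflection here: just look the path up and call the handler.
    return registry[path](*args)

register("TOP/root/rng", lambda: "rolled")
register("TOP/root/usr/zero", lambda: "user zero")
register("TOP/root/usr/johnny", lambda: "user johnny")

print(dispatch("TOP/root/usr/zero"))  # user zero
```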
A lot of random changes. My commit comments suck and I will hate myself later for it.
Changed some Fmod shit
Alkstrand wanted me to push this though there is basically no difference. My life is pain, I am in perpetual torment, help help help
your skin is freezing...
HERE, LET ME HELP YOU TAKE IT OFF
- fixed code being suck ass bad ugly