3,295,273 events, 1,712,399 push events, 2,708,816 commit messages, 196,685,538 characters
Created Text For URL [www.dispatchlive.co.za/news/2021-04-05-dying-boyfriend-confesses-to-dumping-body-of-girlfriends-daughter-in-river/]
i break shit and you fix it bitch
thats how life be holmes
add unit testing framework and add to project
Additions
- Added `unit_tests` module. It is very hacky at the moment. It is currently just a module in the `src/` folder. In my opinion, had I structured my `makefile` correctly, the file would likely be within the `tests/` folder, but it's in `src/` despite not being an actual source file. I may fix this someday, but at this point I just want unit tests.
- Added to the `test` recipe in the `makefile`. Rather than go and delegate to a `run_tests.sh` script, I chose to do it directly in the `makefile`. Kind of weird, but seeing how I specified the `makefile` `SHELL` as `bash`, the `&&` and `||` operators should work perfectly fine. It will rely on the return code of the unit test functions.
- Added a testing option to the mainline: if you run the binary as `./TISC test`, it will override the usual file reading process and instead run the unit tests. Following that, it will exit. I think this is pretty hacky (and will probably bug the users who decide to name an input file `test`). Shame on them for now.
Changes
- Renamed the original tests to 'E2E tests'. In my opinion, these tests test the entire TISC program flow, though calling them 'integration tests' wouldn't be a bad idea.
Removals
- Removed some newlines from the test flows.
New data: 2021-04-05. See data notes.
Revise historical data: cases (AB, BC, MB, ON); testing (AB, ON).
Today’s case numbers for AB will include all cases from the past 4 days, since we only received health region-level information today. Thus, the time series will be adjusted such that the health region-level totals match up today.
Today’s case numbers for BC will total 104,920 whereas the province is reporting a total of 104,061. This is based on adding up the daily numbers from the press releases given in the past 3 days. Since they are not updating their CSV or dashboard until tomorrow, these case numbers may change in the future.
Note regarding deaths added in QC today: “4 new deaths, for a total of 10,697 deaths: 1 death in the last 24 hours, 1 death between March 29 and April 3, 1 death before March 29, 1 death at an unknown date.” We report deaths such that our cumulative regional totals match today’s values. This sometimes results in extra deaths with today’s date when older deaths are removed.
Recent changes:
2021-01-27: Due to the limit on file sizes in GitHub, we implemented some changes to the datasets today, mostly impacting individual-level data (cases and mortality). Changes below:
- Individual-level data (cases.csv and mortality.csv) have been moved to a new directory in the root directory entitled “individual_level”. These files have been split by calendar year and named as follows: cases_2020.csv, cases_2021.csv, mortality_2020.csv, mortality_2021.csv. The directories “other/cases_extra” and “other/mortality_extra” have been moved into the “individual_level” directory.
- Redundant datasets have been removed from the root directory. These files include: recovered_cumulative.csv, testing_cumulative.csv, vaccine_administration_cumulative.csv, vaccine_distribution_cumulative.csv, vaccine_completion_cumulative.csv. All of these datasets are currently available as time series in the directory “timeseries_prov”.
- The file codebook.csv has been moved to the directory “other”.
We appreciate your patience and hope these changes cause minimal disruption. We do not anticipate making any other breaking changes to the datasets in the near future. If you have any further questions, please open an issue on GitHub or reach out to us by email at ccodwg [at] gmail [dot] com. Thank you for using the COVID-19 Canada Open Data Working Group datasets.
- 2021-01-24: The columns "additional_info" and "additional_source" in cases.csv and mortality.csv have been abbreviated similarly to "case_source" and "death_source". See notes in README.md from 2020-11-27 and 2021-01-08.
Vaccine datasets:
- 2021-01-19: Fully vaccinated data have been added (vaccine_completion_cumulative.csv, timeseries_prov/vaccine_completion_timeseries_prov.csv, timeseries_canada/vaccine_completion_timeseries_canada.csv). Note that this value is not currently reported by all provinces (some provinces have all 0s).
- 2021-01-11: Our Ontario vaccine dataset has changed. Previously, we used two datasets: the MoH Daily Situation Report (https://www.oha.com/news/updates-on-the-novel-coronavirus), which is released weekdays in the evenings, and the “COVID-19 Vaccine Data in Ontario” dataset (https://data.ontario.ca/dataset/covid-19-vaccine-data-in-ontario), which is released every day in the mornings. Because the Daily Situation Report is released later in the day, it has more up-to-date numbers. However, since it is not available on weekends, this leads to an artificial “dip” in numbers on Saturday and “jump” on Monday due to the transition between data sources. We will now exclusively use the daily “COVID-19 Vaccine Data in Ontario” dataset. Although our numbers will be slightly less timely, the daily values will be consistent. We have replaced our historical dataset with “COVID-19 Vaccine Data in Ontario” as far back as they are available.
- 2020-12-17: Vaccination data have been added as time series in timeseries_prov and timeseries_hr.
- 2020-12-15: We have added two vaccine datasets to the repository, vaccine_administration_cumulative.csv and vaccine_distribution_cumulative.csv. These data should be considered preliminary and are subject to change and revision. The format of these new datasets may also change at any time as the data situation evolves.
Note about SK data: As of 2020-12-14, we are providing a daily version of the official SK dataset that is compatible with the rest of our dataset in the folder official_datasets/sk. See below for information about our regular updates.
SK transitioned to reporting according to a new, expanded set of health regions on 2020-09-14. Unfortunately, the new health regions do not correspond exactly to the old health regions. Additionally, case time series using the new boundaries are not available for dates earlier than August 4, making it impossible to provide a complete time series using the new boundaries.
For now, we are adding new cases according to the list of new cases given in the “highlights” section of the SK government website (https://dashboard.saskatchewan.ca/health-wellness/covid-19/cases). These new cases are roughly grouped according to the old boundaries. However, health region totals were redistributed when the new boundaries were instituted on 2020-09-14, so while our daily case numbers match the numbers given in this section, our cumulative totals do not. We have reached out to the SK government to determine how this issue can be resolved. We will rectify our SK health region time series as soon as it becomes possible to do so.
Level: Easy
Problem: Rash is known for his love of racing sports. He is an avid Formula 1 fan. He went to watch this year's Indian Grand Prix at New Delhi. He noticed that one segment of the circuit was a long straight road. It was impossible for a car to overtake other cars on this segment. Therefore, a car had to lower its speed if there was a slower car in front of it. While watching the race, Rash started to wonder how many cars were moving at their maximum speed. Formally, you're given the maximum speed of N cars in the order they entered the long straight segment of the circuit. Each car prefers to move at its maximum speed. If that's not possible because the car in front is slower, it may have to lower its speed. It still moves at the fastest possible speed while avoiding any collisions. For the purpose of this problem, you can assume that the straight segment is infinitely long. Count the number of cars which were moving at their maximum speed on the straight segment.
Input
The first line of the input contains a single integer T denoting the number of test cases to follow. Description of each test case contains 2 lines. The first of these lines contain a single integer N, the number of cars. The second line contains N space separated integers, denoting the maximum speed of the cars in the order they entered the long straight segment.
Output
For each test case, output a single line containing the number of cars which were moving at their maximum speed on the segment.
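Not part of the original problem statement, but a short sketch may make the counting logic concrete: a car runs at its maximum speed exactly when no car that entered before it is slower, i.e. when its maximum speed is at most the running minimum of the speeds seen so far. A minimal TypeScript sketch (the function name `countMaxSpeedCars` is my own, not from the problem):

```typescript
// Count cars that travel at their maximum speed on the straight segment.
// A car is unobstructed iff its max speed is <= the slowest max speed of
// all cars that entered before it (the running prefix minimum).
function countMaxSpeedCars(speeds: number[]): number {
  let count = 0;
  let slowestAhead = Infinity; // slowest maximum speed among cars already on the segment
  for (const speed of speeds) {
    if (speed <= slowestAhead) {
      // No car ahead is slower, so this car runs at its maximum speed.
      count++;
      slowestAhead = speed;
    }
    // Otherwise the car is stuck behind a slower car and does not lower the minimum.
  }
  return count;
}

// Example: cars entering at speeds 8, 3, 6, 2 -> the cars at 8, 3 and 2 are
// unobstructed, while the car at 6 is stuck behind the car at 3.
console.log(countMaxSpeedCars([8, 3, 6, 2])); // 3
```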
25 days later, I realize my JavaScript SUCKED. Took some time to give it love.
Quick sort is working now; holy crap, that was so hard and ridiculous. Must be because it's 4am and my brain isn't at 100%, but damn, all the online explanations were trash: they missed a super key piece of this entire thing, smh. Figured it out on my own.
events2: add "path" information
This is mainly a preparation step for the prometheus plugin, which will benefit from the IP information we can then expose.
While it looks weird that get_path_updated() always returns None, this is fine. We will internally split plugins into "change_plugins" (current ones) and "events_plugins" (prometheus) soon. For now we only have change_plugins, and there we don't want to trigger updates for path changes.
While at it, refactor the Resource methods. This is now cleaner and easier to reuse: instead of handing in a type for some getters, we hand in the actual information (like a volume ID or a peer ID). This is still ugly and will need some more love (e.g., replacing lists with maps, ...).
Add ReferenceIterator & repository->references()
Fuckin iterators man, giving me loads of trouble.
Also it sucks that alternative constructors aren't a proper thing.
Revert "raphael-sepolicy: Label audio_hal.in_period_size"
- fuck you xiaomi
This reverts commit 3d9d7a43425c0bc5562bd29328659907a60d77f1.
Create kyu7-paulsmisery.js
Paul is an excellent coder and sits high on the CW leaderboard. He solves kata like a banshee but would also like to lead a normal life, with other activities. But he just can't stop solving all the kata!!
Given an array (x) you need to calculate the Paul Misery Score. The values are worth the following points:
kata = 5
Petes kata = 10
life = 0
eating = 1
The Misery Score is the total points gained from the array. Once you have the total, return as follows:
< 40 = 'Super happy!'
>= 40 and < 70 = 'Happy!'
>= 70 and < 100 = 'Sad!'
100 = 'Miserable!'
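The solution file itself is not included in this log, so the following is only an illustrative sketch of one way to solve the kata, not the actual contents of kyu7-paulsmisery.js. It is written in TypeScript for consistency with the other sketches here, and it assumes (my reading of the last rule) that a total of 100 or more maps to 'Miserable!':

```typescript
// Points awarded per activity, as given in the kata description.
const POINTS: Record<string, number> = {
  'kata': 5,
  'Petes kata': 10,
  'life': 0,
  'eating': 1,
};

// Sum the points for every entry in the array and map the total to a message.
function paulMisery(activities: string[]): string {
  const total = activities.reduce((sum, item) => sum + (POINTS[item] ?? 0), 0);
  if (total < 40) return 'Super happy!';
  if (total < 70) return 'Happy!';
  if (total < 100) return 'Sad!';
  return 'Miserable!'; // assumption: a total of 100 (or more) means 'Miserable!'
}

// Example: 3 kata + 1 Petes kata + eating = 26 points -> 'Super happy!'
console.log(paulMisery(['kata', 'kata', 'kata', 'Petes kata', 'eating']));
```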
feat(npm-scripts): allow format/lint to run without package.json
As noted in the doc comment, `yarn run liferay-npm-scripts` will `cd` to the closest project root (i.e. directory that has a `package.json` file) before invoking our tool. By that time, it is "too late" to know where we were invoked from. Fortunately, it sets an `INIT_CWD` environment variable with this information.
Whenever we run the `lint()` or `format()` functions (whether in `check` or `fix` modes), we check to see if `INIT_CWD` is defined and different from our apparent "current" working directory. If there is a discrepancy, that means that the user is probably working down inside a `src/` directory (or similar), or they are running in a project that doesn't have a `package.json` (eg. `click-to-chat-web`). Nevertheless, projects like `click-to-chat-web` do have JSP in them even though they don't have a `package.json`, so it is reasonable to want to format them with our frontend formatting tools.
Now, prior to this change, running `yarn run format` in a place like `click-to-chat-web` would result in `yarn` starting from the "top" level (`modules/`) directory and running formatting for the entire repo, which would work but is slow. Running from a subdirectory inside a project with a `package.json` (like `frontend-js-react-web`) wasn't quite as bad, because `yarn` would `cd` up to the project root where the `package.json` lives and run from there; our default globs would take effect (which have the form "/src/**/*.jsp" etc).
After this change, if we detect that `INIT_CWD` is set and different, that means we are "down" somewhere where there is no `package.json`. We walk up the filesystem looking for a `build.gradle` or a `package.json`, whichever we see first, and we run from there (see the sketch after the list below). This means that one of three scenarios may apply:
- Inside `click-to-chat-web` you get exactly what you would want and the JSP files get checked/formatted.
- Inside a subdirectory of `frontend-js-react-web` we behave as though we'd run from the project root.
- In a place like `modules/apps/frontend-js`, which does have a `build.gradle`, we stop there and run with the default globs (eg. "/src/**" etc), which effectively means we format nothing (previously we would format everything, which is slow).
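To make the walk-up behaviour concrete, here is a rough TypeScript sketch of that search. This is only an illustration of the idea described above, not the actual `@liferay/npm-scripts` code; the helper name `findProjectRoot` and the fallback behaviour at the filesystem root are my own assumptions.

```typescript
import * as fs from 'fs';
import * as path from 'path';

// Starting from `dir` (typically process.env.INIT_CWD), walk up the
// filesystem and return the first directory that contains a package.json
// or a build.gradle, whichever we see first. If we hit the filesystem root
// without finding either marker, fall back to the starting directory.
function findProjectRoot(dir: string): string {
  let current = path.resolve(dir);
  while (true) {
    if (
      fs.existsSync(path.join(current, 'package.json')) ||
      fs.existsSync(path.join(current, 'build.gradle'))
    ) {
      return current;
    }
    const parent = path.dirname(current);
    if (parent === current) {
      return dir; // reached the filesystem root without finding a marker
    }
    current = parent;
  }
}

// Only bother walking up when INIT_CWD is set and differs from where we
// apparently are (i.e. yarn has already cd-ed us to some project root).
const initCwd = process.env.INIT_CWD;
if (initCwd && initCwd !== process.cwd()) {
  process.chdir(findProjectRoot(initCwd));
}
```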
So what we have is strictly an improvement over what we had, but we should note these "gotchas":
- `gradlew formatSource` will not invoke `liferay-npm-scripts` in modules without `package.json` files. This is something that we could ask Hugo to change if we cared. I suspect we don't want to do that though, because it could have unpleasant side-effects (eg. on an older branch not using this version of `@liferay/npm-scripts`, it would fall back to the old behavior of running frontend formatting across the entire repo just because somebody ran `gradlew formatSource`; not good because it is too slow).
- I guess you could run this from "crazy" places which have `build.gradle` files but aren't really what we'd consider to be "projects" (I'm thinking of places under "third-party/" or "util/" or "sdk/" for example). But if you do that with an explicit `yarn run format` you probably are asking for it, and unintended formatting won't happen anyway unless your directory has "src/**" stuff in it. On the off chance that you do mess something up, our Java Source Formatter has a reasonable chance of catching anything egregious during `ci:test:sf`, and our formatting changes are supposed to be relatively benign in any case. Overall, I think it's worth it.
Test plan:
- Copy over these changes to a liferay-portal checkout and add some `console.log()` calls to see the current working directory, globs, and paths after glob expansion. Run under all the scenarios listed above, plus from the `modules/` root, and see that it continues working as before.
- Run unit tests to make sure we didn't break anything. Note the hacky setting of `INIT_CWD` necessary to stop our `format()` tests from `cd`-ing out of where we want them to actually run.
Closes: liferay/liferay-frontend-projects#483
Fix 2tap2wake after Ambient Pulsing on some devices
Devices like taimen and walleye are affected; sunfish (and probably newer Pixels) doesn't need this.
To apply, override the config_has_weird_dt_sensor bool in the device tree
Change-Id: I03bc7fc52874252fc897413d44b27ebf06ee148a
TL;DR: for some reason, on taimen and walleye, after ambient pulsing gets triggered by adb with the official "com.android.systemui.doze.pulse" intent or by our custom "wake to ambient" features, the double tap sensor dies if you follow these steps:
- screen is OFF
- trigger ambient pulsing with a double tap to wake (if custom wake to ambient feature is enabled), or the official intent by adb, or with music ticker or any other event
- after ambient display shows up, don't touch anything and wait till the screen goes OFF again
- double tap to wake, again
- the double tap sensor doesn't work at all and device doesn't wake up
Now, funny thing, after the steps above, if you cover then uncover the proximity/brightness sensor with the hand, then double tap to wake again, the wake gesture works as expected.
When covering/uncovering the proximity/brightness sensor, this happens:
11-10 22:02:00.916   967   998 I ASH     : @ 1993.460: ftm4_disable_sensor: disabling sensor [double-tap]
11-10 22:02:02.013   967   998 I ASH     : @ 1994.556: ftm4_enable_sensor: enabling sensor [double-tap]
When you switch screen ON with power button, the doze screen states do the same: the sensor gets disabled then enabled again if device goes to DOZE idle state.
Instead, after Ambient pulsing, when the pulsing finishes, the sensor is still enabled, so the disable/enable event doesn't happen this time. And that's why, for some reason, it doesn't respond anymore.
So, in a nutshell: I've no idea why this sh#t happens lol, but with a super lazy, hacky, tricky, dirty, bloody nooby line change, we can force the sensor disable/enable event when the device goes to the DOZE state.
Change-Id: I8ce463a6e435e540e3ca93336c5dba7a95771b56
By my count this makes at least twice now, while these idiots drive by doing the same things. God, life didn't use to suck this bad; why are these people so fucking boring?
This code would have been done a long time ago when I offered to let it be pointed to by government sites once it's in its FINISHED, not its present VERY raw, form!
Make specialisation a bit more aggressive
The patch
commit c43c981705ec33da92a9ce91eb90f2ecf00be9fe
Author: Simon Peyton Jones <[email protected]>
Date: Fri Oct 23 16:15:51 2009 +0000
Fix Trac #3591: very tricky specialiser bug
fixed a nasty specialisation bug /for DFuns/. Eight years later, this patch
commit 2b74bd9d8b4c6b20f3e8d9ada12e7db645cc3c19
Author: Simon Peyton Jones <[email protected]>
Date: Wed Jun 7 12:03:51 2017 +0100
Stop the specialiser generating loopy code
extended it to work for /imported/ DFuns. But in the process we lost the fact that it was needed only for DFuns! As a result we started silently losing useful specialisation for non-DFuns. But there was no regression test to spot the lossage.
Then, nearly four years later, Andreas filed #19599, which showed the lossage in high relief. This patch restores the DFun test, and adds Note [Avoiding loops (non-DFuns)] to explain why.
This is undoubtedly a very tricky corner of the specialiser, and one where I would love to have a more solid argument, even a paper! But meanwhile I think this fixes the lost specialisations without introducing any new loops.
I have two regression tests, T19599 and T19599a, so I hope we'll know if we lose them again in the future.
Vanishingly small effect on nofib.
A couple of compile-time benchmarks improve:
T9872a(normal) ghc/alloc 1660559328.0 1643827784.0 -1.0% GOOD
T9872c(normal) ghc/alloc 1691359152.0 1672879384.0 -1.1% GOOD
Many others wiggled around a bit.
Metric Decrease: T9872a T9872c
Added: "Chatbubble"
Fuck you and have a nice day! :)
Added RegForm. Four days of paperwork, poking around the internet, recapitulating educational videos ... and my brain is like a fucking ass. But I did not write a single line of code myself; all of these are quotes ... Apparently things are going badly for me.
UTTER RAT SHIT GUN WITH LEGS
LIKE HOLY FUCK WHY DID THIS TAKE SO LONG