< 2021-03-28 >

2,106,158 events, 1,219,930 push events, 1,682,586 commit messages, 97,163,794 characters

Sunday 2021-03-28 00:15:00 by 0neGal

fuck it i hate using labels

it's annoying as fuck to remember and shit...


Sunday 2021-03-28 00:35:22 by Jean-Paul R. Soucy

New data: 2021-03-27. See data notes.

Revise historical data: cases (MB), testing (PE).

Toronto's case number for today is inflated as it includes "approximately 260 cases from March 18 - March 24 that were not previously reported" due to technical issues

Note regarding deaths added in QC today: “8 new deaths, for a total of 10,645 deaths: 2 deaths in the last 24 hours, 5 deaths between March 20 and March 25, 1 death before March 20.” We report deaths such that our cumulative regional totals match today’s values. This sometimes results in extra deaths with today’s date when older deaths are removed.

Recent changes:

2021-01-27: Due to the limit on file sizes in GitHub, we implemented some changes to the datasets today, mostly impacting individual-level data (cases and mortality). Changes below:

  1. Individual-level data (cases.csv and mortality.csv) have been moved to a new directory in the root directory entitled “individual_level”. These files have been split by calendar year and named as follows: cases_2020.csv, cases_2021.csv, mortality_2020.csv, mortality_2021.csv. The directories “other/cases_extra” and “other/mortality_extra” have been moved into the “individual_level” directory.
  2. Redundant datasets have been removed from the root directory. These files include: recovered_cumulative.csv, testing_cumulative.csv, vaccine_administration_cumulative.csv, vaccine_distribution_cumulative.csv, vaccine_completion_cumulative.csv. All of these datasets are currently available as time series in the directory “timeseries_prov”.
  3. The file codebook.csv has been moved to the directory “other”.

We appreciate your patience and hope these changes cause minimal disruption. We do not anticipate making any other breaking changes to the datasets in the near future. If you have any further questions, please open an issue on GitHub or reach out to us by email at ccodwg [at] gmail [dot] com. Thank you for using the COVID-19 Canada Open Data Working Group datasets.

  • 2021-01-24: The columns "additional_info" and "additional_source" in cases.csv and mortality.csv have been abbreviated similarly to "case_source" and "death_source". See notes in README.md from 2020-11-27 and 2021-01-08.

Vaccine datasets:

  • 2021-01-19: Fully vaccinated data have been added (vaccine_completion_cumulative.csv, timeseries_prov/vaccine_completion_timeseries_prov.csv, timeseries_canada/vaccine_completion_timeseries_canada.csv). Note that this value is not currently reported by all provinces (some provinces have all 0s).
  • 2021-01-11: Our Ontario vaccine dataset has changed. Previously, we used two datasets: the MoH Daily Situation Report (https://www.oha.com/news/updates-on-the-novel-coronavirus), which is released weekdays in the evenings, and the “COVID-19 Vaccine Data in Ontario” dataset (https://data.ontario.ca/dataset/covid-19-vaccine-data-in-ontario), which is released every day in the mornings. Because the Daily Situation Report is released later in the day, it has more up-to-date numbers. However, since it is not available on weekends, this leads to an artificial “dip” in numbers on Saturday and “jump” on Monday due to the transition between data sources. We will now exclusively use the daily “COVID-19 Vaccine Data in Ontario” dataset. Although our numbers will be slightly less timely, the daily values will be consistent. We have replaced our historical dataset with “COVID-19 Vaccine Data in Ontario” as far back as they are available.
  • 2020-12-17: Vaccination data have been added as time series in timeseries_prov and timeseries_hr.
  • 2020-12-15: We have added two vaccine datasets to the repository, vaccine_administration_cumulative.csv and vaccine_distribution_cumulative.csv. These data should be considered preliminary and are subject to change and revision. The format of these new datasets may also change at any time as the data situation evolves.

https://www.quebec.ca/en/health/health-issues/a-z/2019-coronavirus/situation-coronavirus-in-quebec/#c47900

Note about SK data: As of 2020-12-14, we are providing a daily version of the official SK dataset that is compatible with the rest of our dataset in the folder official_datasets/sk. See below for information about our regular updates.

SK transitioned to reporting according to a new, expanded set of health regions on 2020-09-14. Unfortunately, the new health regions do not correspond exactly to the old health regions. Additionally, no case time series using the new boundaries is provided for dates earlier than August 4, which makes it impossible for us to provide a complete time series under the new boundaries.

For now, we are adding new cases according to the list of new cases given in the “highlights” section of the SK government website (https://dashboard.saskatchewan.ca/health-wellness/covid-19/cases). These new cases are roughly grouped according to the old boundaries. However, health region totals were redistributed when the new boundaries were instituted on 2020-09-14, so while our daily case numbers match the numbers given in this section, our cumulative totals do not. We have reached out to the SK government to determine how this issue can be resolved. We will rectify our SK health region time series as soon as it becomes possible to do so.


Sunday 2021-03-28 01:49:48 by 0xlil14n

omg these styles are disgusting but HERE I AM IN ALL MY UGLINESS. and you still love me, right github????


Sunday 2021-03-28 02:00:01 by SofterStone

what fucking spaghetti you made to make it fail linter

holy shit


Sunday 2021-03-28 02:25:43 by Ted Brandon

Added a "Fuckery" option to bypass the annoying shit, if desired, as well as a few other small changes


Sunday 2021-03-28 12:41:59 by GBSoftwareLaptop

Itgil (Azbel and Roy helped) - changed stuff to make things work + PID shooting.

  1. Added numbers to flyWheelVel graphs to have a fucking clue what's going on in FullyAutoThreeStage, StageThreePID, ThreeStageShoot, ShootBySimplePid and ShootByConstant.
  2. Changed TestButtons in OI to work.
  3. Made Robot TeleopInit reset the dome and turret so the robot does not break (very bad if it does).
  4. Changed the Turret reset function ResetEncoderWhenInSide to work so it resets the turret when microSwitch is true.
  5. Changed Shooter PID values again; works kinda good now.
  6. Changed secondStick in OI to secondJoystick because for god's sake I hate stupid people.


Sunday 2021-03-28 13:04:37 by assemberist

Fuck you, format-string parsing!

I won't fix the bugs that would require me to integrate Erlang term parsing into the C part. The Erlang side separates strings with a '|' separator.


Sunday 2021-03-28 14:01:29 by GUA1-app

removed faulty t2witch cunt fuck my shit up dab dab bitches


Sunday 2021-03-28 16:31:09 by Sevak Mahesh R S

a few changes to start with

here are some of my changes to @daleharvey's original pacman code.

the changes can be broadly classified as:

  • make the first few levels very easy
  • be generous in handing out lives to the user
  • let the ghosts suffer
  • let the score increase like crazy - everyone loves a high score
  • later levels become super tough (very quickly ... so enjoy while the going is easy in the earlier levels). note that the tougher levels may cause frustration leading to the player breaking his/her computer - NOTE: this author is NOT responsible for such behavior on part of the user

all my changes are marked with "Mahesh" as far as I can remember. all changes have associated notes on why those changes were done. NOTE: i am NOT a js developer, so if you find my code is NOT up to your standards ... well ... deal with it! :)


Sunday 2021-03-28 16:49:49 by LAPTOP-5K218H50\Supercom - 1

++I can't remember anything ++Can't tell if this is while(true) or (false) ++Deep down inside I feel the scream ++This terrible semicolon stops in me ++Now that the code is through with me ++I'm waking up, I cannot see ++That there is not much left of me ++Nothing is real but glitch now ++Hold my breath as I wish for ref ++Oh please, God, wake me ++Back to the room that's much too real ++In code' life that I must feel ++But can't look forward to reveal ++Look to the time when I'll code ++Fed through the grab' that comes to me ++Just like lead software engineer novelty ++Tied to machines that make me be ++Cut this **** off from me ++Hold my breath as I wish for ref ++Oh please, God, wake me ++Now the idea is gone, I'm just one ++Oh God, help me ++Hold my breath as I wish for ref ++Oh please, God, help me ++Statement, imprisoning me ++All that I see ++Absolute horror ++I cannot think ++I cannot type ++Trapped in myself ++Body my holding cell ++VStudio has taken my sight ++Taken my speech ++Taken my hearing ++Taken my arms ++Taken my legs ++Taken my soul ++Left me with life codin' hell


Sunday 2021-03-28 17:22:48 by Acensti

More Blackmarket TGUI updates. (#1072)

  • More fucking Blackmarket TGUI fixes god i hate myself

title

  • fix

morefix


Sunday 2021-03-28 19:24:04 by Kylerace

emissive blockers are now just an overlay (with kickass fucking graphs). FUCK YOU MAPTICK (#57934)

Currently, emissive blockers are objects added to the vis_contents of every item. However, mutable appearances as overlays can serve this role perfectly well and, according to my tests, should cause less maptick per item. For mobs, mutable appearances don't work for reasons I don't understand, so instead this adds the em_block object to overlays rather than vis_contents. Both now use atom/movable/update_icon().


Sunday 2021-03-28 19:44:10 by MilkyNail (MariaMod)

Add files via upload

  • (by Middlewared) Now you will see a display of the comfort level of the main character in the wardrobe. It depends on the current underwear and your Corruption
  • (by Middlewared, Rachael) Added reactions of relatives on your nudity
  • (by Middlewared, Rachael) Added reactions (of relatives, other people) on your current clothes and erotic accessories
  • (by Middlewared, me) Added new items: lacy bra, lacy panties, thong, crotchless panties. Already added, but not implemented yet (you won't see these items): handcuffs and yoke, chastity bra, chastity belt, nipple chain, corset
  • Fixed pics in the scene where you are mating with Ralf in the room
  • Fixed the detention gifs (female version)
  • (by Rachael, me) Added a Cleaning sister's room scene. To see it, you must have 15 points of Corruption and your sister more than 30. There are vaginal and oral endings
  • Changed colors for the main character's dialogue box. I hope it's more comfortable for the eyes now
  • Improved the Cleaning brother's room and Brother is training scenes a bit
  • Fixed style of sister's dialogue boxes here and there
  • Put brother\sister paragraphs in a new order. Now brother scenes are on top of sister ones. Players won't notice any changes
  • (by Rachael, me) Added three small scenes with sister: Sister exercising, Sister playing PS4, Sister reading a magazine
  • (by Rachael, me) Added the ability to chat with sister (when she is exercising). All topics are similar to brother's chat paragraph
  • A cosmetic change in the sibling schedule
  • Fixed one huge bug. Now sister character displays correctly

Sunday 2021-03-28 19:59:29 by Aakanksha05

Update README.md

Contents: Note about plagiarism, Problem Definition, Data Analysis, Question Answers, References, Assignment todo list.

Plagiarism: Students may discuss with each other for information sharing regarding solutions, tools, and techniques to approach the assignment. However, they should implement their own code from scratch. Students found violating the general code of conduct might be penalized by reducing their scores.

These assignments are for learning and should be the mutual goal of everybody, teachers, and students alike.

Problem Definition: Access an open-source dataset, "Titanic". Apply pre-processing techniques to the raw dataset.

Data analysis: Data Acquisition, Data cleaning/transforms, Data Visualization, Feature Engineering, Dimensionality Reduction/Feature Selection.

Question Answers:

  • What is Data? Can be transformed.
  • What should be the data format? Structured, tabular, well-defined.
  • How do we acquire it? Sensors, people counters, cookies data, surveys, government, private, transactional records.
  • What is data cleaning? Processing into a more readable format, identifying outliers, usability.
  • Why do data cleaning? Formatting, data-type validation.
  • What is feature engineering? Data aggregation, derived features, crossed features, Fourier features.
  • Why do feature engineering?
  • What is visualization?
  • What are the various types of data? Textual, categorical, numerical.
  • What are the techniques to handle different kinds of data? Textual: removing stop-words and punctuation, stemming, embedding, pronoun-noun entity recognition. Categorical: nominal, ordinal, Boolean. Numerical: change scale (normalize, standardize, robust), change distribution (power, quantile), discretize, engineer (polynomial). Data imputation: fill NaNs or nulls (textual, categorical, numerical).
  • Once data is enriched, what to do next?
  • What are the different varieties of plots? Line, bar (stacked, non-stacked), histogram, box and whisker, scatter, pie chart, wind-rose, correlation and partial correlation, and many more (list what interests you, with examples).
  • What are Feature Selection techniques? Matrix Factorization, PCA, SVD.

References: Data preparation techniques, list of books for data processing, data visualization (Python matplotlib), data preparation, list of visualization plots, RandomForestClassifier model.

Assignment ToDo List

Notes:

Take it as a challenge to go beyond the boundaries of the assignment. Apply all that you can; look out for improvisations. Students will be scored based on efforts taken. (Copying is strictly prohibited and will be treated severely.) As study material for further projects and your own understanding, elaborately add more points to this file. You can also make a GitHub project and use this first draft for continuous updates. Ideally use Python 3.7.x, Pandas, Matplotlib, Seaborn, Sklearn, etc. Optionally use R, Matlab, etc.

ToDo List:

Download the dataset.

Apply the relevant data processing techniques. Remember to do the analysis separately for test and train data. Ideally prepare a pipeline for data processing: input - a single data instance; output - transformed instances. Plot various visualizations.

Make it generic - reusable for the last assignment. Think about and answer a few data-scientific questions (interesting and insightful) using data analysis:

  • How many of the survivors were male, and how many female? Within this, how many were children in each gender category?
  • What do the SibSp, Parch, Cabin, and Embarked columns signify?
  • Can we attach external datasets to enrich the information?
  • Is Fare just a number, or is it correlated with other columns? If yes, which ones?

Students can add more such questions.

Advanced (assignment 2): Fit a model like LogisticRegression and RandomForestRegressor using sklearn.


Sunday 2021-03-28 20:06:10 by George Spelvin

lib/list_sort: optimize number of calls to comparison function

CONFIG_RETPOLINE has severely degraded indirect function call performance, so it's worth putting some effort into reducing the number of times cmp() is called.

This patch avoids badly unbalanced merges on unlucky input sizes. It slightly increases the code size, but saves an average of 0.2*n calls to cmp().

x86-64 code size 739 -> 803 bytes (+64)

Unfortunately, there's not a lot of low-hanging fruit in a merge sort; it already performs only n*log2(n) - K*n + O(1) compares. The leading coefficient is already at the theoretical limit (log2(n!) corresponds to K=1.4427), so we're fighting over the linear term, and the best mergesort can do is K=1.2645, achieved when n is a power of 2.
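
(Background note, not part of the original patch description: the K=1.4427 figure is just Stirling's approximation of the comparison-sort lower bound, written in the same form as above:

    log2(n!) = n*log2(n) - n*log2(e) + O(log n),    with log2(e) = 1/ln(2) ~= 1.4427

so any comparison sort averaging n*log2(n) - K*n + O(1) compares must have K no larger than about 1.4427.)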

The differences between mergesort variants appear when n is not a power of 2; K is a function of the fractional part of log2(n). Top-down mergesort does best of all, achieving a minimum K=1.2408, and an average (over all sizes) K=1.248. However, that requires knowing the number of entries to be sorted ahead of time, and making a full pass over the input to count it conflicts with a second performance goal, which is cache blocking.

Obviously, we have to read the entire list into L1 cache at some point, and performance is best if it fits. But if it doesn't fit, each full pass over the input causes a cache miss per element, which is undesirable.

While textbooks explain bottom-up mergesort as a succession of merging passes, practical implementations do merging in depth-first order: as soon as two lists of the same size are available, they are merged. This allows as many merge passes as possible to fit into L1; only the final few merges force cache misses.

This cache-friendly depth-first merge order depends on us merging the beginning of the input as much as possible before we've even seen the end of the input (and thus know its size).

The simple eager merge pattern causes bad performance when n is just over a power of 2. If n=1028, the final merge is between 1024- and 4-element lists, which is wasteful of comparisons. (This is actually worse on average than n=1025, because a 1024:1 merge will, on average, end after 512 compares, while 1024:4 will walk 4/5 of the list.)
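
(Background arithmetic, not from the patch: for two random sorted lists of lengths m and k, the expected number of comparisons in a standard two-way merge is

    m + k - m/(k+1) - k/(m+1)

For m=1024, k=1 that is about 1025 - 512 - 0.001 ~= 513 compares, i.e. the merge typically stops about halfway through the long list; for m=1024, k=4 it is about 1028 - 204.8 - 0.004 ~= 823 compares, i.e. roughly 4/5 of the combined list is walked.)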

Because of this, bottom-up mergesort achieves K < 0.5 for such sizes, and has an average (over all sizes) K of around 1. (My experiments show K=1.01, while theory predicts K=0.965.)

There are "worst-case optimal" variants of bottom-up mergesort which avoid this bad performance, but the algorithms given in the literature, such as queue-mergesort and boustrodephonic mergesort, depend on the breadth-first multi-pass structure that we are trying to avoid.

This implementation is as eager as possible while ensuring that all merge passes are at worst 1:2 unbalanced. This achieves the same average K=1.207 as queue-mergesort, which is 0.2*n better than bottom-up, and only 0.04*n behind top-down mergesort.

Specifically, it defers merging two lists of size 2^k until it is known that there are 2^k additional inputs following. This ensures that the final uneven merges triggered by reaching the end of the input will be at worst 2:1. This will avoid cache misses as long as 3*2^k elements fit into the cache.

(I confess to being more than a little bit proud of how clean this code turned out. It took a lot of thinking, but the resultant inner loop is very simple and efficient.)
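
As a rough, self-contained illustration of the merge-scheduling rule described above (a hypothetical sketch written for this note, not the kernel's lib/list_sort.c), the following program simulates which merges that rule performs for a given input size. Running it with, say, 1028 shows that every merge before the final cleanup pass is at worst 2:1 unbalanced:

```c
/* merge_schedule.c - simulate the deferred bottom-up merge schedule.
 * Each input element is pushed as a run of length 1; the number of
 * trailing 1-bits in the running element count selects how deep in the
 * pending-run stack to merge, and a merge only happens if some higher
 * bit of the count is set (i.e. enough extra elements are known to exist).
 */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	size_t n = (argc > 1) ? strtoull(argv[1], NULL, 0) : 1028;
	size_t runs[64];	/* pending run lengths; runs[0] is the oldest */
	size_t nruns = 0;
	size_t count;

	for (count = 0; count < n; count++) {
		size_t bits = count, depth = 0;

		while (bits & 1) {	/* count trailing 1-bits of count */
			bits >>= 1;
			depth++;
		}
		if (bits) {		/* merge the two runs 'depth' below the top */
			size_t i = nruns - 1 - depth;

			printf("merge %zu:%zu\n", runs[i - 1], runs[i]);
			runs[i - 1] += runs[i];
			while (++i < nruns)	/* close the gap in the stack */
				runs[i - 1] = runs[i];
			nruns--;
		}
		runs[nruns++] = 1;	/* the new element becomes a run of 1 */
	}

	/* End of input: merge the remaining runs, newest (smallest) first. */
	while (nruns > 1) {
		printf("final merge %zu:%zu\n", runs[nruns - 2], runs[nruns - 1]);
		runs[nruns - 2] += runs[nruns - 1];
		nruns--;
	}
	if (nruns)
		printf("done: one sorted run of %zu elements\n", runs[0]);
	return 0;
}
```

For example, with 12 inputs the two pending 4-element runs are merged only once elements 9 through 12 are known to exist, matching the "2^k additional inputs" rule described above.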

Refs:

Bottom-up Mergesort: A Detailed Analysis
Wolfgang Panny, Helmut Prodinger
Algorithmica 14(4):340--354, October 1995
https://doi.org/10.1007/BF01294131
https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.6.5260

The cost distribution of queue-mergesort, optimal mergesorts, and power-of-two rules
Wei-Mei Chen, Hsien-Kuei Hwang, Gen-Huey Chen
Journal of Algorithms 30(2):423--448, February 1999
https://doi.org/10.1006/jagm.1998.0986
https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.4.5380

Queue-Mergesort
Mordecai J. Golin, Robert Sedgewick
Information Processing Letters 48(5):253--259, 10 December 1993
https://doi.org/10.1016/0020-0190(93)90088-q
https://sci-hub.tw/10.1016/0020-0190(93)90088-Q

Feedback from Rasmus Villemoes [email protected].

Change-Id: Ic7e916ce59b2f6116c20c6c34887ee17f269c154
Link: http://lkml.kernel.org/r/fd560853cc4dca0d0f02184ffa888b4c1be89abc.1552704200.git.lkml@sdf.org
Signed-off-by: George Spelvin [email protected]
Acked-by: Andrey Abramov [email protected]
Acked-by: Rasmus Villemoes [email protected]
Reviewed-by: Andy Shevchenko [email protected]
Cc: Daniel Wagner [email protected]
Cc: Dave Chinner [email protected]
Cc: Don Mullis [email protected]
Cc: Geert Uytterhoeven [email protected]
Signed-off-by: Andrew Morton [email protected]
Signed-off-by: Linus Torvalds [email protected]


Sunday 2021-03-28 22:24:05 by Naughtyusername

fuck shit fuck should work now fuck i hate this fuck


Sunday 2021-03-28 23:01:13 by geocrex

FUCKING FINALLY (TEXT ALIGN LEFT FINALLY WORKS UPDATE)

THIS HAS BEEN SUCH A BURDEN AND I'VE JUST FIGURED OUT THAT FOR WHATEVER REASON I NEED TWO SEPERATE DIVS JUST TO TEXT ALIGN LEFT AND NOT BREAK EVERYTHING ELSE I THOUGHT MY CODE FOR THIS PAGE WAS GENUINELY CURSED OR SOMETHING HTML IS DUMB

also i added six
at the bottom of the page too lol

also i added a newgrounds link in too at the same time in the same update too why not lol

also social media links moved to the top instead

also "---" replaced with just simply
now too

also nvm, removed the "

date format: day/month/year

" i think it's pretty obvious and easy to figure out actually.

this is genuinely a whole changelog woah


Sunday 2021-03-28 23:57:17 by David Blue

...ALSO trying to remember how to quickly work gh pages

NOT MY SHIT

@heisenburger/night

https://heisenburger.github.io/night/


Sunday 2021-03-28 23:58:48 by root

sched/core: Fix ttwu() race

Paul reported rcutorture occasionally hitting a NULL deref:

  sched_ttwu_pending()
    ttwu_do_wakeup()
      check_preempt_curr() := check_preempt_wakeup()
        find_matching_se()
          is_same_group()
            if (se->cfs_rq == pse->cfs_rq) <-- BOOM

Debugging showed that this only appears to happen when we take the new code-path from commit:

2ebb17717550 ("sched/core: Offload wakee task activation if it the wakee is descheduling")

and only when @cpu == smp_processor_id(). Something which should not be possible, because p->on_cpu can only be true for remote tasks. Similarly, without the new code-path from commit:

c6e7bd7afaeb ("sched/core: Optimize ttwu() spinning on p->on_cpu")

this would've unconditionally hit:

smp_cond_load_acquire(&p->on_cpu, !VAL);

and if: 'cpu == smp_processor_id() && p->on_cpu' is possible, this would result in an instant live-lock (with IRQs disabled), something that hasn't been reported.

The NULL deref can be explained however if the task_cpu(p) load at the beginning of try_to_wake_up() returns an old value, and this old value happens to be smp_processor_id(). Further assume that the p->on_cpu load accurately returns 1, it really is still running, just not here.

Then, when we enqueue the task locally, we can crash in exactly the observed manner because p->se.cfs_rq != rq->cfs_rq: p's cfs_rq is from the wrong CPU, so we'll iterate into the non-existent parents and NULL deref.

The closest semi-plausible scenario I've managed to contrive is somewhat elaborate (then again, actual reproduction takes many CPU hours of rcutorture, so it can't be anything obvious):

				X->cpu = 1
				rq(1)->curr = X

CPU0				CPU1				CPU2

				// switch away from X
				LOCK rq(1)->lock
				smp_mb__after_spinlock
				dequeue_task(X)
				  X->on_rq = 0
				switch_to(Z)
				  X->on_cpu = 0
				UNLOCK rq(1)->lock

								// migrate X to cpu 0
								LOCK rq(1)->lock
								dequeue_task(X)
								set_task_cpu(X, 0)
								  X->cpu = 0
								UNLOCK rq(1)->lock

								LOCK rq(0)->lock
								enqueue_task(X)
								  X->on_rq = 1
								UNLOCK rq(0)->lock

// switch to X
LOCK rq(0)->lock
smp_mb__after_spinlock
switch_to(X)
  X->on_cpu = 1
UNLOCK rq(0)->lock

// X goes sleep
X->state = TASK_UNINTERRUPTIBLE
smp_mb();			// wake X
				ttwu()
				  LOCK X->pi_lock
				  smp_mb__after_spinlock

				  if (p->state)

				  cpu = X->cpu; // =? 1

				  smp_rmb()

// X calls schedule()
LOCK rq(0)->lock
smp_mb__after_spinlock
dequeue_task(X)
  X->on_rq = 0

				  if (p->on_rq)

				  smp_rmb();

				  if (p->on_cpu && ttwu_queue_wakelist(..)) [*]

				  smp_cond_load_acquire(&p->on_cpu, !VAL)

				  cpu = select_task_rq(X, X->wake_cpu, ...)
				  if (X->cpu != cpu)
switch_to(Y)
  X->on_cpu = 0
UNLOCK rq(0)->lock

However I'm having trouble convincing myself that's actually possible on x86_64 -- after all, every LOCK implies an smp_mb() there, so if ttwu observes ->state != RUNNING, it must also observe ->cpu != 1.

(Most of the previous ttwu() races were found on very large PowerPC machines.)

Nevertheless, this fully explains the observed failure case.

Fix it by ordering the task_cpu(p) load after the p->on_cpu load, which is easy since nothing actually uses @cpu before this.

Fixes: c6e7bd7afaeb ("sched/core: Optimize ttwu() spinning on p->on_cpu")
Reported-by: Paul E. McKenney [email protected]
Tested-by: Paul E. McKenney [email protected]
Signed-off-by: Peter Zijlstra (Intel) [email protected]
Signed-off-by: Ingo Molnar [email protected]
Link: https://lkml.kernel.org/r/[email protected]


< 2021-03-28 >