2,899,376 events, 1,485,894 push events, 2,312,242 commit messages, 174,223,680 characters
New data: 2021-03-04: See data notes.
Revise historical data: cases (AB, MB, SK).
Note regarding deaths added in QC today: “20 new deaths, but the total of deaths amounts to 10,455, due to the withdrawal of 1 death not attributable to COVID-19: 4 deaths in the last 24 hours, 7 deaths between February 25 and March 2, 9 deaths before February 25.” We report deaths such that our cumulative regional totals match today’s values. This sometimes results in extra deaths with today’s date when older deaths are removed.
Recent changes:
2021-01-27: Due to the limit on file sizes in GitHub, we implemented some changes to the datasets today, mostly impacting individual-level data (cases and mortality). Changes below:
- Individual-level data (cases.csv and mortality.csv) have been moved to a new directory in the root directory entitled “individual_level”. These files have been split by calendar year and named as follows: cases_2020.csv, cases_2021.csv, mortality_2020.csv, mortality_2021.csv. The directories “other/cases_extra” and “other/mortality_extra” have been moved into the “individual_level” directory.
- Redundant datasets have been removed from the root directory. These files include: recovered_cumulative.csv, testing_cumulative.csv, vaccine_administration_cumulative.csv, vaccine_distribution_cumulative.csv, vaccine_completion_cumulative.csv. All of these datasets are currently available as time series in the directory “timeseries_prov”.
- The file codebook.csv has been moved to the directory “other”.
We appreciate your patience and hope these changes cause minimal disruption. We do not anticipate making any other breaking changes to the datasets in the near future. If you have any further questions, please open an issue on GitHub or reach out to us by email at ccodwg [at] gmail [dot] com. Thank you for using the COVID-19 Canada Open Data Working Group datasets.
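For anyone updating code that previously read cases.csv and mortality.csv from the root directory, here is a minimal loading sketch for the year-split files (assumptions: pandas is available and the script is run from the repository root; only the file names listed above come from this note):

```python
# Minimal sketch: load the year-split individual-level files and recombine them.
# Assumes pandas and the repository root as the working directory.
import pandas as pd

years = (2020, 2021)
cases = pd.concat(
    [pd.read_csv(f"individual_level/cases_{y}.csv") for y in years],
    ignore_index=True,
)
mortality = pd.concat(
    [pd.read_csv(f"individual_level/mortality_{y}.csv") for y in years],
    ignore_index=True,
)
print(len(cases), "case rows,", len(mortality), "mortality rows")
```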
- 2021-01-24: The columns "additional_info" and "additional_source" in cases.csv and mortality.csv have been abbreviated similarly to "case_source" and "death_source". See the notes in README.md from 2020-11-27 and 2021-01-08.
Vaccine datasets:
- 2021-01-19: Fully vaccinated data have been added (vaccine_completion_cumulative.csv, timeseries_prov/vaccine_completion_timeseries_prov.csv, timeseries_canada/vaccine_completion_timeseries_canada.csv). Note that this value is not currently reported by all provinces (some provinces have all 0s).
- 2021-01-11: Our Ontario vaccine dataset has changed. Previously, we used two datasets: the MoH Daily Situation Report (https://www.oha.com/news/updates-on-the-novel-coronavirus), which is released weekdays in the evenings, and the “COVID-19 Vaccine Data in Ontario” dataset (https://data.ontario.ca/dataset/covid-19-vaccine-data-in-ontario), which is released every day in the mornings. Because the Daily Situation Report is released later in the day, it has more up-to-date numbers. However, since it is not available on weekends, this leads to an artificial “dip” in numbers on Saturday and “jump” on Monday due to the transition between data sources. We will now exclusively use the daily “COVID-19 Vaccine Data in Ontario” dataset. Although our numbers will be slightly less timely, the daily values will be consistent. We have replaced our historical dataset with “COVID-19 Vaccine Data in Ontario” as far back as they are available.
- 2020-12-17: Vaccination data have been added as time series in timeseries_prov and timeseries_hr.
- 2020-12-15: We have added two vaccine datasets to the repository, vaccine_administration_cumulative.csv and vaccine_distribution_cumulative.csv. These data should be considered preliminary and are subject to change and revision. The format of these new datasets may also change at any time as the data situation evolves.
Note about SK data: As of 2020-12-14, we are providing a daily version of the official SK dataset that is compatible with the rest of our dataset in the folder official_datasets/sk. See below for information about our regular updates.
SK transitioned to reporting according to a new, expanded set of health regions on 2020-09-14. Unfortunately, the new health regions do not correspond exactly to the old health regions. Additionally, case time series using the new boundaries are not provided for dates earlier than August 4, making it impossible to provide a complete time series using the new boundaries.
For now, we are adding new cases according to the list of new cases given in the “highlights” section of the SK government website (https://dashboard.saskatchewan.ca/health-wellness/covid-19/cases). These new cases are roughly grouped according to the old boundaries. However, health region totals were redistributed when the new boundaries were instituted on 2020-09-14, so while our daily case numbers match the numbers given in this section, our cumulative totals do not. We have reached out to the SK government to determine how this issue can be resolved. We will rectify our SK health region time series as soon as it becomes possible to do so.
OMG modified CNN 4 days before delivery
Unbelievable, dis is gonna be gud
(Forgive me, it's 4 AM and I wrote the paper the entire day and night)
Dominion Leaders Fixed
Fuck you Pcola this is how it was always meant to be
90% sure this doesn't work, but I'm going out of town and I want to get this version on my laptop and this is the easiest way. Don't judge me, alright? I might use version control as a synchronization method because I'm lazy, you might use it for actual work reasons, we're the same, bro. The same. DON'T judge my life choices. I'll use Git how I want. If I want to commit every time I enter a character for maximum rollback flexibility, I fucking will. There's nothing you can do to stop me.
freezers can't be built on same-layer pipes (#57278)
Fixes #51993 (a hell of an issue to do with shitty behavior when two pipenets connect to the same pump, I hate players -Lemon). Thermomachines can't be built on the same pipe layer as existing pipes; this still allows piping on other layers under the machine.
THE SHOW GOES ON. The memes are dank, the queue is slappin, there is food and clean water, life is good. I am on a whole nother level. Girl he only fucked you over cause you let him. Do all i can to show you youre special.
Do some sound and music work. Rainbow Factory seems to be ContentID'd a lot... which is a bloody shame. I flippin' hate YouTube's ContentID system.
org: Remove evil-org
I dislike its autoformatting; it can get wonky, e.g. thinking two separate lists are the same list, and then it becomes painful to correct.
Hire specutils/specreduce developer
The Moore budget earmarks $22,500 specifically for spectroscopy development. I submit this proposal to ensure that there is at least one spectroscopy proposal, and promise to oversee the work to the best of my abilities, but I'm happy to rescind it if more detailed proposals are submitted.
Spectroscopy is a weak spot for astropy: both specutils and, even more so, specreduce need love. This proposal is to hire a developer with experience in spectroscopy from the astropy community to drive development forward.
Create 人CRYPTO人
A .scpt file for running in AppleScript that calls on Safari to navigate to an online market price aggregator and pull real-time prices of cryptocurrencies using JavaScript. The script then opens Excel and writes the data to a master workbook for collecting historical information on the prices, and is used to create trend lines and forecasts with the given data.
On my local machine I then used Automator to schedule an alarm that calls the script without user input every 15 minutes, after which the script closes out Safari and Excel. I used Automator instead of another program because I am learning coding as I go and couldn't figure out how to get the idle handler to run in an Application version of the file. It seems there are too many functions in the one file for the idle handler to run uninterrupted in an always-running Application version. Currently there is a bug where the script won't return the month value correctly in Excel, which is bothering me, and I haven't figured out how to remedy it yet. Working the bugs out as I find them and continuing on the Excel master file, aggregating what I can of the 8,697 cryptocurrencies there are available to buy stock in on https://coinmarketcap.com.
I'm not sure if I am going to share the Excel data I collect yet, as I spent a fair amount of time and money learning the techniques for assembling the data and generating mathematical models of the price trends. If anyone is interested in the project and the historical and current data I am collecting, I would be interested in sharing it for compensation for the time it takes to compile it. The AppleScript coding was learned in my spare time using free resources, but the techniques used for forecasting were learned at university, Microsoft Excel is a yearly investment, and my shiny Apple MacBook cost me more than a fancy date in Nevada. I would like to receive some return on that investment before just uploading it all. After all, the information was free to collect, and if one is really that interested in the data one could invest the time and energy to find their own crystal ball to the crypto market. Plus I wouldn't want to insinuate that my forecasts are all that amazing to behold.
My current progress shows, with the volatility of Bitcoin at least, that the models are fairly weak over the long term. I could focus on a trending section of the data, but that takes all the confidence level out of forecasting in the first place. Additional work needs to be done collecting market prices and socio-economic data to generate cross-correlations between indicators, in order to find a trend that holds strong and reliable over time and to put any confidence level in what appears to be a pretty random free-for-all market bonanza with Bitcoin. I think the best I can hope to do with the script is keep a watchful eye over time, build a familiarity with the data by modeling when I'm free to do it, and possibly take advantage of a market spike in Bitcoin that would allow me to invest at a better initial loss. With the script running continuously I could maybe figure out a way to incorporate modeling cross-currency trading instead of buying coin outright; there's at least less security risk in dealing with a middleman. I have a lot to learn about stock trading before I would feel comfortable investing anything but pocket lint in the variability and chance I see in Bitcoin currently, though. Maybe additional research into the market as a whole will reveal the ideal time and stock to enter the game.
Again, feel free to use the AppleScript as you wish for your own personal knowledge. It was cool to learn how to automate something and it is still interesting to pick away at for the time being at least. I don't think I will continuously upload unless I see an interest in the content. And this project won't become any more professional over time I'm pretty sure so don't expect me to jump through any real hoops unless there is some kind of real return for the work.
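Not the author's AppleScript, but a rough Python analogue of the fetch-and-append loop described above, for readers who want the general shape of it. The CoinGecko endpoint, the workbook name, and the coin list are stand-ins I chose, not anything from the original project; scheduling (Automator, cron, launchd, etc.) is left to the reader.

```python
# Rough Python analogue of the described workflow: poll a public price API and
# append a timestamped row per coin to a master workbook. All names here
# (endpoint, workbook, coins) are placeholders, not the author's setup.
import datetime
import requests
from openpyxl import Workbook, load_workbook

WORKBOOK = "crypto_prices.xlsx"                         # hypothetical master workbook
URL = "https://api.coingecko.com/api/v3/simple/price"   # stand-in price source

def snapshot(coins=("bitcoin", "ethereum")):
    prices = requests.get(
        URL, params={"ids": ",".join(coins), "vs_currencies": "usd"}, timeout=30
    ).json()
    try:
        wb = load_workbook(WORKBOOK)
    except FileNotFoundError:
        wb = Workbook()
        wb.active.append(["timestamp", "coin", "usd"])
    ws = wb.active
    now = datetime.datetime.now().isoformat(timespec="seconds")
    for coin in coins:
        ws.append([now, coin, prices[coin]["usd"]])  # one row per coin per poll
    wb.save(WORKBOOK)

if __name__ == "__main__":
    snapshot()  # call this every 15 minutes from a scheduler of your choice
```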
bazel: add way to run lint unit tests with bazel run
I tried to get these tests working in the Bazel sandbox (for posterity:
https://github.com/rickystewart/cockroach/tree/lint_take2), but it's
pretty difficult IMO -- I don't really think it's feasible, and if it
can be done, we'll have to resort to a bunch of ugly hacks that I think
will end up being pretty troublesome for remote execution. So just mark
these tests as decisively not working in Bazel, and provide an alternate
way to run them. Notably, this will mean that they can't be in the
normal unit test suites, can't be cached, etc. Over time we'll have more
than a few tests that can't just be run with bazel test, so that's not
necessarily concerning.
We implement this as a series of thin sh_binary rules that wrap the
test targets proper. The shell scripts set an appropriate PATH and
GOROOT, cd to a subdirectory of $BUILD_WORKSPACE_DIRECTORY, and
then just run the test binary with the passed-in arguments, as in:
bazel run //build/bazelutil:fmtsafe_lint_tester -- -test.v
Also remove the broken_in_bazel tag from these tests and replace it
with a bespoke lint tag (they're not "broken", they just need to be
run in this special way from now on -- this will help distinguish the
tests that are really still broken), and delete some useless files that
Gazelle generated at some point in the past but don't serve a purpose.
Next steps from here will include running the linter itself from a Bazel build.
Release justification: Non-production code change
Release note: None
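Purely as an illustration of what each wrapper does (the real wrappers are shell scripts behind sh_binary rules; the Python below, its paths, and its env values are stand-ins, not the actual implementation):

```python
# Sketch of the wrapper behavior in Python (illustrative only; the real
# wrappers are generated shell scripts). Paths and env values are placeholders.
import os
import sys

def run_lint_tester(test_binary: str, subdir: str) -> None:
    env = dict(os.environ)
    env["PATH"] = "/usr/local/go/bin:" + env.get("PATH", "")  # placeholder PATH
    env["GOROOT"] = "/usr/local/go"                           # placeholder GOROOT
    # bazel run sets BUILD_WORKSPACE_DIRECTORY to the root of the source checkout.
    os.chdir(os.path.join(os.environ["BUILD_WORKSPACE_DIRECTORY"], subdir))
    # Hand any arguments after "--" straight to the test binary (e.g. -test.v).
    os.execve(test_binary, [test_binary] + sys.argv[1:], env)
```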
Try using a newer material ui core version. Why is this stupid npm shit so fucking hard and idiotic?
i said we fucking pogging today
i love the city lights but you never looked so bright, oh this is all i wanna know tell me you feel alright
Build 1.0.0.122
Fix: Missing U48 item icons and "The Changestone" augments added. These were done for the last release, but I messed up with my source control and failed to include them.
Fix: "Legendary Tinkerer's Goggles" now correctly gives 15% melee alacrity (not 10) (Reported by Kamdragon)
Fix: "Legendary Enforcer's Coat" now has the correct max dex bonus of 10 (not 12) (Reported by Hoidx)
Fix: The "Haste" spell's Melee Alacrity no longer stacks with other alacrity sources (Reported by Kamdragon)
Fix: "Vistani: Rapid Attack" will now show inactive entries for Doublestrike and Doubleshot
Fix: Spell "Angelskin" updated to match wiki description (Reported by Hoidx)
New: New DR types of Fire and Force added to support new weapons
New: Ruby of 3d6/4d6/5d6/6d6/7d6 and 8d6 are now available
New: The drop list height displayed during augment selection for items has been increased for better selection
New: Augment tooltips during item setup now display the augment's minimum level
Fix: The helmet "Compliance" now has the correct effects list (Reported by krizinja)
Fix: "The Heart of Suulomades" now has its Resistance +18 buff (Reported by krizinja)
Fix: Item "Legendary Fabricator's Bracers" can now correctly select +21 Constitution as one of its upgrade augments (Reported by krizinja)
Fix: "Hyrsam's Fiddle" now has its missing "Eminence of Winter" set bonus and its missing "Spellcasting Implement" bonus (Reported by krizinja)
Fix: The feat "Magical Training" now grants proficiency with orbs
New Items:
---Celestial Insignia (Necklace - feytwisted autumn)
---The Legendary Queen's Sceptre (Club)
---Staff of the Winter Solstice (Quarterstaff - feytwisted)
---The Obsidian Reaver (Dwarven War Axe - feytwisted)
---The Magamatic Cleaver (Great Axe - feytwisted)
---The Bitterstar (Morningstar - feytwisted)
---The Bitter Blade (Bastard Sword - feytwisted)
---Mirana of the Flame (Falchion - feytwisted)
---Rauven of the Frost (Longsword - feytwisted)
---Eidolon of the Shadow (Great Sword - feytwisted)
Set Bonus Images Harvested:
---Pain and Suffering
I got a new keyboard
It FEELS SO GOOD my god. So quiet, so premium, so heavy. I love it.
Finally passes, but seems like a bit of a hack
Many changes to make it work more like the actual hardware
- Use len instead of len+1
- Use dmcen flag instead of dmcsilent
- The sample byte buffer can be in one of 3 states: (0) no sample byte has been loaded; (1) a sample byte is loaded and is currently playing; (2) a sample byte is loaded but is not yet playing
Some interesting notes from analyzing Visual2A03:
- Nearly everything seems to be synchronized to the APU tick
- But writing to $4015 bit 4 enables PCM immediately
- Fetch only occurs if the sample buffer is empty. If you start the channel and a sample is already available, it will continue to play that sample before fetching the next one.
- Timer is a 9-bit LFSR with taps at bits 4 and 8. It resets at $100. When it overflows, pcm_bit is incremented, pcm_sr is shifted, and the frequency is reloaded (from $4010)
- When pcm_bit overflows from 7->0, a memory fetch is triggered as long as the DMC channel is enabled
- When the fetch finishes, the length counter is checked; if it is non-zero, it is decremented. Otherwise, the dmc_en flag will be cleared if the loop flag is not set. In either case the internal pcm_lc/pcm_a are reloaded.
apu_tests/7-dmc_basics #19 gave me a lot of trouble (as did others, it seems, from reading forum.nesdev.com).
The test is a bit confusing: it starts a 17-byte sample (by loading 1 into $4013), then waits for the DMC enable flag to be cleared (line 174). Note that when the flag is cleared, the last fetched byte is still being played. It then waits for 4 more "dmc_cycles" + 30 cpu ticks. By this point the sample has long since finished. It then starts another 1-byte sample (i.e. loading 0 into $4013, see line 178). There is no currently playing sample, and the sample buffer is empty, so a byte will be fetched. However, the fetched byte won't be loaded and start playing until pcm_bit rolls over to 0. The test is written so this happens immediately after the dmc_en flag is cleared. (The dmc_en flag is cleared because pcm_lc is 0 after the fetch.)
At this point the dmc_en flag should be clear, but the fetched sample will be playing. The test checks for this (see line 182). The test then starts another 1-byte sample (line 183), which sets the dmc_en flag. However, because the current sample is still playing, this sample byte is not yet fetched. After the byte fetched at line 178 is finished, this next byte is fetched, dmc_en is cleared, and the final byte continues to play. The test waits quite a bit longer (4 "dmc_cycles"), and makes sure that the dmc_en flag is clear.
The tricky part (for me) was realizing that the sample buffer needed 3 states. If you just implement it with 2 states (has/doesn't have a byte), then it doesn't work properly. The "has sample" flag should be set as long as there is a sample byte playing. But if that's true, then we can't tell whether we should clear the "has sample" flag when copying the sample byte into the shift register. The way to know is to track whether the sample byte has been played yet.
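Here is a minimal Python sketch of that three-state buffer as I read the description above; the class and method names are my own, and the real fetch timing, DMA, and LFSR timer are omitted:

```python
from enum import Enum

class Buf(Enum):
    EMPTY = 0     # no sample byte has been loaded
    PLAYING = 1   # a sample byte is loaded and is currently playing
    PENDING = 2   # a sample byte is loaded but is not yet playing

class DmcSketch:
    """Three-state sample buffer sketch (names and structure are assumptions)."""
    def __init__(self):
        self.state = Buf.EMPTY
        self.buf_byte = 0
        self.pcm_sr = 0  # output shift register

    def fetch_done(self, byte):
        # Fetches only happen while the buffer is EMPTY; the fetched byte
        # waits here until pcm_bit next rolls over to 0.
        self.buf_byte = byte
        self.state = Buf.PENDING

    def pcm_bit_rollover(self):
        # pcm_bit wrapped 7 -> 0: whatever was playing has finished, and a
        # pending byte (if any) is copied into the shift register and starts
        # playing. Returns True if the buffer is now empty, so the caller can
        # trigger a fetch (subject to the DMC channel being enabled).
        if self.state == Buf.PLAYING:
            self.state = Buf.EMPTY
        if self.state == Buf.PENDING:
            self.pcm_sr = self.buf_byte
            self.state = Buf.PLAYING
        return self.state == Buf.EMPTY
```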
[NONMODULAR] get this ugly ass ID shit off our chests (#3849)
- get this ugly ass ID shit off our chests
- okay I'll do it better
- EYYYY
- let's make this one file changed
- proper commenting
i finally got location fucking working piece of shit i hate swift
Started Adding Sheep/Deer + Alex Confess Scene
-Added code base for sheep and deer.
-Started on pixel art; right now the deer and sheep horns are WIP, but they're in 1x1 instead of 2x2 and need to be fixed.
-Added extra scenes to Alex's confession, including altered lines if you push Alex away after confessing and the ability for Alex to kiss you during the confession, losing your first kiss romantically. (I want to make it so that Alex will reject you if their love is too low, but last time I tried copying that if clause it didn't work, so I'll need to do some testing first.)