< 2021-02-26 >

2,824,737 events, 1,446,569 push events, 2,252,638 commit messages, 172,708,178 characters

Friday 2021-02-26 00:03:38 by Jordan Deren

Merge pull request #44 from Fujanetti/fucking-nav

fuck you motherfuck navbar


Friday 2021-02-26 01:19:13 by kil0byt3

EliteMeats and Nelston send their regards

  • RandomPortals
  • Portals are fucking dope now thanks to Nelston/kilo collab
  • we can configure a ton of shit
  • for now just re-implemented nether and aether portals
  • added coloring of portals and shit too
  • custom sounds for portals
  • nicer aether portal texture
  • disabled inspirations portal recoloring
  • disabled aether portals to use this one instead
  • rem NetherPortalFix because this does same thing
  • works good, tested
  • got rid of kinda shitty loading screen tip

Friday 2021-02-26 01:32:38 by Adam Paszke

Add support for non-zero (but still not-None) out_axes in pmap

Previously pmap didn't have the out_axes parameter (unlike vmap), but its semantics would match the specification of out_axes=0 (i.e. all outputs should be stacked along the first axis). This patch makes it possible to specify non-zero values for out_axes, but more importantly it lays down the groundwork for xmap which will have to use some extremely similar (if not the same) code paths.
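
For orientation, here is a minimal usage sketch of the new parameter (my own illustration, not taken from the patch; it runs with whatever local device count is available):

import jax
import jax.numpy as jnp

n = jax.local_device_count()
x = jnp.arange(n * 4.0).reshape(n, 4)

out0 = jax.pmap(lambda v: v * 2)(x)              # default out_axes=0: shape (n, 4)
out1 = jax.pmap(lambda v: v * 2, out_axes=1)(x)  # mapped axis moved: shape (4, n)
assert out0.shape == (n, 4) and out1.shape == (4, n)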

One thing to note is that when I started this implementation I was also planning to add support for out_axes=None, which would allow us to stop using the unbroadcast hack, and most of the code is written with that in mind. Unfortunately it turned out that the correct implementation of the transpose rule for maps that do allow unmapped outputs would require me to pretty much simulate what avals-with-names is supposed to achieve. Technically replicated outputs should work today, for as long as the user does not do reverse-mode AD of pmap. But I decided that it's better to just disable them altogether until we can get the full and correct behavior.

* Implementation details *

This patch is significantly more involved than the one that implemented general in_axes support. That previous one at least had the foundation of mapped_invars which already behaved pretty similarly to general in_axes. From a quick glance one might think that out_axes should behave similarly to in_axes, but it turns out that this is not the case, at least not if we're interested in keeping those primitives final-style.

** Thunking **

The biggest difficulty with handling out_axes in final style primitives is that we want to treat them as a prefix of the output pytree, but we don't know the structure of the output pytree until the user function is evaluated! And the user function is not evaluated until we've applied all transforms and reached the impl rule! The solution to this problem is "straightforward": instead of putting out_axes as a primitive parameter, we bundle an out_axes_thunk which can only be called successfully after the wrapped function has been executed. The thunk returns a list of flat out_axes, expanded to the output pytree. However, the thunking presents us with two problems:

*** Transformations ***

Each transformation that modifies the number of outputs needs to ensure that the thunk is updated to reflect the new values. To make things worse a lot of the transforms can learn the number of added outputs only after the wrapped function is evaluated, which leads to the following "time travel" pattern that can be found in most Traces:

@lu.transformation_with_aux
def compute_output_statistic(*args, **kwargs):
  outputs = yield args, kwargs               # evaluate the wrapped function
  yield outputs, compute_statistic(outputs)  # populate the statistic store
wrapped_fun, output_statistic = compute_output_statistic(wrapped_fun)

def new_out_axes_thunk():
  old_out_axes = params['out_axes_thunk']()  # flat out_axes from the old thunk
  return compute_new_out_axes(old_out_axes, output_statistic())

primitive.bind(wrapped_fun, dict(params, out_axes_thunk=new_out_axes_thunk))

The reason why we have to structure the code this way is that we can only specify a new out_axes_thunk before we bind the primitive, but we need the outputs of bind to know how to update the out_axes_thunk. To make things worse, the implementation of bind is allowed to make a call to out_axes_thunk immediately after wrapped_fun is evaluated. This means that we cannot compute the output statistic in the implementation of the transformation, but we have to use an extra lu.transformation_with_aux for that (this populates the statistic store immediately after wrapped_fun is evaluated).

The compute_statistic function depends on the transform in question. E.g. in the JVP trace it counts the number of non-zero tangent results.
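
As a concrete illustration, here is a sketch of what such a statistic could look like for the JVP case (the names and output layout here are assumptions, not the actual trace code):

ZERO = object()  # stand-in for JAX's symbolic zero tangent

def compute_statistic(outputs):
  # A JVP-transformed function returns primals followed by tangents; record
  # which tangents are symbolically zero, since those are dropped and the
  # flat out_axes list has to be adjusted to match the surviving outputs.
  n = len(outputs) // 2
  return [t is ZERO for t in outputs[n:]]

assert compute_statistic([1.0, 2.0, 0.5, ZERO]) == [False, True]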

The situation is of course further complicated when we take post_process_map into account. The new process_env_traces now always sets up this funny time travel trampoline just in case it ends up being necessary, and post_process_map is now expected to return (outputs, (todo, out_axes_transform)) instead of just (outputs, todo).

*** Compilation cache ***

Because the out_axes_thunks are now arguments to a global compilation cache (in the form of lu.cache decorator on parallel_callable), we have to ensure that they implement hash and ==. This is what forces us to add some slightly weird helpers such as _hashable_function and _ignore_elem_list. The code that uses those makes an assumption that the output pytree depends deterministically on the identity of the wrapped function, which I think is in line with general JAX assumptions. Otherwise the cache would depend on the identity of the thunk, which changes with every function invocation.
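
The idea behind those helpers can be sketched like this (a minimal sketch with assumed names, not the actual JAX internals): make the thunk hash by the identity of the user function it was derived from, rather than by the thunk object itself.

class HashableThunk:
  def __init__(self, thunk, key):
    self.thunk = thunk  # the out_axes thunk to call
    self.key = key      # e.g. the identity of the wrapped user function

  def __call__(self):
    return self.thunk()

  def __hash__(self):
    return hash(self.key)

  def __eq__(self, other):
    return isinstance(other, HashableThunk) and self.key == other.key

def user_fn(x):
  return x + 1

# Two thunks derived from the same user function compare (and hash) equal,
# so they can hit the same cache entry even though the closures differ.
assert HashableThunk(lambda: [0], user_fn) == HashableThunk(lambda: [0], user_fn)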

Relaxing the global constraint on the cache (e.g. allowing each pmap(f) instance to have a separate cache) would make this easier too.

* Why final style? *

Now, making the primitives initial-style would remove the necessity for thunking, because we could have obtained the output pytree right when the function is wrapped. I assumed there is a good argument for making pmap pretend that it's a final-style primitive, but I'm not sure why that is? I hope it's something better than just avoiding a single jaxpr tracing.


Friday 2021-02-26 01:40:09 by Jean-Paul R. Soucy

New data: 2021-02-25: See data notes.

Revise historical data: cases (BC, MB, ON, SK).

Note regarding deaths added in QC today: “16 new deaths, but the total of deaths amounts to 10,361 due to the withdrawal of 1 death not attributable to COVID-19: 5 deaths in the last 24 hours, 9 deaths between February 18 and February 23, 2 deaths before February 18.” We report deaths such that our cumulative regional totals match today’s values. This sometimes results in extra deaths with today’s date when older deaths are removed.

Recent changes:

2021-01-27: Due to the limit on file sizes in GitHub, we implemented some changes to the datasets today, mostly impacting individual-level data (cases and mortality). Changes below:

  1. Individual-level data (cases.csv and mortality.csv) have been moved to a new directory in the root directory entitled “individual_level”. These files have been split by calendar year and named as follows: cases_2020.csv, cases_2021.csv, mortality_2020.csv, mortality_2021.csv. The directories “other/cases_extra” and “other/mortality_extra” have been moved into the “individual_level” directory.
  2. Redundant datasets have been removed from the root directory. These files include: recovered_cumulative.csv, testing_cumulative.csv, vaccine_administration_cumulative.csv, vaccine_distribution_cumulative.csv, vaccine_completion_cumulative.csv. All of these datasets are currently available as time series in the directory “timeseries_prov”.
  3. The file codebook.csv has been moved to the directory “other”.

We appreciate your patience and hope these changes cause minimal disruption. We do not anticipate making any other breaking changes to the datasets in the near future. If you have any further questions, please open an issue on GitHub or reach out to us by email at ccodwg [at] gmail [dot] com. Thank you for using the COVID-19 Canada Open Data Working Group datasets.

  • 2021-01-24: The columns "additional_info" and "additional_source" in cases.csv and mortality.csv have been abbreviated similarly to "case_source" and "death_source". See note in README.md from 2020-11-27 and 2021-01-08.

Vaccine datasets:

  • 2021-01-19: Fully vaccinated data have been added (vaccine_completion_cumulative.csv, timeseries_prov/vaccine_completion_timeseries_prov.csv, timeseries_canada/vaccine_completion_timeseries_canada.csv). Note that this value is not currently reported by all provinces (some provinces have all 0s).
  • 2021-01-11: Our Ontario vaccine dataset has changed. Previously, we used two datasets: the MoH Daily Situation Report (https://www.oha.com/news/updates-on-the-novel-coronavirus), which is released weekdays in the evenings, and the “COVID-19 Vaccine Data in Ontario” dataset (https://data.ontario.ca/dataset/covid-19-vaccine-data-in-ontario), which is released every day in the mornings. Because the Daily Situation Report is released later in the day, it has more up-to-date numbers. However, since it is not available on weekends, this leads to an artificial “dip” in numbers on Saturday and “jump” on Monday due to the transition between data sources. We will now exclusively use the daily “COVID-19 Vaccine Data in Ontario” dataset. Although our numbers will be slightly less timely, the daily values will be consistent. We have replaced our historical dataset with “COVID-19 Vaccine Data in Ontario” as far back as they are available.
  • 2020-12-17: Vaccination data have been added as time series in timeseries_prov and timeseries_hr.
  • 2020-12-15: We have added two vaccine datasets to the repository, vaccine_administration_cumulative.csv and vaccine_distribution_cumulative.csv. These data should be considered preliminary and are subject to change and revision. The format of these new datasets may also change at any time as the data situation evolves.

https://www.quebec.ca/en/health/health-issues/a-z/2019-coronavirus/situation-coronavirus-in-quebec/#c47900

Note about SK data: As of 2020-12-14, we are providing a daily version of the official SK dataset that is compatible with the rest of our dataset in the folder official_datasets/sk. See below for information about our regular updates.

SK transitioned to reporting according to a new, expanded set of health regions on 2020-09-14. Unfortunately, the new health regions do not correspond exactly to the old health regions. Additionally, the provided case time series using the new boundaries do not exist for dates earlier than August 4, making it impossible to provide a time series using the new boundaries.

For now, we are adding new cases according to the list of new cases given in the “highlights” section of the SK government website (https://dashboard.saskatchewan.ca/health-wellness/covid-19/cases). These new cases are roughly grouped according to the old boundaries. However, health region totals were redistributed when the new boundaries were instituted on 2020-09-14, so while our daily case numbers match the numbers given in this section, our cumulative totals do not. We have reached out to the SK government to determine how this issue can be resolved. We will rectify our SK health region time series as soon as it becomes possible to do so.


Friday 2021-02-26 01:44:50 by ochlocracy

Initial commit for ticker bot. A lot of random changes. My commit comments suck and I will hate myself later for it.


Friday 2021-02-26 03:43:35 by Magical Marvelous MADMADMAD Mister Mim !

Updated links with messages of nurturing encouragement in their style of speech, only more direct and less 'love me i "love" you, you'll find out later i'm fucking scum' etc etc.


Friday 2021-02-26 04:47:10 by Nat Mote

Include sighashes in saved state

Summary: This diff includes the sighash for each file in the saved state, loads the sighash from saved state if the flag is turned on, and makes the changes needed to take advantage of those loaded sighashes for recheck optimizations.

Implementation

This change is made up of two main components:

  • First, there is the machinery to save the sig hashes to saved state and then load them again (on the condition that the appropriate flowconfig flag is set). This component makes up the bulk of the changes here, but is very simple.
  • Second, there are the changes to Merge_stream and Context_heaps that change the merging logic so that we can actually take advantage of these loaded sighashes. These changes are more interesting, and deserve special attention.

I'll focus on this second component here.

At first blush, it may seem like it would be possible to simply load in the sighashes from saved state, and allow the existing machinery to skip merging and checking whenever the newly-computed sighashes match the old, loaded ones. Unfortunately, this is a fantasy.

The issue is that we may skip merging a component because none of its dependencies changed, which would mean that we still don't have a sig context for it. Then, we may later need to merge or check a downstream component (for example, if another of its dependencies has changed), and to do that we'll need the missing sig context.

To address this issue, it would be possible to carefully track every component that has a missing sig context, and backtrack if it is needed for a downstream file. I may still implement such logic in the future. However, for now I think it makes sense to simply merge every file with a missing sig context, but use the newly-available sighashes to drive skipping in the check phase. This is the approach that I have taken in this diff.

Previously, we didn't distinguish between newly-added sig contexts and those that changed. Now, however, we have the ability to know that a sig context has not changed (based on the sighashes loaded from saved state) even when it is new. So, we need to add a case for that -- if the sighashes are the same, but there wasn't an old sig context, we need to write it, but not add it to the sig_new_or_changed set. This is what the changes in Merge_stream and Context_heaps do.

So, on the initial recheck after a lazy init, we will merge all of the files that we consider for merge, just like we do today. However, we will use the sighashes to gather a set of files that have actually changed, and use that set to determine which files we can skip in the check phase. This should, in theory, reduce the number of files we check in the check phase of an initial recheck after a lazy init, thereby reducing the time spent in these early rechecks.
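
A self-contained sketch of that decision (illustrative Python with assumed names; Flow itself is OCaml and the real machinery differs):

def plan_check_phase(considered_files, new_sighashes, saved_sighashes):
  # Merge everything that was considered, but only re-check the files whose
  # newly computed signature hash differs from the one loaded from saved state.
  return {f for f in considered_files
          if saved_sighashes.get(f) != new_sighashes[f]}

new = {"a.js": "h1", "b.js": "h2-changed", "c.js": "h3"}
saved = {"a.js": "h1", "b.js": "h2", "c.js": "h3"}
assert plan_check_phase(new, new, saved) == {"b.js"}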

Caveats

  • There are some performance ramifications (see below).
  • There are some serious hash stability issues that hamper the effectiveness of this optimization:
    • When testing on real-world inputs, I found that ~40% of files had sighashes that were unstable from run-to-run. They appear to be stable for the life of a given Flow server. I have not investigated this further. I believe that the hashes computed by types-first 2.0 will be more stable, so I am going to put off a detailed investigation in the hopes that this resolves itself.
    • The hashes include the absolute file paths of various source files, so they will not be stable from machine-to-machine or repo-to-repo unless the absolute path to the repo is identical across invocations. This assumption is unlikely to hold in practice. I plan to address this issue next.
  • This may cause us to miss some already-committed errors, similar to how we already can miss already-committed errors in lazy mode. However, the extent to which this is true would increase: today, if you modify a file, Flow will display errors for all of its dependents after the initial lazy recheck. With this change, Flow would only display errors for dependents if your changes might have actually affected those dependents. This could be a plus or a minus depending on your point of view.
  • As part of the flow status output, when Flow is running in lazy mode, we report the number of files that are currently being checked. Without additional changes, we will now report the number of files that we have at one point considered for checking, even if we ended up skipping them. This might merit some additional thought, but it's unlikely to be a noticeable difference for users in practice.

Performance

This slows down startup time when turned on (but only barely). It does not appear to affect startup time when turned off. It also slightly increases the size of the saved state blob.

When testing on a benchmark recheck, it was only able to skip checking for less than 1% of the files on an initial recheck, due to the hash stability issues mentioned above. However, tests pass, and this has no measurable performance regression when it's turned off, so I would like to land it and then work on improving hash stability later.

Once the hash stability issues are worked out, we should be able to skip as many files in the check phase of an initial lazy recheck as we would for any other recheck. I believe this will be a significant performance improvement, though we will still have to pay the cost of parsing and merging all of the files. Fortunately, the cost of merging will go down dramatically with types-first 2.0, and the check phase is the most costly even today.

Reviewed By: samwgoldman

Differential Revision: D25137486

fbshipit-source-id: 3e9e5846baec287c880211b13171629668457dde


Friday 2021-02-26 04:56:17 by Ícaro C. Capobianco

FUCK YOU YARN FUCK YOU I spent hours learning to configure this SHIT properly and it wants me to fucking have a fucking binary file in my repo, FUCK YOU yarn v2 yarnpkg/berry#2474


Friday 2021-02-26 04:56:17 by Ícaro C. Capobianco

FUCK YARN V2

yarnpkg/berry#2474 Literally fuck you I am not going to add your fucking binary files to my fucking repo because you feel like forcing that on me, configuration files is understandable but BINARY, FUCK OFF!

Code changes unrelated. Added script for building/pulling open source code


Friday 2021-02-26 05:03:57 by Andrew Clark

Experiment: Lazily propagate context changes

When a context provider changes, we scan the tree for matching consumers and mark them as dirty so that we know they have pending work. This prevents us from bailing out if, say, an intermediate wrapper is memoized.

Currently, we propagate these changes eagerly, at the provider.

However, in many cases, we would have ended up visiting the consumer nodes anyway, as part of the normal render traversal, because there's no memoized node in between that bails out.

We can save CPU cycles by propagating changes only when we hit a memoized component — so, instead of propagating eagerly at the provider, we propagate lazily if or when something bails out.

Another neat optimization is that if multiple context providers change simultaneously, we don't need to propagate all of them; we can stop propagating as soon as one of them matches a deep consumer. This works even though the providers have consumers in different parts of the tree, because we'll pick up the propagation algorithm again during the next nested bailout.

Most of our bailout logic is centralized in bailoutOnAlreadyFinishedWork, so this ended up being not that difficult to implement correctly.
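
A self-contained sketch of that lazy strategy (Python pseudocode with assumed names; the real logic lives in React's fiber internals):

from dataclasses import dataclass, field

@dataclass
class Fiber:
  name: str
  children: list = field(default_factory=list)
  reads: dict = field(default_factory=dict)  # context -> memoized value
  dirty: bool = False

def propagate_at_bailout(fiber, current_values):
  # Scan the bailed-out subtree for the first consumer whose memoized value
  # is stale, mark it, and stop: the resumed render will reach the next
  # nested bailout and pick the propagation back up from there.
  stack = [fiber]
  while stack:
    f = stack.pop()
    if any(current_values[ctx] is not v for ctx, v in f.reads.items()):
      f.dirty = True
      return f
    stack.extend(f.children)
  return None

leaf = Fiber("consumer", reads={"theme": "light"})
root = Fiber("memoized", children=[Fiber("middle", children=[leaf])])
assert propagate_at_bailout(root, {"theme": "dark"}) is leaf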

There are some exceptions: Suspense and Offscreen. Those are special because they sometimes defer the rendering of their children to a completely separate render cycle. In those cases, we must take extra care to propagate all the context changes, not just the first one.

I'm pleasantly surprised at how little I needed to change in this initial implementation. I was worried I'd have to use the reconciler fork, but I ended up being able to wrap all my changes in a regular feature flag. So, we could run an experiment in parallel to our other ones.

I do consider this a risky rollout overall because of the potential for subtle semantic deviations. However, the model is simple enough that I don't expect us to have trouble fixing regressions if or when they arise during internal dogfooding.


This is largely based on RFC #118, by @gnoff. I did deviate in some of the implementation details, though.

The main one is how I chose to track context changes. Instead of storing a dirty flag on the stack, I added a memoizedValue field to the context dependency object. Then, to check if something has changed, the consumer compares the new context value to the old (memoized) one.

This is necessary because of Suspense and Offscreen — those components defer work from one render into a later one. When the subtree continues rendering, the stack from the previous render is no longer available. But the memoized values on the dependencies list are. (Refer to the previous commit where I implemented this as its own atomic change.) This requires a bit more work when a consumer bails out, but nothing considerable, and there are ways we could optimize it even further. Conceptually, this model is really appealing, since it matches how our other features "reactively" detect changes — useMemo, useEffect, getDerivedStateFromProps, the built-in cache, and so on.
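
A minimal sketch of that change-detection model (assumed names, not React's actual data structures):

class Context:
  def __init__(self, value):
    self.current_value = value

class ContextDependency:
  def __init__(self, context, memoized_value, next_dep=None):
    self.context = context
    self.memoized_value = memoized_value  # value this fiber last rendered with
    self.next_dep = next_dep

def did_context_change(dep):
  # Walk the fiber's dependency list; no provider stack is needed, which is
  # what makes this work for subtrees whose render was deferred.
  while dep is not None:
    if dep.context.current_value is not dep.memoized_value:
      return True
    dep = dep.next_dep
  return False

theme = Context("light")
deps = ContextDependency(theme, "light")
assert not did_context_change(deps)
theme.current_value = "dark"
assert did_context_change(deps)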

I also intentionally dropped support for unstable_calculateChangedBits. We're planning to remove this API anyway before the next major release, in favor of context selectors. It's an unstable feature that we never advertised; I don't think it's seen much adoption.

Co-Authored-By: Josh Story [email protected]


Friday 2021-02-26 05:11:26 by Kyle K

last 5 weeks was and still is hell, cannot get iso booting, some aufs issues...im so tired, i want my life back, why am i doing so much pain to myself, wasted so much time, and for what? could have use this time elsewhere to improve my sad life


Friday 2021-02-26 12:00:30 by Thomas Orgis

libdepengine: avoid building dependees after dependencies failed

This is another hack on the horrid sorcery logic of running many times through the same code to do different things. The problem: Building of a dependency fails, but sorcery still tries to build the spell depending on the failed one, usually resulting in predictable failure, otherwise in a perhaps unintended build with an older installed version of the dependency.

Example was nss failing for me (still have to figure out that weird one) and the firefox cast still being attempted after that. This is stupid behaviour.

I solve that by grepping for the spell in $FAILED_LIST and using its occurrence there to indicate a nonzero return value where there is no value recorded from the depengine_cast_engine. The only justification is that it works.

Caveat: It only works to avoid nonsense in the casting phase. If things fail in the configure phase, we still have the peculiarity that dependees are removed from the cast only if they have been processed before. If you specified

cast a b

where b depends on a and configuration of a fails, sorcery will still attempt to cast b. This is for another night, or a proper rewrite.

Note: There are other places with grep $spell $FAILED_LIST. I suggest checking if that should read "^$spell$" instead there, too.


Friday 2021-02-26 12:58:14 by Karl Jan

This is my first release commit letsgo

Dear myself you can do it fuck you


Friday 2021-02-26 13:30:43 by moth1995

Update on dockerfile for some older libraries

I remember a year ago, when I started my server, trying to use Docker because at that time I loved Docker. I ran into many issues, so I modified the Dockerfile; this is how it ended up. The original one didn't work on first start. I tried this one a few minutes ago and it worked again; maybe it will be useful for other people.

Sorry if I added some unnecessary stuff; at that time I just kept adding things to see if it worked.


Friday 2021-02-26 13:54:05 by Dirantai

Added on to the lasers

well they deal damage. And fuck you real good.


Friday 2021-02-26 14:12:45 by Running Child

Updates made by Running Child

I added #Start and #End tags where I edited or added commands.

I added in bot.py:
1) Load (users in admins_list only)
2) Reload (users in admins_list only)
3) Unload (users in admins_list only)

Added in cogs:

1) Do and Make: these two are kinda fun commands, like py/make me sandwich, which returns a random answer.

2) OSU command combinations (in Misc.py): returns user info according to the given options. Usage: py/osu option:required username:required mode:optional. The option can be best/recent/user; the mode can be standard/taiko/ctb/mania.

3) Needed things:
1) Avatar: I guess it doesn't need an explanation
2) Userinfo: returns user info in an embed
3) Serverinfo: returns server info in an embed
4) Vote: makes a vote embed for 2 options. Usage: py/vote <countdown:optional int> <arg1,arg2>
5) Quote: returns a quote
6) Anilist: returns detailed info about an anime/manga. Usage: py/ <anime/manga (one of them)>. Note about why I didn't use an AniList API token: the token expires after 1 year, but AniList is modern, uses GraphiQL, and requires no token (and of course I couldn't work with MyAnimeList's API xD).

4) Memes:
1) Meme: returns a meme from r/Memes
2) Animeme: returns an animeme from r/Animemes
3) Cursed: returns a cursed-comment pic from r/cursedcursedcomments. r/Cursed required the Reddit API, which would make it slow + creepy, so I guess r/cursedcursedcomments is the best decision for now.

5) Status:
1) Status: changes the status and saves it (for applying the status on restart, add it in the commands.Bot part). Usage: py/status


Friday 2021-02-26 18:30:50 by dkb208

Create index.html

<title>Hank Quinlan, Horrible Cop</title>

Hi there, I'm Hank Quinlan!

I'm best known as the horrible cop from A Touch of Evil. Don't trust me. Read more about my life...


Friday 2021-02-26 18:53:19 by Kenneth Lum

Is the bail out to not render the whole component?

Could you double check with somebody who knows setState() really well? (Dan?) It seems like the bail out is that React will not render the component itself. It is not that it will not render the "children".

For a moment I was thinking that it would somehow render the component but not the children, so what does that mean -- does it mean for:

return (
  <div>
    <span>... </span>
  </div>
);

that React will render the <div> but not the <span>? It seems that React will not render the whole component:

In https://codesandbox.io/s/inspiring-rubin-ukxme?file=/src/App.js

In the Developer's console, the first click would setCount(0), and then setCount(1), setCount(1), so it would set to the same count twice.

So if we inspect the <div className="App" a={Math.random()} b={Date.now()}>, we can see that it is not changed if it is setCount() to the same value as before.

So that means it is bailing out of rendering the whole component, not just the "children". I think the use of the word "children" is a little confusing: when we say component, that's everything the return is returning, and since none of that fragment is rendered, I think the accurate phrasing is "If you update a State Hook to the same value as the current state, React will bail out without rendering the component or firing effects."

However, I noticed that the function is still entered, as the console.log("Coming into the function"); shows, and I wonder if you'd like to add that to the docs. Supposedly, since no render or effects happen, it shouldn't really do anything, except for any possible side effects. If there are no side effects, it would be as if the function had never been invoked.


Friday 2021-02-26 20:22:53 by PavanAla

Update daily problems

PROBLEM STATEMENT (Points: 30). There is a magical shop owned by The Monk, which consists of magical potions. On the first day there are A potions. Let potions[I] denote the number of potions present in the shop on the Ith day.

potions[I] = potions[I-1] * potions[I-1]

You, the Monk's favorite student, love to play around with various types of potions, so you visit the shop regularly, buy all the potions, and just keep them with yourself. However, the Monk allows you to go to the shop only on specific days.

You have to find the number of potions that you will have at the very end. Since, the answer can be very large, output it modulo B.

Input Format: The first line will contain A and B, the number of potions on the first day and the value with which modulo is to be taken. The second line consists of a string consisting of "0" and "1". If the Ith character from left is 1 then you can go to shop on Ith day to buy potions.

Output Format: Output on a single line, the number of potions that you will have at the end. Since, the answer can be very large, output it modulo B.

Constraints:
1 <= A, B <= 10^9
1 <= |length of the string| <= 10^5
The string will only consist of 1 and 0.

SAMPLE INPUT
5 100
101

SAMPLE OUTPUT
30

Explanation: On the first day, the number of potions is 5. The Monk allowed you to go to the magical shop, so you buy all the potions and now have 5. On the second day, the number of potions is 5*5 = 25 (the shop's count restores to 5 after you leave, then squares for the next day); however, you cannot go to the shop on the second day, since the Monk doesn't allow you. On the third day, the number of potions is 25*25 = 625. The Monk allowed you to go, so you buy all the potions and have 5 + 625 = 630. The answer must be given modulo B: 630 mod 100 = 30.
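
A short sketch of one way to compute this (my own illustration, not part of the original problem or commit): keep the shop's count modulo B, square it each day, and add it to your running total on the days marked "1".

def total_potions(A, B, days):
  shop = A % B
  total = 0
  for allowed in days:        # days is the "0"/"1" string, first day first
    if allowed == '1':
      total = (total + shop) % B
    shop = (shop * shop) % B  # potions[I] = potions[I-1] * potions[I-1]
  return total

print(total_potions(5, 100, "101"))  # -> 30, matching the sample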


Friday 2021-02-26 21:07:36 by Thorinwasher

..... Stupid

Who the fuck wrote this??? Just unnecessarily complicated shit. Just makes everything harder to understand


Friday 2021-02-26 21:33:59 by Nicolas Jonas

Oh FUCK your ridiculous SJW bullshit. Holy fuck what kind of pussies you pander to? Offended by the master branch. How pathetic can people be?


Friday 2021-02-26 21:39:21 by Mark Meves

feat(kiss-db:pho): introduce “audit trail” ((1604))

Blood was boiling while on this tangent stack:

➡ We want to get to publishing (don’t we?)
⮑ …so we need “document history”
⮑ …so we have to integrate a whole lot of things like:
⮑ generate generic audit trail for a kiss-rdb eno entity
⮑ combine several of above to create compound audit trail for docu.
⮑ generate an audit trail for an as-is document
⮑ figure out what data model is for gathering statistics
⮑ write just-enough kiss-rdb adapter for sqlite3 for above
⮑ oh yeah this wild-ass “schema from GraphViz” craziness
⮑ gotta cover the kiss-rdb entity audit trail that works
⮑ …so we have to write yet another “record system responses” thing

But the real clincher was that once we spent 4 hours writing the feature and 2 days writing the craziness to cover it after-the-fact, we hit an all-day bug involving finding the “stop” line of an entity that we broke during refactoring.

Random notes:

  • We sanitized out our own name from the git log fixture data “by hand” because dealing with pipes to pipes etc of sed is too much, and writing a custom sanitization script feels like overkill too

(times)
02-21 23:13 begin feature add to old diff parser
17:38 finished writing white paper for kiss-rdb eno per-entity audit trail
22:53 finished hacking together "retrieval"
02-23 00:52 finally begin building audit trail
02-23 10:18 we see the path. getting edit diffs (raw)
15:20 got patching manually by hand and it's good but we need to run and dance
19:36 resume. review how file readers work
02:10 milestone: first rough visual of audit trail going all the way back (on PBR)
02-24 12:21 lol we were hoping to find some bugs in 'audit trail' so we could enjoy covering it, but it turns out it's way more stable than we expected. So now we have the mundane task of covering it without the coverage really test-driving it. But es muss sein ("it must be"); this is what happens when you do 'prototype-driven development'
16:29 we are on a huge tangent stack (just added to the stack). progress is good but we gotta go run
22:01 begin moving system call fixtures to [ma]
02-25 09:07 ok really really. parse the recfile for the installation of the performer to make the fixture to cover the audit trail to make the document history
15:08 ready to pop out of making this fixture and into retrieving this fixture
15:18 holy smokes we are writing a test for audit trail now
15:25 battery about to die
01:25 augh tests are going well but zz
02-26 09:30 (review of today: giant bug)
02-26 16:04 All green, begin writing commit comment


Friday 2021-02-26 21:42:50 by Jacob Bandes-Storch

Worldview: add GLText command (#291)

Summary

Adds a new <GLText /> command which is API-compatible with <Text />. Instead of overlaying DOM nodes on top of Worldview, the text is rendered in GL so it can participate in occlusion. Rendering in GL is also much faster than managing many separate DOM elements; see the GLText docs page for a performance demo.

All the necessary characters are rendered into a texture atlas using a signed distance field representation, then all characters from all children are drawn in a single draw call. More implementation details are explained in GLText.js.
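
As a rough illustration of the atlas bookkeeping (a Python sketch under assumed names; the real implementation, including the signed-distance-field rendering, lives in GLText.js):

def build_atlas_layout(strings, cell=32, cols=16):
  # Each unique character gets one cell in a shared texture grid; every
  # string is then drawn from these shared UV rects in a single batch.
  chars = sorted(set("".join(strings)))
  return {c: ((i % cols) * cell, (i // cols) * cell, cell, cell)
          for i, c in enumerate(chars)}

layout = build_atlas_layout(["hello", "world"])
print(layout["h"])  # (x, y, width, height) rect into the shared atlas texture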

GLText demo

Many thanks to the following resources/implementations/references which were helpful in informing this implementation!

Test plan

Added stories/screenshot tests and docs page demo. Unfortunately the code is not too easily amenable to unit testing (other than perhaps the font atlas memoization).

Versioning impact

This PR is purely additive, so doesn't necessitate a major version bump. The GLText command might eventually replace Text, but we'd like to get some mileage on it first — we'll be testing it in Webviz, and would love for any external users to give it a try as well. A few areas for improvement would probably need to be addressed before deprecating the DOM-based Text command: solid rectangular background instead of outline (or at least improved visibility at small sizes); highlight ranges to replace browser-native ⌘F; hitmap support.


Friday 2021-02-26 23:44:02 by Thesoooped

Revert "Fuck you Hero Manager"

This reverts commit 369eda50f6a46fc74534b9af712f1ff0944806f6.


Friday 2021-02-26 23:47:25 by Thesoooped

Revert "Revert "Fuck you Hero Manager""

This reverts commit bf1c1072a922c1446b445ee0f0390f6ab214c319.


< 2021-02-26 >