< 2020-04-01 >

2,620,009 events, 1,313,331 push events, 2,083,321 commit messages, 161,981,305 characters

Wednesday 2020-04-01 00:08:18 by Jason Grout

Allow notebook format version 4.1 mimebundle keys ending in +json to have arbitrary JSON

There are a number of people posting issues with nbformat 5 being stricter about validating notebook format 4.1, including:

jupyter/nbformat#160 jupyter/nbformat#161 jupyter-widgets/ipywidgets#2553 jupyter-widgets/ipywidgets#2788

Essentially, nbformat package version 4.x allowed noncompliant format 4.1 notebooks to be verified as valid, leading to many notebooks in the wild carrying major/minor format version 4.1 but containing widgets and other JSON outputs that were technically invalid.

Upgrading to nbformat package 5.x correctly flagged these notebooks as noncompliant. Technically, this is correct. Practically, however, it means that all these notebook files tagged as format 4.1 that were working fine suddenly won't even open after upgrading to nbformat version 5. This is a pain.

This retroactively upgrades the format 4.1 schema to allow JSON in these cases, since in practice there are lots of notebooks labeled as format 4.1, I think by official Jupyter software, that have JSON in the mimebundle output. Essentially, this acknowledges that in the official implementations from Jupyter, notebook format 4.1 has indeed had arbitrary JSON values in mimebundles, and we cannot in good conscience decree it invalid.
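
As a quick way to see the failure mode being addressed, here is a minimal sketch (not part of the patch; the file name is hypothetical) of validating such a notebook with the nbformat Python package:

    import nbformat
    from nbformat import ValidationError

    # A notebook tagged as format 4.1 that carries arbitrary JSON in a
    # mimebundle output (e.g. a widget-state mimetype ending in +json).
    nb = nbformat.read("widgets-example.ipynb", as_version=4)

    try:
        nbformat.validate(nb)
        print("notebook validates")
    except ValidationError as err:
        # Under nbformat 5.x before this schema change, such notebooks land
        # here even though nbformat 4.x accepted them.
        print("invalid notebook:", err)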


Wednesday 2020-04-01 00:56:30 by TheGrim159

I fucking hate my life

Did some fucking bullshit and somehow found my code for this shit. But you are fucking making the candy prefab: you are going to drag the sprite into Unity and make the prefab, you are going to add the projectile code to the prefab, and then add that prefab to the projectile slot in the movement code so that there are no issues with Unity.


Wednesday 2020-04-01 00:56:30 by schoi369

Merge pull request #16 from TheGrim159/me

I fucking hate my life


Wednesday 2020-04-01 00:58:06 by Tom Scogland

llvm: libomptarget support (#14060)

This allows the llvm build to support:

  • clang cuda
  • libomptarget for:
    • current host
    • cuda
  • bitcode compilation of libomptarget device runtime for inlining by bootstrapping libomptarget
  • split DWARF information support as an option for debug builds; if you need a debug build, for the love of all that's good in the universe, use this flag
  • adds the necessary dependencies for shared-library builds and for libomp and libomptarget to build correctly
  • a new version of z3, recent enough to build recent llvm

The actual change is much smaller than the diff; this is because it's been formatted with black. I realize this kinda sucks right now, but I'm hoping it will make future updates here less painful.
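
To make the shape of the change concrete, here is an illustrative sketch of how such options are declared in a Spack package.py. The variant names and version bound below are my assumptions for illustration, not necessarily the ones this PR adds:

    from spack import *

    class Llvm(Package):
        # Hypothetical excerpt; variant names are illustrative only.
        variant("cuda", default=False, description="Build clang cuda support")
        variant("omp_tgt_bcode", default=False,
                description="Bitcode libomptarget device runtime for inlining")
        variant("split_dwarf", default=False,
                description="Emit split dwarf debug info in debug builds")

        depends_on("cuda", when="+cuda")
        depends_on("z3@4.7:")  # a z3 recent enough for recent llvm (bound assumed)

A user would then request a build along the lines of `spack install llvm +cuda +omp_tgt_bcode`.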


Wednesday 2020-04-01 03:33:24 by Christopher Haster

Fixed more bugs, mostly related to ENOSPC on different geometries

Fixes:

  • Fixed reproducibility issue when we can't read a directory revision
  • Fixed incorrect erase assumption if lfs_dir_fetch exceeds block size
  • Fixed cleanup issue caused by lfs_fs_relocate failing when trying to outline a file in lfs_file_sync
  • Fixed cleanup issue if we run out of space while extending a CTZ skip-list
  • Fixed missing half-orphans when allocating blocks during lfs_fs_deorphan

Also:

  • Added cycle-detection to readtree.py
  • Allowed pseudo-C expressions in test conditions (and it's beautifully hacky, see line 187 of test.py; a toy sketch of the idea follows this list)
  • Better handling of ctrl-C during test runs
  • Added build-only mode to test.py
  • Limited stdout of test failures to 5 lines unless in verbose mode
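
As a toy sketch of that pseudo-C trick (purely illustrative; test.py's real implementation differs), one hacky way to evaluate C-style conditions against a table of defines is to rewrite the operators and eval the result:

    import re

    def check(cond, defines):
        """Evaluate a pseudo-C condition like 'A == 512 && !B' in Python."""
        py = cond.replace("&&", " and ").replace("||", " or ")
        py = re.sub(r"!(?!=)", " not ", py)  # logical not, but leave '!=' alone
        return eval(py, {}, dict(defines))   # hacky, in the spirit of the original

    assert check("BLOCK_SIZE == 512 && !BIG", {"BLOCK_SIZE": 512, "BIG": 0})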

Explanation of fixes below

  1. Fixed reproducibility issue when we can't read a directory revision

    An interesting subtlety of the block-device layer is that the block-device is allowed to return LFS_ERR_CORRUPT on reads to untouched blocks. This can easily happen if a user is using ECC or some sort of CMAC on their blocks. Normally we never run into this, except for the optimization around directory revisions where we use uninitialized data to start our revision count.

    We correctly handle this case by ignoring what's on disk if the read fails, but end up using uninitialized RAM instead. This is not an issue for normal use, though it can lead to a small information leak. However, it creates a big problem for reproducibility, which is very helpful for debugging.

    I ended up running into a case where the RAM values for the revision count were different, causing two identical runs to wear-level at different times, leading to one version running out of space before the bug occurred, because it had expanded the superblock early.
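
    A conceptual Python sketch of the hazard (littlefs itself is C, and all names here are made up):

        import random

        def fetch_revision(read_block, deterministic=True):
            """Return the stored revision count, with a fallback when the
            block device reports an untouched block as corrupt."""
            try:
                return read_block()
            except IOError:  # e.g. LFS_ERR_CORRUPT on an untouched block
                if deterministic:
                    return 0  # fixed fallback: identical runs stay identical
                # Stand-in for using uninitialized RAM: two otherwise identical
                # runs now wear-level at different times and stop reproducing.
                return random.getrandbits(32)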

  2. Fixed incorrect erase assumption if lfs_dir_fetch exceeds block size

    This could be caused if the previous tag was a valid commit and we lost power causing a partially written tag as the start of a new commit.

    Fortunately we already have a separate condition for exceeding the block size, so we can force that case to always treat the mdir as unerased.

  3. Fixed cleanup issue caused by lfs_fs_relocate failing when trying to outline a file in lfs_file_sync

    Most operations involving metadata-pairs treat the mdir struct as entirely temporary and throw it out if any error occurs. The exception is lfs_file_sync, since the mdir is also a part of the file struct.

    This is relevant because of a cleanup issue in lfs_dir_compact that usually doesn't have side-effects. The issue is that lfs_fs_relocate can fail. It needs to allocate new blocks to relocate to, and as the disk reaches its end of life, it can fail with ENOSPC quite often.

    If lfs_fs_relocate fails, the containing lfs_dir_compact would return immediately without restoring the previous state of the mdir. If a new commit comes in on the same mdir, the old state left there could corrupt the filesystem.

    It's interesting to note this is forced to happen in lfs_file_sync, since it always tries to outline the file if it gets ENOSPC (ENOSPC can mean both that there are no blocks to allocate and that the mdir is full). I'm not actually sure this bit of code is necessary anymore; we may be able to remove it.
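
    The fix follows a general snapshot-and-restore pattern; a minimal Python sketch of the idea (names hypothetical, not littlefs code):

        import copy

        def compact(mdir, relocate):
            backup = copy.copy(mdir)  # snapshot before the fallible operation
            try:
                relocate(mdir)        # may fail with ENOSPC near end-of-life
            except OSError:
                # Restore the previous state so a later commit on the same
                # mdir can't see half-updated data.
                mdir.__dict__.update(backup.__dict__)
                raise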

  4. Fixed cleanup issue if we run out of space while extending a CTZ skip-list

    The actual CTZ skip-list logic itself hasn't been touched in more than a year at this point, so I was surprised to find a bug here. But it turns out the CTZ skip-list could be put in an invalid state if we run out of space while trying to extend the skip-list.

    This only becomes a problem if we keep the file open, clean up some space elsewhere, and then continue to write to the open file. Fortunately, it's an easy fix.

  5. Fixed missing half-orphans when allocating blocks during lfs_fs_deorphan

    This was a really interesting bug. Normally, we don't have to worry about allocations, since we force consistency before we are allowed to allocate blocks. But what about the deorphan operation itself? Don't we need to allocate blocks if we relocate while deorphaning?

    It turns out the deorphan operation can lead to allocating blocks while there's still orphans and half-orphans on the threaded linked-list. Orphans aren't an issue, but half-orphans may contain references to blocks in the outdated half, which doesn't get scanned during the normal allocation pass.

    Fortunately we already fetch directory entries to check CTZ lists, so we can also check half-orphans here. However, this causes lfs_fs_traverse to duplicate all metadata-pairs; I'm not sure what to do about this yet.


Wednesday 2020-04-01 04:19:57 by Irradiation

Renamed every instance of "plastique" (c4 explosives) to "c4" (#25924)

  • Renamed every instance of "plastique" (c4 explosives) to "c4"

This is in the name of every admin out here and anybody doing testing. Fuck you old c*ders.

  • fuck you plosky and old test map nobody uses

  • PLOOOSKKYYYYY


Wednesday 2020-04-01 05:43:28 by verawilde

take out test footnote for now

didn't work and site is live!

was under "know the facts, mitigation" -

{{ site.footnote.start }}You may have heard the phrase "flatten the curve" in this context. This refers to the idea that after containment of a new disease-causing pathogen has failed and a global pandemic is on, like now, the best we can do is to mitigate: trying to flatten the curve of new cases requiring medical care so that it stays within the capacity of the medical system. This should decrease deaths and buy us time to better prepare by making more respirators/masks and other personal protective equipment, ventilators and retrained medical professionals, and maybe even treatments or vaccines to address the disease. Some critics argue that what we actually want from mitigation measures, however, is really still to stop the spread (squash the curve, if you will) and to get back to containment, because when they put feasible numbers on the curves, they find that we would have to "flatten the curve" (spread the pandemic out) for more than a decade in order to stay within the capacity of the medical system. However, it could be argued that once containment has failed, we can't get it back as a feasible strategy, and then using mitigation measures for mitigation's sake makes perfect sense. Anyway, due to this controversy we just avoid using the phrase "flatten the curve" (which may not work for its intended purpose of keeping the pandemic within medical systems' capacities) in favor of "mitigation" and "slowing the spread" (which can still work even if medical systems get overwhelmed).{{ site.footnote.end }}
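
To see where a "more than a decade" figure can come from, here is a toy back-of-envelope calculation; every number below is invented purely for illustration:

    # Toy numbers, purely illustrative -- not a model of any real country.
    population = 330_000_000      # assumed population
    attack_rate = 0.60            # assumed share infected before herd immunity
    hospitalization_rate = 0.05   # assumed share of cases needing a bed
    beds_for_covid = 50_000       # assumed beds that can be spared at once
    avg_stay_days = 12            # assumed length of a hospital stay

    cases_needing_beds = population * attack_rate * hospitalization_rate
    admissions_per_day = beds_for_covid / avg_stay_days
    years = cases_needing_beds / admissions_per_day / 365
    print(f"~{years:.1f} years to stay under capacity")  # ~6.5 years here

Tighter capacity or a higher hospitalization share pushes this well past a decade, which is the critics' point.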


Wednesday 2020-04-01 08:03:53 by Simon Peyton Jones

Re-engineer the binder-swap transformation

The binder-swap transformation is implemented by the occurrence analyser -- see Note [Binder swap] in OccurAnal. However it had a very nasty corner in it, for the case where the case scrutinee was a GlobalId. This led to trouble and hacks, and ultimately to #16296.

This patch re-engineers how the occurrence analyser implements the binder-swap, by actually carrying out a substitution rather than by adding a let-binding. It's all described in Note [The binder-swap substitution].
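
To illustrate the idea (a toy sketch only, nothing like GHC's actual Core types): in `case x of b { alts }`, occurrences of the scrutinee variable x inside the alternatives are replaced by the case binder b, performed here as a substitution over a tiny tuple-encoded AST:

    def subst(expr, old, new):
        """Replace ("var", old) with ("var", new) throughout a nested-tuple AST."""
        if expr == ("var", old):
            return ("var", new)
        if isinstance(expr, tuple):
            return tuple(subst(e, old, new) for e in expr)
        return expr

    def binder_swap(case_expr):
        tag, scrut, binder, alts = case_expr
        assert tag == "case"
        if scrut[0] == "var":  # only swap when the scrutinee is a variable
            alts = tuple(subst(alt, scrut[1], binder) for alt in alts)
        return (tag, scrut, binder, alts)

    # case x of b { _ -> f x }  ==>  case x of b { _ -> f b }
    before = ("case", ("var", "x"), "b", (("app", ("var", "f"), ("var", "x")),))
    print(binder_swap(before))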

I did a few other things along the way

  • Fix a bug in StgCse, which could allow a loop breaker to be CSE'd away. See Note [Care with loop breakers] in StgCse. I think it can only show up if the occurrence analyser sets up bad loop breakers, but still.

  • Better commenting in SimplUtils.prepareAlts

  • A little refactoring in CoreUnfold; nothing significant, e.g. renaming CoreUnfold.mkTopUnfolding to mkFinalUnfolding

  • Renamed CoreSyn.isFragileUnfolding to hasCoreUnfolding

  • Move mkRuleInfo to CoreFVs

We observed respectively 4.6% and 5.9% allocation decreases for the following tests:

Metric Decrease: T9961 haddock.base


Wednesday 2020-04-01 08:26:36 by Andrei Mataoanu

Added some Animations, Jetpack can charge in air now

Added Enemy Health Bar, but it's not right yet

Added Better AI, Added Player Health + UI, Added Enemy Health Bar

The Enemy shoots at players, but tracks the target poorly.

And they do no damage to you because screw colliders. Enemies now deal damage to your ass

The Enemy tracks the player much better

Added muzzle flash to weapons

Mini fix

Rebase fix

Some Bug Fixing and Rebase Conflict Fixing

Enemy HealthBar UI is refactored, Added Folders

Forgot about the Knight Scene but I discovered a bug yay

Each Enemy can shoot you now

Enemy health bar works and updates properly

Oops forgot to update something. We gucci now.


Wednesday 2020-04-01 10:06:40 by Marko Grdinić

"10:05am. I am up.

10:10am. Let me get started on this immediately. I want to study FRP. I am finally near the interesting parts. After I am done with this book, I'll go back to the old one and finish that thing. Then comes the Akka book. After that comes the ReactiveUI.

After that comes real programming. I want to acquire this power and go to the next phase of my programming journey.

Just what would it be like to be able to abstract things like UIs? I want to find out.

10:35am. 235/337. Yesterday I was too tired for this, but now it is coming to me easily.

11am. 241/337. I am confused. Just how could Window exist given the interface constraints?

How can the scan that comes after it be reset?

11:05am. I do not get this. Ah, forget it. Let me just move on. I am sure I will figure it out eventually when I look at the source.

11:15am. 245/337. Done with chapter 9. Let me study the code examples for a bit. Then comes 10. Only 60 pages left and then comes the appendix. I am close to the end of the book. I should be done with it today.

var windows = numbers.Window(TimeSpan.FromMilliseconds(200));

Ah, it creates an observable of observables. That explains it all.

        private static void ControllingTheWindowClosure()
        {
            Subject<int> numbers = new Subject<int>();
            Subject<Unit> mouseClicks = new Subject<Unit>();
            // each click on mouseClicks closes the current window and opens a new one
            var windows = numbers.Window(() => mouseClicks);
        }

This is a good example. I need to keep in mind that events can control the window closures.

        public static IObservable<T> DoLast<T>(this IObservable<T> observable, Action lastAction, TimeSpan? delay = null)
        {
            Action delayedLastAction = async () =>
            {
                if (delay.HasValue)
                {
                    await Task.Delay(delay.Value);
                }
                lastAction();
            };
            return observable.Do(
                (_) => { }, // empty OnNext
                _ => delayedLastAction(),
                delayedLastAction);
        }

        public static void RunExample<T>(this IObservable<T> observable, string exampleName = "")
        {
            var exampleResetEvent = new AutoResetEvent(false);

            observable
                 .DoLast(() => exampleResetEvent.Set(), TimeSpan.FromSeconds(3))
                 .SubscribeConsole(exampleName);

            exampleResetEvent.WaitOne();

        }

I am looking at this and wondering whether this could have been written as...

            observable
                .Delay(TimeSpan.FromSeconds(3))
                .Finally(async () => exampleResetEvent.Set())

This. Let me run it.

No, it is not the same thing. For some reason when I do it like that, both windows get created at the same time.

        public static void RunExample<T>(this IObservable<T> observable, string exampleName = "")
        {
            var exampleResetEvent = new AutoResetEvent(false);

            observable
                 //.Delay(TimeSpan.FromSeconds(3))
                 //.Finally(async () => exampleResetEvent.Set())
                 //.DoLast(() => exampleResetEvent.Set(), TimeSpan.FromSeconds(3))
                 .SubscribeConsole(exampleName);

            //exampleResetEvent.WaitOne();

        }

Ah, this damn thing is just to shift the 'press any key' message. How convoluted. It would have been easier to emit the done message.

And it is not any key, but a line, meaning the enter key needs to be pressed.

11:55am. Done with the examples. Now comes chapter 10. Let me just read a few pages and then I'll stop for breakfast.

12:05pm. 247/337. Let me stop here."


Wednesday 2020-04-01 11:03:56 by Sandipan Ray

Systems biology approaches for understanding SARS-CoV-2 pathogenesis

Systems biology provides a cross-disciplinary analytical platform integrating the different omics (genomics, transcriptomics, proteomics, metabolomics, and other omics approaches), bioinformatics, and computational strategies. These cutting-edge research approaches have enormous potential for studying the complexity of biological systems and human diseases (Hood and Tian, 2012, PMID: 23084773). Over the last decade, systems biology approaches have been used extensively to study the pathogenesis of diverse types of life-threatening acute and chronic infectious diseases (Eckhardt et al., 2020, PMID: 32060427). Omics-based studies have also provided meaningful information regarding host immune responses and surrogate protein markers in several viral, bacterial, and protozoan infections (Ray et al., 2014, PMID: 24293340). The complex pathogenesis and clinical manifestations of SARS-CoV-2 infection are not yet adequately understood.

A prominent breakthrough in 2019-nCoV research was accomplished with the successful full-length genome sequencing of the pathogen (Wu et al., Nature 2020, PMID: 32015508; Lu et al., Lancet 2020, PMID: 32007145; Zhou et al., 2020, PMID: 32015507). Multiple research groups drafted the genome sequence of SARS-CoV-2 from different clinical samples, including bronchoalveolar lavage fluid, blood, oral and anal swabs, and cultured isolates. Importantly, SARS-CoV-2 has significant sequence homology with SARS-CoV (about 79%) and, to some extent, with MERS-CoV (about 50%) (Lu et al., Lancet 2020, PMID: 32007145). However, a higher level of similarity (about 90%) has been observed between SARS-CoV-2 and bat-derived SARS-like coronaviruses (bat-SL-CoVZC45 and bat-SL-CoVZXC21), indicating its possible bat origin (Lu et al., Lancet 2020, PMID: 32007145; Zhou et al., 2020, PMID: 32015507). The genome sequence of the pathogen subsequently allowed its phylogenetic characterization and the prediction of its protein expression profile, which is crucial for understanding the pathogenesis and virulence of this novel viral infection.

In a recent computational bioinformatics study, Grifoni et al. predicted potential B and T cell epitopes for SARS-CoV-2, providing some clues as to which parts of the pathogen are recognized by human immune responses (Grifoni et al., 2020, PMID: 32183941). Importantly, overactivation of T cells, caused by an increase of Th17 and high cytotoxicity of CD8 T cells, could be one of the major reasons behind the severe immune injury and fatality associated with SARS-CoV-2 infection (Xu et al., 2020, PMID: 32085846). Availability of the genome sequence of SARS-CoV-2 has enhanced the possibilities for subsequent proteome-level studies that can provide further mechanistic insights into its complex pathogenesis. Of note, the cryo-electron microscopy structure of the SARS-CoV-2 spike (S) glycoprotein, which plays an important role in the early steps of viral infection, was reported very recently (Wrapp et al., 2020, PMID: 32075877). Even though no comprehensive proteomic analysis of the pathogen or of patients suffering from its infection has been reported yet, one forthcoming study has demonstrated SARS-CoV-2-infected host cell proteomics using human Caco-2 cells as an infection model (Bojkova et al., 2020, doi:10.21203/rs.3.rs-17218/v1). The authors observed SARS-CoV-2-induced alterations in multiple vital physiological pathways, including translation, splicing, carbon metabolism, and nucleic acid metabolism in the host cells.

There is a high level of sequence homology between SARS-CoV-2 and SARS-CoV, and sera from convalescent SARS-CoV patients can effectively cross-neutralize SARS-CoV-2-S-driven entry (Hoffmann et al., 2020, PMID: 32142651). Consequently, earlier proteome-level studies on SARS-CoV can also provide some essential information regarding the new pathogen (Chen et al., 2004, PMID: 15572443; He et al., 2004, PMID: 15020242). Considering the paucity of omics-level big data sets for SARS-CoV-2 so far, existing data hubs that contain adequate information for the other coronaviruses, such as UniProt, the NCBI Genome Database, the Immune Epitope Database and Analysis Resource (IEDB), and the Virus Pathogen Resource (ViPR), could be useful resources for computational and bioinformatics research on SARS-CoV-2.


Wednesday 2020-04-01 12:03:35 by topjohnwu

Introduce new boot flow to handle SAR 2SI

The existing method for handling legacy SAR is:

  1. Mount /sbin tmpfs overlay
  2. Dump all patched/new files into /sbin
  3. Magic mount root dir and re-exec patched stock init

With Android 11 removing the /sbin folder, it is quite obvious that things completely break down right in step 1.

To overcome this issue, we have to find a way to swap out the init binary AFTER we re-exec stock init. This is where 2SI comes to rescue!

2SI normal boot procedure is: 1st stage -> Load sepolicy -> 2nd stage -> boot continue...

2SI Magisk boot procedure is: MagiskInit 1st stage -> Stock 1st stage -> MagiskInit 2nd stage -> Stock init load sepolicy -> Stock 2nd stage -> boot continue...

As you can see, the trick is to make stock 1st stage init re-exec back into MagiskInit so we can do our setup. This is possible by manipulating some ramdisk files on initramfs-based 2SI devices (old-ass non-SAR devices AND super modern devices like Pixel 3/4), but not possible on devices that are stuck using legacy SAR (devices that are not that modern but not too old, like Pixel 1/2. Fucking Google logic!!)

This commit introduces a new way to intercept stock init re-exec flow: ptrace init with forked tracer, monitor PTRACE_EVENT_EXEC, then swap out the init file with bind mounts right before execv returns!

Going through this flow however will lose some necessary backup files, so some bookkeeping has to be done by making the tracer hold these files in memory and act as a daemon. 2nd stage MagiskInit will ack the daemon to release these files at the correct time.

It just works™ ¯\_(ツ)_/¯


Wednesday 2020-04-01 12:17:37 by Arian van Putten

Reproducibility (#1027)

  • Add stack.yaml.lock and commit cabal files

By committing stack.yaml.lock, our build plan is fully reproducible. We've had several cases where people could not reproduce a build on develop, either due to cabal revisions or due to people accidentally touching snapshot files. stack.yaml.lock is a fixed build plan with hashes derived from your snapshots and stack.yaml.

See https://github.com/commercialhaskell/stack/blob/master/doc/lock_files.md for more info

We also commit the cabal files. Not committing them also reduces reproducibility, and the Stack people now admit this as well: https://tech.fpcomplete.com/blog/storing-generated-cabal-files. In the next stack release, generating them will be the default.

  • Create a new snapshot that pins correct version of hsaml2

Also pin correct version of saml2-web-sso

I took the liberty of moving away from our system of having snapshots in ./snapshots/*.yaml and having new snapshots refer to previous ones. This was a workaround for getting better caching in Stack 1.0, which is simply not needed anymore. We have a new single wire-snapshot.yaml file, which you are always free to modify. Because Stack 2.0 uses content-addressed storage, making changes to this file will still cause most packages to be re-used.

With Stack 2.0 and the introduction of pantry, there is no semantic difference anymore between extra-deps in your stack.yaml and custom snapshots. We could thus fold everything into a single stack.yaml and keep the same benefits.

However, the custom snapshot format gives you a bit more control (you can, for example, also remove packages from a snapshot, which is not possible in stack.yaml).

So we can just have one single wire-snapshot.yaml where we define external deps (and fold all the changes of all our snapshots into that single file). We do not have to make distinctions anymore between "Things that change often go into stack.yaml and things that change seldom go into snapshot.yaml" as under the hood they're all the same build and caching mechanism now.

We also do not have to worry about not editing this file anymore. It doesn't have to be treated as immutable as before, so there is no need to create a blah-2.0.yaml referring to blah-1.0.yaml to work around the fact that stack doesn't refetch snapshots after they are edited. From the Stack documentation:

They are assumed to be mutable, so you are free to modify it. We detect that the snapshot has changed by hashing the contents of the involved files, and using it to identify the snapshot internally. It is often reasonably efficient to modify a custom snapshot, due to stack sharing snapshot packages whenever possible.

We keep the ./snapshots folder for now, as our snapshots do not refer to themselves by relative path but by absolute URL, which mentions the branch name and github repository. This is a bit of a shame, as it means we can never remove them without breaking older builds :(

I do not know how much we care that old builds keep working, but having them work by promising that some URL to a mutable artifact stays around was never very good for reproducibility, so maybe we should not care. However, let's break this in a different PR after heavy discussion :)

  • Move away from snapshots completely

We fold everything into stack.yaml

I really do not see any difference anymore, and with pantry this change should be completely safe to make. We can now freely change stack.yaml and get the same caching guarantees as before. (That is, if we have a cache from a previous build that shares 95% of the dependencies with the new stack.yaml, then CI will just gladly pick it up due to how Pantry works.)

Again, as noted before, I do not yet know what to do with the old 'snapshots/' folder. We need to keep it around to keep old builds working, as they refer to it not by commit id but by the 'develop' branch, which is a moving target.

  • Ignore bounds for some dependencies

We were already doing this implicitly because we used a custom snapshot before. Now we have to do it explicitly.

For the record, I attached the bounds issues to this commit. Note that at least multihash is maintained by ourselves; we should thus be able to solve its bounds issues.

Error: While constructing the build plan, the following exceptions were encountered:

In the dependencies for HaskellNet-0.5.1: mime-mail-0.5.0 from stack configuration does not match >=0.4.7 && <0.5 (latest matching version is 0.4.14) needed due to brig-1.35.0 -> HaskellNet-0.5.1

In the dependencies for HaskellNet-SSL-0.3.4.1: connection-0.3.1 from stack configuration does not match >=0.2.7 && <0.3 (latest matching version is 0.2.8) needed due to brig-1.35.0 -> HaskellNet-SSL-0.3.4.1

In the dependencies for bloodhound-0.16.0.0: containers-0.6.0.1 from stack configuration does not match >=0.5.0.0 && <0.6 (latest matching version is 0.5.11.0) http-client-0.6.4 from stack configuration does not match >=0.4.30 && <0.6 (latest matching version is 0.5.14) needed due to brig-1.35.0 -> bloodhound-0.16.0.0

In the dependencies for multihash-0.1.6: base-4.12.0.0 from stack configuration does not match >=4.7 && <4.12 (latest matching version is 4.11.1.0) needed due to brig-1.35.0 -> multihash-0.1.6

In the dependencies for stm-hamt-1.2.0.4: primitive-0.6.4.0 from stack configuration does not match >=0.7 && <0.8 (latest matching version is 0.7.0.1) primitive-extras-0.7.1.1 from stack configuration does not match >=0.8 && <0.9 (latest matching version is 0.8) needed due to spar-0.1 -> stm-hamt-1.2.0.4


Wednesday 2020-04-01 16:48:46 by Neil Roza

do apt-key the stupid way

The latest versions of apt-key, gpg, and/or gpg2 helpfully and strenuously insist upon socket communication with gpg-agent.

With the introduction of GPG 2, the presence of a running gpg-agent is now required. Most (if not all) operations normally performed by the gpg program will fail if the client (the program you thought you had) cannot communicate with the server (the agent GNU foisted upon you). Because of course you wanted this.

This is a joy if, say, you like to play inside environments that lack a tty and/or permissions to create Unix sockets. The docker container is one such environment. But cheer up: it's only a problem if you want to do something silly like apt-key add a pubkey.

Any troglodyte without a gpg-agent should just gpg --dearmor their pubkeys directly into /etc/apt/trusted.gpg.d/ like the uncultured swine they are.


Wednesday 2020-04-01 17:31:09 by Revo

Fix the source of pain and suffering for the last 2 years.

Why wasn't this fixed yet. This is dumb. This is really dumb. This is fuckin' stupid.


Wednesday 2020-04-01 18:15:26 by Tim Otten

(POC) Add hook_civicrm_postCommit, a variant of hook_civicrm_post

Overview

So here's a pattern that's occurred a few times: one wants to provide an extra notification or log or correlated record after something is changed or created. So you implement hook_civicrm_post. It sounds simple, but it doesn't work quite as expected, because running within the transaction can have some special implications:

  1. Performing subsequent SQL queries within the same transaction can be wonky.
  2. Errors in your notification bubble up and break the transaction.

You can resolve this by deferring your work until the transaction completes. The technique has been discussed in various media over the years; e.g. here's a mention in the dev docs:

https://docs.civicrm.org/dev/en/latest/hooks/hook_civicrm_post/#example-with-transaction-callback

However, I think the problem may be that the default is basically backwards: there are more use-cases for the 'post' hook in which you'd prefer to run after the transaction commits, but that case requires the special incantation.

This patch is a proof-of-concept in which the system provides two hooks:

  • hook_civicrm_post: Runs immediately after the change is sent to the DB. If there's a SQL transaction, then it runs within the transaction.
  • hook_civicrm_postCommit: Runs after the change is committed to the DB. Runs outside of any SQL transactions.

My theory is that more developers would run their logic at the correct time if they had postCommit available as a hook (and, of course, the downstream code would look tidier). This isn't really a pain-point for me, so I'm not super-motivated to push it, and I haven't looked very hard for systemic side-effects of buffering more posts. However, I think it could provide better DX (making it easier for more folks to get the right timing), so I wanted to share the POC.
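
The buffering this relies on can be sketched generically. A conceptual Python sketch (not CiviCRM's API) of deferring post-commit callbacks until a transaction actually commits:

    class Transaction:
        """Toy transaction that buffers callbacks until commit."""
        def __init__(self):
            self._post_commit = []

        def add_post_commit(self, fn):
            self._post_commit.append(fn)

        def __enter__(self):
            return self

        def __exit__(self, exc_type, exc, tb):
            if exc_type is None:
                # Commit path: buffered "postCommit" work runs only now,
                # outside the transactional section.
                for fn in self._post_commit:
                    fn()
            # Rollback path: buffered callbacks are simply dropped.
            return False

    with Transaction() as tx:
        # ... INSERT/UPDATE happens here; hook_civicrm_post fires inside ...
        tx.add_post_commit(lambda: print("postCommit work runs after commit"))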

Before

/**
 * Hook fired after the INSERT/UPDATE is sent to the DB
 */
function hook_civicrm_post($op, $objectName, $objectId, &$objectRef);

After

/**
 * Hook fired after the INSERT/UPDATE is sent to the DB
 */
function hook_civicrm_post($op, $objectName, $objectId, &$objectRef);

/**
 * Hook fired after the record is committed to the DB.
 * This may be immediate (for non-transactional work) or it may be
 * delayed a few milliseconds (for transactional work).
 */
function hook_civicrm_postCommit($op, $objectName, $objectId, $objectRef);

Wednesday 2020-04-01 19:10:04 by Steve Kelly

Add files via upload

In this lesson, Mike Albert taught me, Steve Kelly, about recursion, using string reversal as an example. This lesson used Python as the language of choice. It lasted over 2 hours, with plenty of comments, questions, and answers to back up the understanding. Without Mike's help, I would have been confused about why this worked. Mike explained this concept to me thoroughly and in a well-versed manner. Mike is a great teacher who helped solidify my understanding of not just recursion, but also iteration, reversing a string, and other concepts I have forgotten over the years. I had never encountered recursion before, not even during my brief stint as a Computer Science major at the University of New Hampshire. Mike Albert is a graduate of the Rensselaer Polytechnic Institute with a dual B.S. in Computer Engineering and Computer Science. I am proud to call Mike Albert my friend and brother.
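
A minimal Python sketch of the exercise described (my reconstruction, not Mike's exact code):

    def reverse(s: str) -> str:
        """Reverse a string recursively: reverse the tail, then append the head."""
        if len(s) <= 1:               # base case: nothing left to swap
            return s
        return reverse(s[1:]) + s[0]  # recursive case

    assert reverse("recursion") == "noisrucer"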


Wednesday 2020-04-01 22:29:13 by Barret Rhoden

iommu: rewrite device assignment

This was a mess. The data structures and connections between processes, pci_devices, and iommus were all a mess. The pci device assignment wasn't integrated with the iommu, etc.

The main thing is that processes now have lists of their iommus, which will make things less painful when we do an IOTLB flush. It's still painful, but not as bad.

All of the assignment stuff was changed, so now you call the pci functions, e.g. pci_assign_device(), which will internally deal with the iommu.

When processes are destroyed, we tear down any assigned device. In my current code, all in-kernel device users hold the pdev qlock. For example, when #cbdma tries to use the IOAT driver, it qlocks the device. I think we'd be OK if the device was unassigned out from under the driver, in that the IOMMU would protect us from any wild writes, but the kernel driver would likely get wedged.

I have a bunch of changes to #cbdma, but the easiest thing right now is to briefly just not build it.

Signed-off-by: Barret Rhoden [email protected]


Wednesday 2020-04-01 22:38:39 by Hamraj Rai

[Update] Shit Ton Of Level Data Fixes and Other Dumb Shit

yuh yuh yuh fuck we me you know i got it


Wednesday 2020-04-01 23:53:33 by zestyhamburger

Revert "FUCK YOU VSCODE but i still love you"

This reverts commit 8ab7054f5edfac28be245326e492fcaaa6b2c827.

