< 2021-09-23 >

3,138,824 events, 1,606,149 push events, 2,528,690 commit messages, 197,196,367 characters

Thursday 2021-09-23 03:16:30 by Elbong Gearny

Changes to npm scripts: npm start + npm run run-once

[Irrelevant sidenote: I wish that npm had a default behavior of assuming you meant run x if you feed it a command that isn't part of its built-in library. In other words, I should be able to enter npm run-once and have it, upon seeing that such a script exists, trigger that script. 😤]

Highly Relevant Notes

Requirements

As I'll elaborate on in a moment, this is a highly idiosyncratic addition, largely intended to increase my current productivity, with some thought (but not much!) put into portability. I think this setup should work anywhere the requirements below are met.

Fish Shell

You don't have to switch to fish as your primary shell, but it's harmless to install; it's really just another incredible free tool you can shove into the Mary Poppins bag that is your dev system, just in case. Who knows, maybe you'll give it a go some time: having installed it for notion2svelte, you'll type fish into your terminal, find that it has some lovely defaults, and end up marrying it in the Church of Default Shell. But for now, you're probably just wondering why I chose to use fish in this watch script, right?

Why, indeed. Honestly, I'm just lazy, so I'm using my default shell. If you're reading this* and think it should work differently, let me know and we'll talk.

entr

This is the simplest file-watching tool I could find and it worked more or less on the first try. ¯\_(ツ)_/¯

In short…once you install entr and install fish, npm run watch should let you make changes to the pipeline and see them instantly reflected in your running Svelte app. As a bonus, entr lets you hit the spacebar to manually trigger the script if, for instance, you've made changes to a Notion page and want to re-pull ready-to-publish pages. It doesn't sound cool when I describe it, but it's crazy magic yumminess in practice! 🍦


*HOW!? I haven't made this repo public yet, so either you've hacked my shit (curse you!!!) or…you're from (hushed tones) The Future! How is it? Still no flying cars, I'm guessing? Ah, well. You can't win 'em all.


Thursday 2021-09-23 03:56:36 by Sourajit Karmakar

mata: Add EXPENSIVE_RENDERING hints for GPU

To start off, mata had a pretty shit kernel. It neither used efficient frequencies nor had great battery life, not to mention the horrible performance. This was fine because most users never gave a fuck, and I (the only guy dailying mata and tinkering with this stuff) just never had a pleasant experience with it.

Looking at the new Kawase blur implementation added to Android 11, I couldn't help but want it ASAP. However, the kernel just wouldn't cooperate (apparently). Anay wanted me to rebase the kernel because, "our kernel visibly didn't respond to GPU boost hints triggered by the surfaceflinger from rendering expensive blur."

Well, after two lengthy kernel rebases (albeit useful ones, as I was able to eliminate a LOT OF JUNK), here we fucking go: it was never the kernel, my genius man xD. Power hint changes taken from [1].

  • While at it, let's boost the GPU's minimum frequency to 515MHz (basically the third step down from the maximum frequency available to the Adreno 540), to further help this old 10nm chad render that beastly but gorgeous-looking blur.

Change-Id: I8f72e68873ea46b8b7a562e5d292422d602cf42d


Thursday 2021-09-23 08:40:35 by Peter Zijlstra

sched/core: Fix ttwu() race

Paul reported rcutorture occasionally hitting a NULL deref:

  sched_ttwu_pending()
    ttwu_do_wakeup()
      check_preempt_curr() := check_preempt_wakeup()
        find_matching_se()
          is_same_group()
            if (se->cfs_rq == pse->cfs_rq) <-- BOOM

Debugging showed that this only appears to happen when we take the new code-path from commit:

2ebb17717550 ("sched/core: Offload wakee task activation if it the wakee is descheduling")

and only when @cpu == smp_processor_id(). Something which should not be possible, because p->on_cpu can only be true for remote tasks. Similarly, without the new code-path from commit:

c6e7bd7afaeb ("sched/core: Optimize ttwu() spinning on p->on_cpu")

this would've unconditionally hit:

smp_cond_load_acquire(&p->on_cpu, !VAL);

and if: 'cpu == smp_processor_id() && p->on_cpu' is possible, this would result in an instant live-lock (with IRQs disabled), something that hasn't been reported.

The NULL deref can be explained, however, if the task_cpu(p) load at the beginning of try_to_wake_up() returns an old value, and this old value happens to be smp_processor_id(). Further assume that the p->on_cpu load accurately returns 1; it really is still running, just not here.

Then, when we enqueue the task locally, we can crash in exactly the observed manner because p->se.cfs_rq != rq->cfs_rq: p's cfs_rq is from the wrong CPU, so we iterate into the non-existent parents and NULL deref.

The closest semi-plausible scenario I've managed to contrive is somewhat elaborate (then again, actual reproduction takes many CPU hours of rcutorture, so it can't be anything obvious):

				X->cpu = 1
				rq(1)->curr = X

CPU0				CPU1				CPU2

				// switch away from X
				LOCK rq(1)->lock
				smp_mb__after_spinlock
				dequeue_task(X)
				  X->on_rq = 0
				switch_to(Z)
				  X->on_cpu = 0
				UNLOCK rq(1)->lock

								// migrate X to cpu 0
								LOCK rq(1)->lock
								dequeue_task(X)
								set_task_cpu(X, 0)
								  X->cpu = 0
								UNLOCK rq(1)->lock

								LOCK rq(0)->lock
								enqueue_task(X)
								  X->on_rq = 1
								UNLOCK rq(0)->lock

// switch to X
LOCK rq(0)->lock
smp_mb__after_spinlock
switch_to(X)
  X->on_cpu = 1
UNLOCK rq(0)->lock

// X goes sleep
X->state = TASK_UNINTERRUPTIBLE
smp_mb();			// wake X
				ttwu()
				  LOCK X->pi_lock
				  smp_mb__after_spinlock

				  if (p->state)

				  cpu = X->cpu; // =? 1

				  smp_rmb()

// X calls schedule()
LOCK rq(0)->lock
smp_mb__after_spinlock
dequeue_task(X)
  X->on_rq = 0

				  if (p->on_rq)

				  smp_rmb();

				  if (p->on_cpu && ttwu_queue_wakelist(..)) [*]

				  smp_cond_load_acquire(&p->on_cpu, !VAL)

				  cpu = select_task_rq(X, X->wake_cpu, ...)
				  if (X->cpu != cpu)
switch_to(Y)
  X->on_cpu = 0
UNLOCK rq(0)->lock

However I'm having trouble convincing myself that's actually possible on x86_64 -- after all, every LOCK implies an smp_mb() there, so if ttwu observes ->state != RUNNING, it must also observe ->cpu != 1.

(Most of the previous ttwu() races were found on very large PowerPC)

Nevertheless, this fully explains the observed failure case.

Fix it by ordering the task_cpu(p) load after the p->on_cpu load, which is easy since nothing actually uses @cpu before this.
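To see the ordering problem outside the scheduler, here is a minimal user-space analogue in C11 atomics (entirely hypothetical: busy stands in for p->on_cpu, location for task_cpu(p), and this is not the kernel patch itself). It illustrates the same point as the fix: the location read must come after the acquire-ordered read of the busy flag, or a stale location can be paired with a fresh busy observation.

```c
/* Hypothetical analogue of the ttwu() ordering fix, not kernel code.
 * Build: cc -std=c11 -pthread ordering.c -o ordering
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static _Atomic int busy = 0;      /* plays the role of p->on_cpu  */
static _Atomic int location = 1;  /* plays the role of task_cpu() */

/* The "task": migrates to CPU 2, then starts running there. */
static void *task(void *arg)
{
    (void)arg;
    atomic_store_explicit(&location, 2, memory_order_relaxed);
    /* Release: the location update is visible to anyone who acquires busy == 1. */
    atomic_store_explicit(&busy, 1, memory_order_release);
    return NULL;
}

/* The "waker": must never pair a stale location with busy == 1. */
static void *waker(void *arg)
{
    (void)arg;
    /* Buggy order (what the old ttwu() effectively allowed): read location
     * first, then busy -- location could still be the stale 1 even though
     * the busy == 1 observed afterwards was stored after the move.
     *
     * Fixed order (what the patch enforces for task_cpu(p)): observe busy
     * with acquire semantics first, then read location. */
    while (!atomic_load_explicit(&busy, memory_order_acquire))
        ;
    int where = atomic_load_explicit(&location, memory_order_relaxed);
    printf("task is busy on %d\n", where);   /* guaranteed to print 2 */
    return NULL;
}

int main(void)
{
    pthread_t t, w;
    pthread_create(&t, NULL, task, NULL);
    pthread_create(&w, NULL, waker, NULL);
    pthread_join(t, NULL);
    pthread_join(w, NULL);
    return 0;
}
```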

Fixes: c6e7bd7afaeb ("sched/core: Optimize ttwu() spinning on p->on_cpu") Reported-by: Paul E. McKenney [email protected] Tested-by: Paul E. McKenney [email protected] Signed-off-by: Peter Zijlstra (Intel) [email protected] Signed-off-by: Ingo Molnar [email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Prashant-1695 [email protected]


Thursday 2021-09-23 10:09:45 by Joonsoo Kim

mm/page_alloc: use ac->high_zoneidx for classzone_idx

Patch series "integrate classzone_idx and high_zoneidx", v5.

This patchset is a follow-up to the problem reported and discussed two years ago [1, 2]. The problem this patchset solves is related to the classzone_idx on NUMA systems. It causes a problem when the lowmem reserve protection exists for some zones on a node that do not exist on other nodes.

This problem was reported two years ago and, at that time, the solution got general agreement [2]. But it was not upstreamed.

This patch (of 2):

Currently, we use classzone_idx to calculate the lowmem reserve protection for an allocation request. This classzone_idx causes a problem on NUMA systems when the lowmem reserve protection exists for some zones on a node that do not exist on other nodes.

Before further explanation, I should first clarify how to compute the classzone_idx and the high_zoneidx.

  • ac->high_zoneidx is computed via the arcane gfp_zone(gfp_mask) and represents the index of the highest zone the allocation can use

  • classzone_idx was supposed to be the index of the highest zone on the local node that the allocation can use, that is actually available in the system

Think about the following example. Node 0 has 4 populated zones: DMA/DMA32/NORMAL/MOVABLE. Node 1 has 1 populated zone: NORMAL. Some zones, such as MOVABLE, don't exist on node 1, and this makes the following difference.

Assume that there is an allocation request whose gfp_zone(gfp_mask) is the MOVABLE zone. Then, its high_zoneidx is 3. If this allocation is initiated on node 0, its classzone_idx is 3, since the actually available/usable zone on the local node (node 0) is MOVABLE. If this allocation is initiated on node 1, its classzone_idx is 2, since the actually available/usable zone on the local node (node 1) is NORMAL.

You can see that the classzone_idx of the allocation request differs according to its starting node, even though the high_zoneidx is the same.

Think more about these two allocation requests. If they are processed locally, there is no problem. However, if an allocation initiated on node 1 is processed remotely, in this example at the NORMAL zone on node 0 due to memory shortage, a problem occurs. Their different classzone_idx leads to a different lowmem reserve and then a different min watermark. See the following example.

root@ubuntu:/sys/devices/system/memory# cat /proc/zoneinfo
Node 0, zone      DMA
  per-node stats
  ...
  pages free     3965
        min      5
        low      8
        high     11
        spanned  4095
        present  3998
        managed  3977
        protection: (0, 2961, 4928, 5440)
  ...
Node 0, zone    DMA32
  pages free     757955
        min      1129
        low      1887
        high     2645
        spanned  1044480
        present  782303
        managed  758116
        protection: (0, 0, 1967, 2479)
  ...
Node 0, zone   Normal
  pages free     459806
        min      750
        low      1253
        high     1756
        spanned  524288
        present  524288
        managed  503620
        protection: (0, 0, 0, 4096)
  ...
Node 0, zone  Movable
  pages free     130759
        min      195
        low      326
        high     457
        spanned  1966079
        present  131072
        managed  131072
        protection: (0, 0, 0, 0)
  ...
Node 1, zone      DMA
  pages free     0
        min      0
        low      0
        high     0
        spanned  0
        present  0
        managed  0
        protection: (0, 0, 1006, 1006)
Node 1, zone    DMA32
  pages free     0
        min      0
        low      0
        high     0
        spanned  0
        present  0
        managed  0
        protection: (0, 0, 1006, 1006)
Node 1, zone   Normal
  per-node stats
  ...
  pages free     233277
        min      383
        low      640
        high     897
        spanned  262144
        present  262144
        managed  257744
        protection: (0, 0, 0, 0)
  ...
Node 1, zone  Movable
  pages free     0
        min      0
        low      0
        high     0
        spanned  262144
        present  0
        managed  0
        protection: (0, 0, 0, 0)

  • static min watermark for the NORMAL zone on node 0 is 750.

  • lowmem reserve for the request with classzone idx 3 at the NORMAL on node 0 is 4096.

  • lowmem reserve for the request with classzone idx 2 at the NORMAL on node 0 is 0.

So, overall min watermark is:

allocation initiated on node 0 (classzone_idx 3): 750 + 4096 = 4846
allocation initiated on node 1 (classzone_idx 2): 750 + 0 = 750

An allocation initiated on node 1 will take some precedence over an allocation initiated on node 0, because the min watermark of the former is lower. So an allocation initiated on node 1 could succeed on node 0 when an allocation initiated on node 0 could not, and this could cause too many numa_miss allocations, degrading performance.
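For concreteness, here is a throwaway sketch of that arithmetic (purely illustrative; the effective_watermark() helper and the array below are hypothetical, but the constants are the ones quoted from /proc/zoneinfo above for node 0's NORMAL zone):

```c
#include <stdio.h>

/* Node 0, zone Normal, as quoted above: static min watermark 750 and
 * protection: (0, 0, 0, 4096), indexed by the request's classzone_idx. */
static const unsigned long normal_min = 750;
static const unsigned long normal_lowmem_reserve[] = { 0, 0, 0, 4096 };

/* Hypothetical stand-in for the kernel's check: a request must leave at
 * least min + lowmem_reserve[classzone_idx] free pages in the zone. */
static unsigned long effective_watermark(int classzone_idx)
{
    return normal_min + normal_lowmem_reserve[classzone_idx];
}

int main(void)
{
    /* Initiated on node 0: classzone_idx 3 (MOVABLE is populated there). */
    printf("node 0 request: %lu\n", effective_watermark(3)); /* 4846 */
    /* Initiated on node 1: classzone_idx 2 (node 1 has no MOVABLE).      */
    printf("node 1 request: %lu\n", effective_watermark(2)); /* 750  */
    return 0;
}
```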

Recently, there was a regression report about this problem on the CMA patches, since CMA memory is placed in ZONE_MOVABLE by those patches. I checked that the problem disappears with this fix, which uses high_zoneidx for classzone_idx.

http://lkml.kernel.org/r/20180102063528.GG30397@yexl-desktop

Using high_zoneidx for classzone_idx is more consistent than the previous approach, because the system's memory layout doesn't affect it at all. With this patch, the classzone_idx in both cases above will be 3, so both will have the same min watermark.

allocation initiated on node 0: 750 + 4096 = 4846
allocation initiated on node 1: 750 + 4096 = 4846

One could wonder whether there is a side effect: an allocation initiated on node 1 would face a higher bar when handled locally, since its classzone_idx could now be higher than before. That will not happen, because a zone without managed pages doesn't contribute to lowmem_reserve at all.

Reported-by: Ye Xiaolong [email protected] Signed-off-by: Joonsoo Kim [email protected] Signed-off-by: Andrew Morton [email protected] Tested-by: Ye Xiaolong [email protected] Reviewed-by: Baoquan He [email protected] Acked-by: Vlastimil Babka [email protected] Acked-by: David Rientjes [email protected] Cc: Johannes Weiner [email protected] Cc: Michal Hocko [email protected] Cc: Minchan Kim [email protected] Cc: Mel Gorman [email protected] Link: http://lkml.kernel.org/r/[email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds [email protected] Signed-off-by: celtare21 [email protected] Signed-off-by: Carlos Jimenez (JavaShin-X) [email protected]


Thursday 2021-09-23 10:53:53 by Billy Einkamerer

Created Text For URL [www.timeslive.co.za/tshisa-live/tshisa-live/2021-09-23-watch--levels-faith-nketsis-boyfriend-gifted-her-this-fancy-range-rover-for-their-anniversary/]


Thursday 2021-09-23 11:29:35 by call girl in mumbai

call girl in mumbai

Mumbai is a distinctive city known for its sky-rise architecture and its stunning Mumbai Call Girls Service. Mumbai escorts are famous worldwide for providing elite escort services. Here is a Mumbai escort agency, my escort Mumbai, that presents to you the most exquisite, gifted, provocative, stunning MUMBAI CALL GIRLS who are always at your service. my escorts Mumbai has been serving people in Mumbai for more than 10 years and has been listed as one of the premier MUMBAI CALL GIRL SERVICE providers.

https://www.mumbaiescortnight.com/kalyan-escorts-call-girls-service-in-kalyan.html


Thursday 2021-09-23 11:40:11 by Hugh Dickins

mm: put_and_wait_on_page_locked() while page is migrated

Waiting on a page migration entry has used wait_on_page_locked() all along since 2006: but you cannot safely wait_on_page_locked() without holding a reference to the page, and that extra reference is enough to make migrate_page_move_mapping() fail with -EAGAIN, when a racing task faults on the entry before migrate_page_move_mapping() gets there.

And that failure is retried nine times, amplifying the pain when trying to migrate a popular page. With a single persistent faulter, migration sometimes succeeds; with two or three concurrent faulters, success becomes much less likely (and the more the page was mapped, the worse the overhead of unmapping and remapping it on each try).

This is especially a problem for memory offlining, where the outer level retries forever (or until terminated from userspace), because a heavy refault workload can trigger an endless loop of migration failures. wait_on_page_locked() is the wrong tool for the job.

David Herrmann (but was he the first?) noticed this issue in 2014: https://marc.info/?l=linux-mm&m=140110465608116&w=2

Tim Chen started a thread in August 2017 which appears relevant: https://marc.info/?l=linux-mm&m=150275941014915&w=2 where Kan Liang went on to implicate __migration_entry_wait(): https://marc.info/?l=linux-mm&m=150300268411980&w=2 and the thread ended up with the v4.14 commits: 2554db916586 ("sched/wait: Break up long wake list walk") 11a19c7b099f ("sched/wait: Introduce wakeup boomark in wake_up_page_bit")

Baoquan He reported "Memory hotplug softlock issue" 14 November 2018: https://marc.info/?l=linux-mm&m=154217936431300&w=2

We have all assumed that it is essential to hold a page reference while waiting on a page lock: partly to guarantee that there is still a struct page when MEMORY_HOTREMOVE is configured, but also to protect against reuse of the struct page going to someone who then holds the page locked indefinitely, when the waiter can reasonably expect timely unlocking.

But in fact, so long as wait_on_page_bit_common() does the put_page(), and is careful not to rely on struct page contents thereafter, there is no need to hold a reference to the page while waiting on it. That does mean that this case cannot go back through the loop: but that's fine for the page migration case, and even if used more widely, is limited by the "Stop walking if it's locked" optimization in wake_page_function().

Add interface put_and_wait_on_page_locked() to do this, using "behavior" enum in place of "lock" arg to wait_on_page_bit_common() to implement it. No interruptible or killable variant needed yet, but they might follow: I have a vague notion that reporting -EINTR should take precedence over return from wait_on_page_bit_common() without knowing the page state, so arrange it accordingly - but that may be nothing but pedantic.

__migration_entry_wait() still has to take a brief reference to the page, prior to calling put_and_wait_on_page_locked(): but now that it is dropped before waiting, the chance of impeding page migration is very much reduced. Should we perhaps disable preemption across this?

shrink_page_list()'s __ClearPageLocked(): that was a surprise! This survived a lot of testing before that showed up. PageWaiters may have been set by wait_on_page_bit_common(), and the reference dropped, just before shrink_page_list() succeeds in freezing its last page reference: in such a case, unlock_page() must be used. Follow the suggestion from Michal Hocko, just revert a978d6f52106 ("mm: unlockless reclaim") now: that optimization predates PageWaiters, and won't buy much these days; but we can reinstate it for the !PageWaiters case if anyone notices.

It does raise the question: should vmscan.c's is_page_cache_freeable() and __remove_mapping() now treat a PageWaiters page as if an extra reference were held? Perhaps, but I don't think it matters much, since shrink_page_list() already had to win its trylock_page(), so waiters are not very common there: I noticed no difference when trying the bigger change, and it's surely not needed while put_and_wait_on_page_locked() is only used for page migration.

[[email protected]: add put_and_wait_on_page_locked() kerneldoc] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Hugh Dickins [email protected] Reported-by: Baoquan He [email protected] Tested-by: Baoquan He [email protected] Reviewed-by: Andrea Arcangeli [email protected] Acked-by: Michal Hocko [email protected] Acked-by: Linus Torvalds [email protected] Acked-by: Vlastimil Babka [email protected] Cc: Matthew Wilcox [email protected] Cc: Baoquan He [email protected] Cc: David Hildenbrand [email protected] Cc: Mel Gorman [email protected] Cc: David Herrmann [email protected] Cc: Tim Chen [email protected] Cc: Kan Liang [email protected] Cc: Andi Kleen [email protected] Cc: Davidlohr Bueso [email protected] Cc: Peter Zijlstra [email protected] Cc: Christoph Lameter [email protected] Cc: Nick Piggin [email protected] Signed-off-by: Andrew Morton [email protected] Signed-off-by: Linus Torvalds [email protected] Change-Id: I3e95f927445a686b02c7a8d9932a2acdc9a1baf5 [[email protected]]: Fixed trivial merge conflicts Git-Commit: 9a1ea439b16b92002e0a6fceebc5d1794906e297 Git-Repo: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git Signed-off-by: Charan Teja Reddy [email protected] (cherry picked from commit 2719bee9e573eaf37e51786d689f933c305ae325) (cherry picked from commit 0e9971bf2f237ece211a329868426f43a62c2a1d)


Thursday 2021-09-23 11:41:56 by TanapTheTimid

fuck you heroku. enabled address sanitizer on docker file.


Thursday 2021-09-23 15:55:09 by ETNA-OV-DV

make it readable

you're such a bad skid literally try harder improved speed by 3000%!!! completely untested because fuck you


Thursday 2021-09-23 16:53:19 by Taliesynth

forgot the summary, i hate my life

apparently this is why it didn't push correctly. but yeah, this is the whole project.


Thursday 2021-09-23 17:35:32 by Tovape

Added Image Loading

God Fucking Christ, really wasted 4 hours for this crap, damnitall


Thursday 2021-09-23 21:19:11 by Dave Hansen

mm/mempolicy: add MPOL_PREFERRED_MANY for multiple preferred nodes

Patch series "Introduce multi-preference mempolicy", v7.

This patch series introduces the concept of the MPOL_PREFERRED_MANY mempolicy. This mempolicy mode can be used with either the set_mempolicy(2) or mbind(2) interfaces. Like the MPOL_PREFERRED interface, it allows an application to set a preference for nodes which will fulfil memory allocation requests. Unlike the MPOL_PREFERRED mode, it takes a set of nodes. Like the MPOL_BIND interface, it works over a set of nodes. Unlike MPOL_BIND, it will not cause a SIGSEGV or invoke the OOM killer if those preferred nodes are not available.

Along with these patches are patches for libnuma, numactl, numademo, and memhog. They still need some polish, but can be found here: https://gitlab.com/bwidawsk/numactl/-/tree/prefer-many It allows new usage: numactl -P 0,3,4

The goal of the new mode is to enable some use-cases when using tiered memory usage models which I've lovingly named.

  1a. The Hare - The interconnect is fast enough to meet bandwidth and latency requirements, allowing preference to be given to all nodes with "fast" memory.
  1b. The Indiscriminate Hare - An application knows it wants fast memory (or perhaps slow memory), but doesn't care which node it runs on. The application can prefer a set of nodes and then xpu-bind to the local node (cpu, accelerator, etc.). This reverses how nodes are chosen today, where the kernel attempts to use memory local to the CPU whenever possible; this will instead attempt to use the accelerator local to the memory.
  2. The Tortoise - The administrator (or the application itself) is aware it only needs slow memory, and so can prefer that.

Much of this is almost achievable with the bind interface, but the bind interface suffers from an inability to fallback to another set of nodes if binding fails to all nodes in the nodemask.

Like MPOL_BIND a nodemask is given. Inherently this removes ordering from the preference.

/* Set first two nodes as preferred in an 8 node system. */
const unsigned long nodes = 0x3;
set_mempolicy(MPOL_PREFER_MANY, &nodes, 8);

/* Mimic interleave policy, but have fallback. */
const unsigned long nodes = 0xaa;
set_mempolicy(MPOL_PREFER_MANY, &nodes, 8);

Some internal discussion took place around the interface. There are two alternatives which we have discussed, plus one I stuck in:

  1. Ordered list of nodes. Currently it's believed that the added complexity is not needed for the expected use cases.
  2. A flag for bind to allow falling back to other nodes. This confuses the notion of binding and is less flexible than the current solution.
  3. Create flags or new modes that help with some ordering. This offers both a friendlier API as well as a solution for more customized usage. It's unknown whether it's worth the complexity to support this. Here is sample code for how this might work:

// Prefer specific nodes for something wacky
set_mempolicy(MPOL_PREFER_MANY, 0x17c, 1024);

// Default
set_mempolicy(MPOL_PREFER_MANY | MPOL_F_PREFER_ORDER_SOCKET, NULL, 0);
// which is the same as
set_mempolicy(MPOL_DEFAULT, NULL, 0);

// The Hare
set_mempolicy(MPOL_PREFER_MANY | MPOL_F_PREFER_ORDER_TYPE, NULL, 0);

// The Tortoise
set_mempolicy(MPOL_PREFER_MANY | MPOL_F_PREFER_ORDER_TYPE_REV, NULL, 0);

// Prefer the fast memory of the first two sockets
set_mempolicy(MPOL_PREFER_MANY | MPOL_F_PREFER_ORDER_TYPE, -1, 2);

This patch (of 5):

The NUMA APIs currently allow passing in a "preferred node" as a single bit set in a nodemask. If more than one bit is set, bits after the first are ignored.

This single node is generally OK for location-based NUMA, where memory being allocated will eventually be operated on by a single CPU. However, in systems with multiple memory types, folks want to target a type of memory instead of a location. For instance, someone might want some high-bandwidth memory but not care about the CPU next to which it is allocated. Or they might want a cheap, high-capacity allocation and want to target all NUMA nodes which have persistent memory in volatile mode. In both of these cases, the application wants to target a set of nodes, but does not want strict MPOL_BIND behavior, as that could lead to the OOM killer or SIGSEGV.

So add MPOL_PREFERRED_MANY policy to support the multiple preferred nodes requirement. This is not a pie-in-the-sky dream for an API. This was a response to a specific ask of more than one group at Intel. Specifically:

  1. There are existing libraries that target memory types such as https://github.com/memkind/memkind. These are known to suffer from SIGSEGV's when memory is low on targeted memory "kinds" that span more than one node. The MCDRAM on a Xeon Phi in "Cluster on Die" mode is an example of this.

  2. Volatile-use persistent memory users want to have a memory policy which is targeted at either "cheap and slow" (PMEM) or "expensive and fast" (DRAM). However, they do not want to experience allocation failures when the targeted type is unavailable.

  3. Allocate-then-run. Generally, we let the process scheduler decide on which physical CPU to run a task. That location provides a default allocation policy, and memory availability is not generally considered when placing tasks. For situations where memory is valuable and constrained, some users want to allocate memory first, then allocate close compute resources to the allocation. This is the reverse of the normal (CPU) model. Accelerators such as GPUs that operate on core-mm-managed memory are interested in this model.

A check is added in sanitize_mpol_flags() to not permit the 'prefer_many' policy to be used for now; it will be removed in a later patch once all implementations for 'prefer_many' are ready, as suggested by Michal Hocko.
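As a self-contained illustration of the interface described above (a sketch under assumptions: it uses the libnuma set_mempolicy() wrapper from <numaif.h>, and the MPOL_PREFERRED_MANY value is defined locally in case the installed headers predate this series):

```c
/* Build (assumes libnuma is installed): cc prefer_many.c -lnuma */
#include <numaif.h>     /* set_mempolicy() wrapper */
#include <errno.h>
#include <stdio.h>
#include <string.h>

#ifndef MPOL_PREFERRED_MANY
#define MPOL_PREFERRED_MANY 5   /* assumed value, matching this series' uapi addition */
#endif

int main(void)
{
    /* Prefer nodes 0 and 1 in an 8-node system, but keep the ability to
     * fall back to the other nodes instead of OOM-killing or SIGSEGV. */
    unsigned long nodes = 0x3;

    if (set_mempolicy(MPOL_PREFERRED_MANY, &nodes, 8) != 0) {
        /* Kernels without the mode (or with the sanitize_mpol_flags()
         * guard still in place) reject it with EINVAL. */
        fprintf(stderr, "set_mempolicy: %s\n", strerror(errno));
        return 1;
    }

    /* Allocations made from here on prefer nodes 0-1 and fall back
     * elsewhere when those nodes are short on memory. */
    return 0;
}
```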

[[email protected]: suggest to refine policy_node/policy_nodemask handling]

Link: https://lkml.kernel.org/r/[email protected] Link: https://lore.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Co-developed-by: Ben Widawsky [email protected] Signed-off-by: Ben Widawsky [email protected] Signed-off-by: Dave Hansen [email protected] Signed-off-by: Feng Tang [email protected] Cc: Michal Hocko [email protected] Acked-by: Michal Hocko [email protected] Cc: Andrea Arcangeli [email protected] Cc: Mel Gorman [email protected] Cc: Mike Kravetz [email protected] Cc: Randy Dunlap [email protected] Cc: Vlastimil Babka [email protected] Cc: Andi Kleen [email protected] Cc: Dan Williams [email protected] Cc: Huang Ying [email protected]b Cc: Michal Hocko [email protected] Signed-off-by: Andrew Morton [email protected] Signed-off-by: Linus Torvalds [email protected]


Thursday 2021-09-23 23:33:45 by Scott Lamb

shutdown better

After a frustrating search for a suitable channel to use for shutdown (tokio::sync::watch::Receiver and futures::future::Shared<tokio::sync::oneshot::Receiver> didn't look quite right) in which I rethought my life decisions, I finally just made my own (server/base/shutdown.rs). We can easily poll it or wait for it in async or sync contexts. Most importantly, it's convenient; not that it really matters here, but it's also efficient.

We now do a slightly better job of propagating a "graceful" shutdown signal, and this channel will give us tools to improve it over time.

  • Shut down even when writer or syncer operations are stuck. Fixes #117
  • Not done yet: streamers should instantly shut down without waiting for a connection attempt or frame or something. I'll probably implement that when removing --rtsp-library=ffmpeg. The code should be cleaner then.
  • Not done yet: fix a couple places that sleep for up to a second when they could shut down immediately. I just need to do the plumbing for mock clocks to work.

I also implemented an immediate shutdown mode, activated by a second signal. I think this will mitigate the streamer wait situation.

