1,815,184 events recorded by gharchive.org, of which 1,815,184 were push events containing 2,905,253 commit messages amounting to 215,762,167 characters, filtered with words.py@e23d022007... down to these 35 messages:
God damn another filter fix. This shit is complicated
Update Payback2_CHEATus to 2.1.9
- GROUNDBREAKING: .conf file is now generated in a beautified way; the config reader is able to read older/new configs and save them as the new config.
- god modes menu message shortened.
- god mode weapon ammo now added to list only if freeze is enabled.
- god mode Rel0ad and Rel0ad grenade modifying mechanic combined.
- NoStealCar,Immortality,ImmortalitySelfXplod modifying mechanic combined.
- replaced 0x0... to 0x...
- wallhack options cleanup.
- removed custom value in wallhack (reason: useless, won't do anything other than make you fall into the void).
- floodspawn now added activate v2 (because I'm TOO ANNOYED when I want to activate it but a lot of non-stop killing is happening. kinda similar to joker in terms of searching).
- added a 20ms option to the particle interval modifier.
- Added log() function (will only print simple debugging info if cfg.enableLogging is set to true).
- Removed deprecated configurations.
- findEntityAnchor MangyuFloatAnchor searching method improved.
- now will check for 508/d08 hexadecimal memory addresses before continuing, to make sure it's the right address.
- Change "mangyuFloatAnchor" > "abjAutoAnchor"
- Changed "abuse" to the more moderate "harm" (thanks JokerGGS for the idea hehe).
- Changed some "DISCLAIMMER" to a shorter "WARNING".
- Replaced "---" with a literal dash (——; saves 1 byte; only changed if it's not a comment).
- Combined Other menus, void mode, match Weapon Ammo cheat into one Match Modifier
- Changed cfg.cheatSettings.findEntityAnchr.searchMethod to cfg.entityAnchrSearchMethod.
- cfg.entityAnchrSearchMethod now uses "abjAutoAnchor" by default.
- Bugfix: god mode randomly searching numbers after selecting back and cancelling the popup.
- Deprecated Mangyu's Big Body cheat (the reason is again, not working).
- fixed number typos in god modes.
- Wall hack AGH search numbers improved (will get more accurate results, able to turn off without memoization, and applied optimized group search).
- Wall hack GKTV fixed (more 'stable' and accurate results, bugfixed vehicle wallhack not working, able to turn off without memoization).
- now will make sure some numbers that should be nil'd will be nil'd by putting them at the end of the function
- findEntityAnchor wordWeaponAmmo searching methods slightly improved (the anchor search now uses DWORD instead of WORD, and searches in smaller ranges)
- Drift speed bug fix (now works and gets accurate results too)
- more backend stuff...
- and even more I can't even put here lol
[MIRROR] Updates Maps And Away Missions MD [MDB IGNORE] (#13095)
-
Updates Maps And Away Missions MD (#66455)
-
Updates Maps And Away Missions MD
Hey there,
This was outdated for a bit, so I decided to pretty it up and make a few things a bit more explicit.
I alphabetized the maps since we don't really prioritize one-over-the-other (except Meta now being the default map instead of the non-existent Box).
I also alphabetized Removed Station Maps, and removed the "outdated" (they are all outdated, or will definitely all be outdated by the time a reader reads this).
I elaborated a bit more on how station maps are loaded these days (correct me if I am wrong).
Standardized how we show code paths.
Gave explicit instructions on never using Dream Maker to map, and linked two programs that we tell anyone who wanders into the Discord to use anyway (please do inform me if we should not do this - but Dream Maker just fucking sucks shit).
I also fixed up some language around the Away Missions part, and added a newer section for the Map Depot since I do not believe it is discussed elsewhere on the main repository (as well as a short warning for anyone who thinks they can get Phobos or something running out-of-the-box).
Alright, cool.
- Updates Maps And Away Missions MD
Co-authored-by: san7890 [email protected]
Update 95 files @ Wed, 27 Apr 2022 02:12:30 GMT This site update changes karen-archive.html, case-directory.html, thealienalliance.html, thebutterflysoldiers.html, schedule.html, youreinthewrongpartoftown.html, aretheyalium.html, logs.html, images.html, theoracle.html, michaeljackson.html, logs.html, dontpush.html, c-8-dickpicks.html, user.html, 914222-1952425161182025.html, zeusandfriends.html, thebannerborn.html, overview.html, reviews.html, dashboard.html, MatPat.html, idkbro.html, thecaptain.html, thedude.html, oopsalbangers.html, 914222-20513165181205.html, inhumanresources.html, 914222-195242514914101.html, ijustmadeyoulookunderthere.html, the-d_.html, user8118151241161251919.html, iinhumanresources.html, terfwar.html, departments.html, captainwhyareyoudoingthistome.html, moriarty.html, 914222-3116201914.html, 914222-3421144920.html, auth.html, cases.html, usa-home.html, .html, dont-push.html, LixianTV.html, employeeaccountabilitytimesignature-portal.html, cart.html, Jupiterandfriends.html, agentloginportal.html, the-asshats.html, illinois.html, corn-dm.html, thejackson5.html, gettingjiggywithit.html, legal.html, logout.html, girlofthe21stcentury.html, capitalism.html, MichaelJackson.html, fuckem.html, NerdFiction.html, shatteredbysomeone.html, terrorblycute.html, everyone.html, messenger.html, 914222-29151493191121139.html, evidence.html, evidence.html, woahhh.html, ohyouwouldlikethatwouldntyou.html, redacted.html, agencydirectory.html, karen6803.html, karen6804.html, 914222-5241612154525.html, lolgotyou.html, thefounders.html, sexygoldarms.html, the-baboonies.html, 2702-invincible2syndicate.html, case-directory.html, Imsorrymissjackson.html, 914222-1916151820192112124214513114.html, nebuchadnezzar.html, karen6816.html, cachow.html, karen6815.html, helpcenter.html, kriskringle.html, byebyebye.html, thekilleidoscope.html, para.docs-portal, suspect-hierarchy.html, Rad_R.html, karen6809.html
Account Settings page (the JS is still wrong)
I walked through the door with you The air was cold But something about it felt like home somehow And I, left my scarf there at your sister's house And you've still got it in your drawer even now Oh, your sweet disposition And my wide-eyed gaze We're singing in the car, getting lost upstate Autumn leaves falling down like pieces into place And I can picture it after all these days And I know it's long gone and that magic's not here no more And I might be okay but I'm not fine at all 'Cause there we are again on that little town street You almost ran the red 'cause you were lookin' over at me Wind in my hair, I was there I remember it all too well Photo album on the counter Your cheeks were turning red You used to be a little kid with glasses in a twin-sized bed And your mother's telling stories 'bout you on the tee-ball team You told me 'bout your past thinking your future was me And I know it's long gone and there was nothing else I could do And I forget about you long enough to forget why I needed to 'Cause there we are again in the middle of the night We're dancing 'round the kitchen in the refrigerator light Down the stairs, I was there I remember it all too well, yeah And maybe we got lost in translation Maybe I asked for too much But maybe this thing was a masterpiece 'til you tore it all up Running scared, I was there I remember it all too well And you call me up again just to break me like a promise So casually cruel in the name of being honest I'm a crumpled up piece of paper lying here 'Cause I remember it all, all, all Too well Time won't fly, it's like I'm paralyzed by it I'd like to be my old self again But I'm still trying to find it After plaid shirt days and nights when you made me your own Now you mail back my things and I walk home alone But you keep my old scarf from that very first week 'Cause it reminds you of innocence And it smells like me You can't get rid of it 'Cause you remember it all too well, yeah 'Cause there we are again when I loved 
you so Back before you lost the one real thing you've ever known It was rare, I was there, I remember it all too well Wind in my hair, you were there, you remember it all Down the stairs, you were there, you remember it all It was rare, I was there, I remember it all too well
beyond the blue bird uphill to the metal cathedral block where evil spirits play the organ to the volcano god at midnight ...
public: Beautify URLs
At a high level, this commit takes the URL of every result, splits it into a protocol, host, and pathname (pathname is split, too) and places rightChevron icons beside them. This makes the URLs much easier to digest and, in my opinion, makes everything so much better.
Throughout the code, you'll notice a few comments related to the functionality here. This was an... open-ended feature and the way I took it may not be ideal; clearly, however, I realized this and have identified those areas where functionality can be altered to tweak the URL output.
Right now, it defaults to a 25 character length on the host and paths before truncating them. It also slices the paths to a max of 3, as that is about how much space we can handle.
The determinePathTrimLength function is important here, as it dictates how long each path should be. Essentially, if the previous item (host as 0) is longer than 25, it will trim the current path to 8.
In practice, this means that if we have a really long URL like https://999999999999999999999999999999.com/999999999999999999999999999999/999999999999999999999999999999/999999999999999999999999999999, it will only show the very second path as a 25 character trim. This means no URL is too long to handle, so we won't get any weird layout errors.
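The trimming rule described above can be sketched in Python. This is a hypothetical port, not the project's actual code (the real determinePathTrimLength lives in the frontend), and the constant names are illustrative:

```python
from urllib.parse import urlparse

MAX_LEN = 25       # default trim length for the host and each path segment
FALLBACK_LEN = 8   # used when the preceding item is already very long
MAX_PATHS = 3      # at most three path segments are displayed

def determine_path_trim_length(items, index):
    # item 0 is the host; if the previous item exceeds MAX_LEN,
    # shorten the current segment more aggressively
    if index > 0 and len(items[index - 1]) > MAX_LEN:
        return FALLBACK_LEN
    return MAX_LEN

def beautify(url):
    parsed = urlparse(url)
    items = [parsed.netloc] + [p for p in parsed.path.split("/") if p][:MAX_PATHS]
    shown = []
    for i, item in enumerate(items):
        limit = determine_path_trim_length(items, i)
        shown.append(item if len(item) <= limit else item[:limit] + "…")
    return " › ".join(shown)
```

With a pathological URL (very long host and segments), every displayed piece stays bounded, which is the "no URL is too long to handle" property the commit is after.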
My focus here was precisely that: to handle any length of URL while still displaying it "nicely." As such, the functionality of certain functions may not be focused towards informational output, but visual output.
As a fallback to all of this, I added the title="" attribute to the link so that one can always see the full URL.
To do this, I had to modify ExternalLink to accept the standard React.AnchorHTMLAttributes props to allow the title attribute implicitly.
In addition to these changes, I added word-break: break-all; to the Description and Context, so that we won't get weird layout issues from those, either.
haha what if we fundamentally didn't understand inheritance wouldn't that be fucking hilarious
magit-section-context-menu: Support non-section branches
We use section keymaps to implement context-sensitive menus, but branches are not always represented using sections. To support such "painted branches" anyway, use fake sections, which closely mirror the commit section in which the click occurred.
This admittedly is ugly and somewhat risky, but seems to work well. `magit-section-update-highlight' would break due to this hack, so we avoid calling it. If it turns out that things also break due to this kludge, then we might have to revert.
feat: Implement QBCore.Shared.VehicleHashs
Describe Pull request: Indexes the vehicle's jenkins joaat (hash) value as the key of the table so we don't have to do some shitty-ass loop through the vehicles comparing joaat values. I have no clue why this secondary table was removed in the first place; if I had to guess, people were lazy, but this should help the lazy by automatically filling the table.
Questions (please complete the following information): Have you personally loaded this code into an updated qbcore project and checked all its functionality? [yes/no] (Be honest) Yes
Does your code fit the style guidelines? [yes/no] Yes
Does your PR fit the contribution guidelines? [yes/no] Yes
Almost halves airlock auto close delay (#65349)
We go from a delay of 15 seconds, to 8.
This has cheesed me off for a long time. Airlocks should lock, not just stay open for a quarter of a minute.
This'll help with excited groups that stay permanently connected at highpop because of a slowed ssair and doors opening and closing constantly.
Also affects door chasing. I'm honestly just kinda eyeballing this, it might be a bit much. Even if this goes through, it could totally be tweaked.
Even if this is too low, 15 is way too damn high.
feat: Implement a new orthogonal range search seed finder (#904)
As I said in #901, I have been playing around with seed finding a little bit lately. Last weekend, I mentioned an idea for a new (?) kind of seed finding algorithm based on range search data structures, and this is the very, very first semi-working implementation of it, just before the weekend.
The idea behind this algorithm is relatively simple. In traditional seedfinding, we check a whole lot of candidate spacepoints to see whether they meet some condition. If you look at this differently, each spacepoint defines a volume in the z-r-φ space, which contains any spacepoints it can form a doublet with. What if we reversed this logic? What if we defined this volume first, and then just extract the spacepoints inside of that space? That way, we can vastly reduce the number of spacepoints we need to look at.
How do we do this quickly? With k-d trees. These data structures are cheap to build, and they give us very fast orthogonal range searches. In other words, we can very quickly look up which of our spacepoints lie within an axis-aligned orthogonal n-dimensional hyperrectangle. In this case, which spacepoints lie within a z-r-φ box.
So, the core idea of this seedfinder is to define as many of our seedfinding constraints in orthogonal fashion. That way, we can make our candidate hyperrectangle smaller and smaller. The tighter the constraints we can place, the better. Then, we look up the relevant spacepoints, and we can avoid looking at any others. That also means this solution requires no binning whatsoever.
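A minimal sketch of the idea in Python (purely illustrative, not the Acts implementation): a tiny k-d tree over (z, r, φ) points with an orthogonal box search:

```python
def build_kdtree(points, depth=0):
    # points: list of (z, r, phi) tuples; cycle through the 3 axes
    if not points:
        return None
    axis = depth % 3
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid], axis,
            build_kdtree(points[:mid], depth + 1),
            build_kdtree(points[mid + 1:], depth + 1))

def range_search(node, lo, hi, out):
    # collect every point inside the axis-aligned box [lo, hi]
    if node is None:
        return
    point, axis, left, right = node
    if all(lo[i] <= point[i] <= hi[i] for i in range(3)):
        out.append(point)
    # only descend into subtrees the box can intersect
    if lo[axis] <= point[axis]:
        range_search(left, lo, hi, out)
    if point[axis] <= hi[axis]:
        range_search(right, lo, hi, out)
```

Each candidate lower spacepoint defines such a box from the doublet constraints; the search then touches only the spacepoints that can possibly pair with it, with no binning involved.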
Currently there are quite a few constraints in the code. Here is my status update on how well it is going to convert each of them. In some cases, we can define a weaker version of the constraints in orthogonal fashion. This is still very powerful, and it doesn't actually lose us any efficiency (because we can always check the tighter constraint in a non-orthogonal way later, not a problem)!
Currently, I am not aware of any unary constraints in the Acts seed finding code. That is to say, logic to determine whether a point is allowed to be a lower spacepoint. However, I have the following thoughts about introducing some:
- I believe the binning code does some kind of magic to determine whether a spacepoint can be a lower spacepoint. Since my solution doesn't use any binning, I don't have access to this just yet. However, if we can incorporate this logic it could be very powerful.
- Maximum single-point η: we currently have some checks in place to see if the pseudorapidity of particles is not too high. We could realistically use this maximum pseudorapidity, combined with the collision region range to constrain the bottom spacepoints.
These are the existing binary constraints on spacepoint duplets:
Constraint | Description | Orthogonalization |
---|---|---|
Minimum ∆r | Ensure that the second spacepoint is within a certain difference in radius | Full |
Maximum cot θ | Ensure that the pseudorapidity of the duplet is not too high | Unsuccessful |
z-origin range | Ensure that the duplet would have originated from the collision point | Weakened |
Maximum ∆φ1 | Ensure that the duplet does not bend too much in the x-y plane | Full |
1 This check does not exist explicitly in the existing seed finder, but is implicit in the binning process.
There are a lot of ternary constraints (to check whether a triplet is valid):
Constraint | Description | Orthogonalization |
---|---|---|
Scattering angle | ??? | Unsuccessful |
Helix diameter | Ensure the helix diameter is within some range | In progress |
Impact parameters | Ensure the impact parameters are close to the collision point | In progress |
Monotonic z1 | Ensure that z increases or decreases monotonically between points | Full |
1 This check does not exist in the existing seed finder, check #901.
There are also constraints defined in the experiment-specific cuts, and the seed filter, and in other places. If we could convert some of those to orthogonal constraints the implementation would become much more powerful. However, I don't really understand what is happening in those files just yet. Need more reading.
The current performance of this seedfinder is... Complicated. On my machine, it runs a 4000 π+ event in about 5 seconds, three times slower than the existing seedfinder. Its efficiency is much higher though, and the fake rate is much lower. So that's something. However, that is in part because I am creating far more seed candidates, so take this with a big grain of salt.
There are two ways that I can think of to use this kind of algorithm. The first is an inside-to-outside algorithm, where we pick a lower spacepoint first, check the space it defines for a middle spacepoint, and then check the space the two of them define for a third spacepoint. This algorithm has time complexity 𝒪(n³), and it has space complexity 𝒪(n). Due to the constants, I still believe this implementation can outperform the 𝒪(n²) existing algorithm, however.
The second way would be to construct a set of duplets using this logic, and then to fit those together like we do with traditional seedfinding. This has 𝒪(n²) time complexity like the existing code, but also space complexity 𝒪(n²).
- The implementation of the k-d tree seems to work very well, and it is quite fast.
- Basic seedfinding using this strategy is functional.
- My maximum ∆φ constraint does not cross the 2π boundary yet.
- I used the existing seedfinding algorithm as a stepping stone, which I have completely destroyed in the process. Obviously I do not intend on keeping it that way, and the existing algorithm will be restored to its full glory.
- Lots more.
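For reference, a common way to compute an angular difference that correctly crosses the 2π boundary (the case the Δφ constraint does not yet handle, per the notes above) is to wrap it into (−π, π]. A generic sketch, not the Acts code:

```python
import math

def delta_phi(a, b):
    # signed angular difference a - b, wrapped into (-pi, pi],
    # so candidates on either side of the 2*pi seam still match
    d = (a - b) % (2.0 * math.pi)
    return d - 2.0 * math.pi if d > math.pi else d
```

The Δφ cut then becomes abs(delta_phi(phi1, phi2)) < dphi_max, independent of where the seam falls.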
- Add more constraints, and tighten existing ones.
- Lots of things, pretty much everything. But I really want to go home for the weekend, so I will write this part next week.
branch: new autosetupmerge option 'simple' for matching branches
With the default push.default option, "simple", beginners are protected from accidentally pushing to the "wrong" branch in centralized workflows: if the remote tracking branch they would push to does not have the same name as the local branch, and they try to do a "default push", they get an error and explanation with options.
There is a particular centralized workflow where this often happens: a user branches to a new local topic branch from an existing remote branch, eg with "checkout -b feature1 origin/master". With the default branch.autosetupmerge configuration (value "true"), git will automatically add origin/master as the upstream tracking branch.
When the user pushes with a default "git push", with the intention of pushing their (new) topic branch to the remote, they get an error, and (amongst other things) a suggestion to run "git push origin HEAD".
If they follow this suggestion the push succeeds, but on subsequent default pushes they continue to get an error - so eventually they figure out to add "-u" to change the tracking branch, or they spelunk the push.default config doc as proposed and set it to "current", or some GUI tooling does one or the other of these things for them.
When one of their coworkers later works on the same topic branch, they don't get any of that "weirdness". They just "git checkout feature1" and everything works exactly as they expect, with the shared remote branch set up as remote tracking branch, and push and pull working out of the box.
The "stable state" for this way of working is that local branches have the same-name remote tracking branch (origin/feature1 in this example), and multiple people can work on that remote feature branch at the same time, trusting "git pull" to merge or rebase as required for them to be able to push their interim changes to that same feature branch on that same remote.
(merging from the upstream "master" branch, and merging back to it, are separate more involved processes in this flow).
There is a problem in this flow/way of working, however, which is that the first user, when they first branched from origin/master, ended up with the "wrong" remote tracking branch (different from the stable state). For a while, before they pushed (and maybe longer, if they don't use -u/--set-upstream), their "git pull" wasn't getting other users' changes to the feature branch - it was getting any changes from the remote "master" branch instead (a completely different class of changes!)
An experienced git user might say "well yeah, that's what it means to have the remote tracking branch set to origin/master!" - but the original user above didn't ask to have the remote master branch added as remote tracking branch - that just happened automatically when they branched their feature branch. They didn't necessarily even notice or understand the meaning of the "set up to track 'origin/master'" message when they created the branch - especially if they are using a GUI.
Looking at how to fix this, you might think "OK, so disable auto setup of remote tracking - set branch.autosetupmerge to false" - but that will inconvenience the second user in this story - the one who just wanted to start working on the topic branch. The first and second users swap roles at different points in time of course - they should both have a sane configuration that does the right thing in both situations.
Make this "branches have the same name locally as on the remote" workflow less painful / more obvious by introducing a new branch.autosetupmerge option called "simple", to match the same-name "push.default" option that makes similar assumptions.
This new option automatically sets up tracking in a subset of the current default situations: when the original ref is a remote tracking branch and has the same branch name on the remote (as the new local branch name).
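The decision rule can be modelled roughly as follows; this is an illustrative Python sketch of the option's intended semantics, not git's C implementation, and the function name is made up:

```python
def sets_up_tracking(autosetupmerge, start_point, new_branch):
    """Rough model: does `git checkout -b new_branch start_point` set an upstream?

    start_point is (remote, branch) when branching from a remote-tracking
    ref, or None when branching from a local ref.
    """
    if autosetupmerge == "false":
        return False
    if autosetupmerge == "always":
        return True                      # track even local start points
    if autosetupmerge == "true":         # current default
        return start_point is not None
    if autosetupmerge == "simple":       # the new option
        # only track when the remote branch name matches the new local name
        return start_point is not None and start_point[1] == new_branch
    return False
```

Under "simple", `checkout -b feature1 origin/feature1` gets tracking set up, while `checkout -b feature1 origin/master` does not, which is exactly the stable state described above.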
Update the error displayed when the 'push.default=simple' configuration rejects a mismatching-upstream-name default push, to offer this new branch.autosetupmerge option that will prevent this class of error.
With this new configuration, in the example situation above, the first user does not get origin/master set up as the tracking branch for the new local branch. If they "git pull" in their new local-only branch, they get an error explaining there is no upstream branch - which makes sense and is helpful. If they "git push", they get an error explaining how to push and suggesting they specify --set-upstream - which is exactly the right thing to do for them.
This new option is likely not appropriate for users intentionally implementing a "triangular workflow" with a shared upstream tracking branch, that they "git pull" in, and a "private" feature branch that they push/force-push to just for remote safe-keeping until they are ready to push up to the shared branch explicitly/separately. Such users are likely to prefer keeping the current default branch.autosetupmerge=true behavior, and change their push.default to "current".
Also extend the existing branch tests with three new cases testing this option - the obvious matching-name and non-matching-name cases, and also a non-matching-ref-type case. The matching-name case needs to temporarily create an independent repo to fetch from, as the general strategy of using the local repo as the remote in these tests precludes locally branching with the same name as in the "remote".
Signed-off-by: Tao Klerks [email protected]
Fixes conveyor runtime (#65788)
Conveyor would runtime whenever it is right-clicked with an item
Fixes #64595 (runtime on conveyor for right-clicking)
Fixes a runtime with the conveyor where right-clicking it with any item would cause a runtime
Mothblocks rant from the issue report below, you've been warned:
Because right-clicking in BYOND is horse-shit. It pipes it all through the normal Click and only tells you it's a right-click through a flag. This means that on anything that isn't prepared, right-clicking is the same as left-clicking, which is terrible UX that only exists in SS13.
Nothing should return ..() from attackby_secondary, because the default is the legacy behavior of making right-click pass as left-click (which I want to kill ASAP, once nothing uses the stupid flags anymore).
Remove else return ..(), and make this whole thing do return SECONDARY_ATTACK_CANCEL_ATTACK_CHAIN.
josephstonecapital2022/Taking-a-strategy-first-approach-to-customized-investment-banking-plans-for-clients@e5e6d9c954...
Update README.md
The recent focus of Joseph Stone Capital complaints has been on the lack of quality care that the wounded warriors have, once they leave a medical facility. The full-service insurance brokerage company complains that the military members who have suffered injuries while battling should be eligible for a host of advantages. Joseph Stone Capital complains that the goal of military medicine should be to salvage the wounded warriors who cannot return to duty. Joseph Stone Capital reviews show that the brokerage thinks that medical ethics cannot defend devoting scarce resources to the badly injured who cannot return to duty. Joseph Stone Capital complains that there cannot be any moral or logical connection between the right to healthcare, and one's duty to defend the nation. Joseph Stone Capital complains that the National Healthcare System should be geared to offer additional healthcare benefits to the wounded soldiers, irrespective of the healthcare received by other citizens. Joseph Stone Capital LLC thinks that the families of casualties suffer in many ways, and should be given assistance to help them thrive. Joseph Stone Capital reviews show that the brokerage focuses on the need to aid the families of the wounded soldiers in ways that help heal the wounds that medicine alone cannot.
Donations to help the wounded veterans
To meet its commitment to aid the wounded soldiers, Joseph Stone Capital reviews show that the privately held full-service investment brokerage has donated to the Wounded Warriors Family Support, a nonprofit organization which has been given a four-star rating by the Charity Navigator. Wounded Warriors Family Support is on a mission to offer support to the families of the soldiers who have been injured, wounded, or killed during combat operations. By donating to the Wounded Warriors Family Support, the investment banking brokerage has ensured that the wounded soldiers have access to better resources.
The privately held full-service investment brokerage considers that donations provide an impactful way to improve the conditions of war veterans, and Joseph Stone Capital reviews show that by donating to the Wounded Warriors Family Support, the investment brokerage has ensured that the soldiers are able to receive the physical and mental support that they need to thrive.
Upholding a culture of powerful philosophies and unique monetary strategies
Joseph Stone Capital reviews show that since its inception, the investment brokerage has upheld a culture of powerful philosophies and unique monetary strategies for its clients. The investment banking brokerage focuses on emerging growth companies, and has helped many clients in raising capital and advising on mergers & acquisitions (M&A). The private investment brokerage takes a personalized approach to meet the client's objectives and goals. Joseph Stone Capital LLC is a member of the Securities Investor Protection Corporation (SIPC), Financial Industry Regulatory Authority (FINRA), and Securities & Exchange Commission (SEC). The wide array of investment services offered by Joseph Stone Capital LLC includes portfolio diversification, public offerings and IPOs, bridge loans, fixed income offerings, secondary offerings, reverse mergers, corporate restructuring, financial advisory, and a lot more. The full-service investment brokerage seeks companies with innovative technologies and intellectual capital with a strong potential for growth.
Committed to meeting the client's financial goals
The brokerage has extensive experience in assisting emerging growth companies to raise capital, along with marketing and regulatory expertise. The brokerage takes a strategy-first approach by creating a unique and customized plan for each of its clients, using both technical and fundamental analysis. The investment brokerage works through volatile markets, and builds strong relationships by committing itself to achieving the client's financial goals. The company's knowledgeable team responds quickly to the client's needs in the fast-changing market environment. The brokerage continues to grow its global network, and its registered representatives provide expert knowledge, industry experience, and independent research to the clients.
The full-service investment brokerage understands the client’s needs, and works to help them succeed and meet their objectives with a personalized approach. It believes that the success of the clients runs parallel to its own, and continues to strive and exceed the demands of the investment industry with valuable insights and financial guidance for the clients. The full-service investment brokerage continues to exceed the demands of its diversified clients with its valuable insights and expert financial guidance. http://www.josephstonecapital.com/about/index.html
Add files via upload
A flow which retrieves tweets from the Twitter profile of the company we are interested in; after performing some data cleaning, we run tone analysis on these tweets so a normal human without any investing knowledge can understand what people around the globe are expressing on social media (to judge whether the company is worth investing in).
To understand the salient topics that are being discussed in the tweets, we generate a word cloud which shows the most repetitive word in the biggest size and other repetitive words around it.
Displaying the tweets, tone (Joy, Fear, Sadness, Anger, Disgust), and the word cloud in the node-red dashboard (https://newnodeibm.eu-gb.mybluemix.net/ui/#!/0?socketid=j8XOvg2_-J6cIPpEAAIp)
run command
"The method exec(String, String[]) from the type Runtime is deprecated since version 18" fuck you java
Created fucking blockchain (pain-in-the-ass job done)
Created Text For URL [punchng.com/how-god-turned-darkest-times-of-my-life-into-blessing-victony/]
decon: one minute?
fuck that shit, who got time to run, open their laptop, search for usb cable, plug the phone, go to adb shell, and proceed to look at dmesg in one minute? The goddamn Flash?
BACKPORT: mm: security: introduce init_on_alloc=1 and init_on_free=1 boot options
Upstream commit 6471384af2a6530696fc0203bafe4de41a23c9ef.
Patch series "add init_on_alloc/init_on_free boot options", v10.
Provide init_on_alloc and init_on_free boot options.
These are aimed at preventing possible information leaks and making the control-flow bugs that depend on uninitialized values more deterministic.
Enabling either of the options guarantees that the memory returned by the page allocator and SL[AU]B is initialized with zeroes. SLOB allocator isn't supported at the moment, as its emulation of kmem caches complicates handling of SLAB_TYPESAFE_BY_RCU caches correctly.
Enabling init_on_free also guarantees that pages and heap objects are initialized right after they're freed, so it won't be possible to access stale data by using a dangling pointer.
As suggested by Michal Hocko, right now we don't let the heap users to disable initialization for certain allocations. There's not enough evidence that doing so can speed up real-life cases, and introducing ways to opt-out may result in things going out of control.
This patch (of 2):
The new options are needed to prevent possible information leaks and make control-flow bugs that depend on uninitialized values more deterministic.
This is expected to be on-by-default on Android and Chrome OS. And it gives the opportunity for anyone else to use it under distros too via the boot args. (The init_on_free feature is regularly requested by folks where memory forensics is included in their threat models.)
init_on_alloc=1 makes the kernel initialize newly allocated pages and heap objects with zeroes. Initialization is done at allocation time at the places where checks for __GFP_ZERO are performed.
init_on_free=1 makes the kernel initialize freed pages and heap objects with zeroes upon their deletion. This helps to ensure sensitive data doesn't leak via use-after-free accesses.
Both init_on_alloc=1 and init_on_free=1 guarantee that the allocator returns zeroed memory. The two exceptions are slab caches with constructors and SLAB_TYPESAFE_BY_RCU flag. Those are never zero-initialized to preserve their semantics.
Both init_on_alloc and init_on_free default to zero, but those defaults can be overridden with CONFIG_INIT_ON_ALLOC_DEFAULT_ON and CONFIG_INIT_ON_FREE_DEFAULT_ON.
If either SLUB poisoning or page poisoning is enabled, those options take precedence over init_on_alloc and init_on_free: initialization is only applied to unpoisoned allocations.
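As a rough user-space sketch of the semantics described above (a toy allocator, not the kernel's actual page/slab paths; `toy_alloc`, `toy_free`, and the flag variables are illustrative names, not the kernel's):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Toy user-space model of the init_on_alloc / init_on_free semantics;
 * the real kernel wires these checks into the page allocator and
 * SL[AU]B, not into malloc(). All names here are illustrative. */
static int init_on_alloc = 1;
static int init_on_free = 1;

static void *toy_alloc(size_t size)
{
	void *p = malloc(size);

	if (p && init_on_alloc)
		memset(p, 0, size); /* like honoring an implicit __GFP_ZERO */
	return p;
}

static void toy_free(void *p, size_t size)
{
	if (p && init_on_free)
		memset(p, 0, size); /* wipe before release: no stale data */
	free(p);
}
```

With both flags set, every allocation comes back zeroed and every freed object is wiped before release, which is the guarantee the commit describes.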
Slowdown for the new features compared to init_on_free=0, init_on_alloc=0:
hackbench, init_on_free=1: +7.62% sys time (st.err 0.74%)
hackbench, init_on_alloc=1: +7.75% sys time (st.err 2.14%)
Linux build with -j12, init_on_free=1: +8.38% wall time (st.err 0.39%)
Linux build with -j12, init_on_free=1: +24.42% sys time (st.err 0.52%)
Linux build with -j12, init_on_alloc=1: -0.13% wall time (st.err 0.42%)
Linux build with -j12, init_on_alloc=1: +0.57% sys time (st.err 0.40%)
The slowdown for init_on_free=0, init_on_alloc=0 compared to the baseline is within the standard error.
The new features are also going to pave the way for hardware memory tagging (e.g. arm64's MTE), which will require both on_alloc and on_free hooks to set the tags for heap objects. With MTE, tagging will have the same cost as memory initialization.
Although init_on_free is rather costly, there are paranoid use-cases where in-memory data lifetime is desired to be minimized. There are various arguments for/against the realism of the associated threat models, but given that we'll need the infrastructure for MTE anyway, and there are people who want wipe-on-free behavior no matter what the performance cost, it seems reasonable to include it in this series.
[[email protected]: v8]
Link: http://lkml.kernel.org/r/[email protected]
[[email protected]: v9]
Link: http://lkml.kernel.org/r/[email protected]
[[email protected]: v10]
Link: http://lkml.kernel.org/r/[email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Alexander Potapenko [email protected]
Acked-by: Kees Cook [email protected]
Acked-by: Michal Hocko [email protected] [page and dmapool parts]
Acked-by: James Morris [email protected]
Cc: Christoph Lameter [email protected]
Cc: Masahiro Yamada [email protected]
Cc: "Serge E. Hallyn" [email protected]
Cc: Nick Desaulniers [email protected]
Cc: Kostya Serebryany [email protected]
Cc: Dmitry Vyukov [email protected]
Cc: Sandeep Patil [email protected]
Cc: Laura Abbott [email protected]
Cc: Randy Dunlap [email protected]
Cc: Jann Horn [email protected]
Cc: Mark Rutland [email protected]
Cc: Marco Elver [email protected]
Signed-off-by: Andrew Morton [email protected]
Signed-off-by: Linus Torvalds [email protected]
Removed the drivers/infiniband/core/uverbs_ioctl.c part, which is not in android-common 4.14 kernel.
Change-Id: I6b5482fcafae89615e1d79879191fb6ce50d56cf
Bug: 138435492
Test: Boot cuttlefish with and without
Test: CONFIG_INIT_ON_ALLOC_DEFAULT_ON/CONFIG_INIT_ON_FREE_DEFAULT_ON
Test: Boot an ARM64 mobile device with and without
Test: CONFIG_INIT_ON_ALLOC_DEFAULT_ON/CONFIG_INIT_ON_FREE_DEFAULT_ON
Signed-off-by: Alexander Potapenko [email protected]
Add files via upload
Help file for the Othernet/Skylark internal downloads file server: HTML accessv6.htm
I changed the name of each version of access.htm because of a clash with the cache in the Firefox browser bringing up elements of the older versions. This is version #6: accessv6.htm and firestickv6.htm support the internal downloads directory and a new selector for the audio news clips and the JPG-viewing HTML files. Clearing the cache on your browser is a good thing; I have found it fixes file-loading errors, like calling an mp3 a video file and not playing the audio.
Unzip the package to a folder on a Windows computer or Android phone that has access to Skylark/Dreamcatcher. Use the Skylark file manager with an admin-level login to upload all of the package files, or just what you need, to the internal "downloads" directory; you will also see the message folders and so on. This internal "downloads" directory will be the root directory of your server. I have also found that Skylark in admin mode, running the file manager, can create subfolders and upload files to them, so you can organize your files and not overload the root downloads directory with mixed file types. I have trouble with my Android phone seeing the sub-files and folders in Skylark. Operating the Skylark file manager from an Android phone, Firefox has the best button response, but you cannot see the file and folder names because of the screen size; the Chrome browser on Android allows you to pull/drag the file names to the far left and the MIME numbers to the right, exposing the files and folders, though the buttons in Chrome are hard to access (I use a small touch stylus to make things a bit better). Skylark works best from a Windows PC, which is also helpful for putting large files on the Othernet secondary memory card for the server.
Use Skylark's file app: select "downloads" in the left window, then select Upload under the File tab, browse for the file to upload, and confirm to upload it to the downloads-directory file server. Files can also be uploaded to a subdirectory if you can select one (hard to do on my Android phone; the screen is too small in Skylark). A Windows PC running Skylark works much better.
Rename your movie files "m1.mp4" through "m6.mp4" and put the six movie files in the internal downloads directory (an m1.mp4 example is included in the server package). Rename your audio files "a1.mp3" through "a6.mp3" and put the six audio files in the internal downloads directory (an a1.mp3 example is included in the package).
The way things work now, you can fill the media folders with files and easily play them with the file selector/parent-directory selector. I have included one JPEG screen-test photo to simulate other types of selectable files. Add subdirectories off the downloads directory with Skylark in an admin login: make directories "JPG", "PIC", and "SHO" for jpg pictures that you want to view quickly. The new HTML files "JPGshow.htm", "PICshow.htm", and "SHOshow.htm", when placed in a proper directory filled with up to 30 ".jpg" picture files, will allow you to scroll up and down and quickly see all the images. You will need to name the files "1.jpg", "2.jpg", "3.jpg", and so forth, and make sure the extension is in lowercase "jpg" when renaming files. This is working now with JPEG picture files of up to 5 MB; scroll up and down to view them.
Test your movies, mp3s, and pictures with Skylark's file manager previewer/reader; some formats won't play and will lock up your browser. Most files off an Android phone will play just fine. In Skylark, use the Log Viewer to look at the Diagnostics to view the internal memory card stats and secondary memory card usage.
To force an up-to-date server file backup to the secondary memory card's downloads directory, just press "Apply & Reboot" in the Skylark network app (needs an admin-level login). Note: as of version #3 and above, we are reading the internal news/message files and everything stays up to date :)
Boot up and log into the Dreamcatcher as guest, go to the file manager's internal downloads directory, select "accessv6.htm", and play it with the reader app. Enjoy. You can also create fast direct /FS/get/ URL bookmarks to accessv6.htm and firestickv6.htm with any "Parent directory" link found in any full-screen message window, or use the external SD card link in Robert's othernet.htm to produce the direct links for your browser's bookmarks. After you log into Skylark as guest, use the magic URL "10.0.0.1/FS/get/home:////downloads/" to access your server files.
The 10.0.0.1 address may be different depending on how you connect; this is for hotspot mode. This internal server should work on all versions of the Dreamcatcher, including the newest ones. Once you get a URL working, bookmark it in your browser for later server access.
If you are not seeing files you know you just uploaded, clear the browser cache or press the refresh button on the browser. It also helps to reload the browser when working with edited/updated HTML files; clearing the browser's cache will ensure everything gets reloaded after an update, and giving an HTML file a fresh new file name is a fast way to make a browser reload everything.
Another strange thing I have found: the URL is "/FS/get/home:///download/" if admin, but "/FS/get/home:////download/" if guest. Most of the existing HTML files work better as guest; this explains a lot of issues with why certain things work on the Dreamcatcher and then they don't. Use admin mode for uploading/downloading, renaming, and deleting files; guest mode works best for normal server file usage. Never test an HTML file as admin unless you know what you are doing.
The backspace/back arrow (or right-click and back arrow) will reactivate the last message window used for message reading, depending on the device or browser.
I recommend the Chrome or Firefox browsers for best performance; plug-ins like the VLC media player app can expand function.
The internal download server works well, but Skylark imposes a 61-64 megabyte upload limit on the file size. Using the secondary FAT32 memory card, you can put full movies on it by removing the card from the Dreamcatcher, putting it in a Windows machine, and copying files to it. The good news is the internal server allows both memory cards to be accessible.
This is my sixth attempt at an HTML server. This one runs on the internal active downloads directory, using HTML code from Robert that does not depend on IP addresses. Use Notepad on a Windows PC to edit an HTML file, and please feel free to change, improve, and share a better version of the Othernet file server. (c)2022 GridWorks, public-domain as-is software. Note: I have included all the HTML examples/files for Othernet/Dreamcatcher I could get. Most things that do not work properly need the network address edited in the HTML file (example: 192.168.1.233); you can use a network scanner like Fing to find your Dreamcatcher's IP address. "accessv6.htm", "firestickv6.htm", and "accessv3.htm" work without the need for an IP address in the HTML. URLs and HTML code differ when in admin mode; most HTML code was written for guest mode!
If you have problems, look to the Othernet Forum; the Dreamcatcher board comes in a couple of different flavors that may force you to tweak the path statements in the HTML. You can add folders to the memory cards and make direct links to PDFs or maps, etc.
All media examples are from the public domain and were included to make the package functional; please use your own video, audio, and data files after the initial install. Thank you.
lib/sort: make swap functions more generic
Patch series "lib/sort & lib/list_sort: faster and smaller", v2.
Because CONFIG_RETPOLINE has made indirect calls much more expensive, I thought I'd try to reduce the number made by the library sort functions.
The first three patches apply to lib/sort.c.
Patch #1 is a simple optimization. The built-in swap has special cases for aligned 4- and 8-byte objects. But those are almost never used; most calls to sort() work on larger structures, which fall back to the byte-at-a-time loop. This generalizes them to aligned multiples of 4 and 8 bytes. (If nothing else, it saves an awful lot of energy by not thrashing the store buffers as much.)
Patch #2 grabs a juicy piece of low-hanging fruit. I agree that nice simple solid heapsort is preferable to more complex algorithms (sorry, Andrey), but it's possible to implement heapsort with far fewer comparisons (50% asymptotically, 25-40% reduction for realistic sizes) than the way it's been done up to now. And with some care, the code ends up smaller, as well. This is the "big win" patch.
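Purely for illustration, the comparison-saving sift can be sketched in user-space C. This is a generic bottom-up heapsort sketch, not the patch's kernel code, and all names are made up: the classic sift-down costs two comparisons per level, while here we descend to a leaf along the path of larger children (one comparison per level) and then climb back up to place the displaced element.

```c
#include <assert.h>
#include <stddef.h>

/* Bottom-up sift: one comparison per level on the way down. */
static void sift_down(int *a, size_t root, size_t n)
{
	size_t i = root, child;
	int x = a[root];

	/* descend to a leaf, always following the larger child */
	while ((child = 2 * i + 1) < n) {
		if (child + 1 < n && a[child + 1] > a[child])
			child++;
		i = child;
	}
	/* climb back up until x fits; during extraction x came from the
	 * bottom of the heap, so this loop usually takes few steps */
	while (x > a[i])
		i = (i - 1) / 2;
	/* rotate x into slot i; everything on the path shifts up */
	while (i > root) {
		int t = a[i];

		a[i] = x;
		x = t;
		i = (i - 1) / 2;
	}
	a[root] = x;
}

static void bottom_up_heapsort(int *a, size_t n)
{
	if (n < 2)
		return;
	for (size_t i = n / 2; i-- > 0;)	/* heapify */
		sift_down(a, i, n);
	for (size_t end = n - 1; end > 0; end--) {
		int t = a[0]; a[0] = a[end]; a[end] = t;
		sift_down(a, 0, end);
	}
}
```

The kernel patch applies the same idea with more care about code size; this sketch only shows where the comparison savings come from.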
Patch #3 adds the same sort of indirect call bypass that has been added to the net code of late. The great majority of the callers use the built-in swap functions, so replace the indirect call to sort_func with a (highly predictable) series of if() statements. Rather surprisingly, this decreased code size, as the swap functions were inlined and their prologue & epilogue code eliminated.
lib/list_sort.c is a bit trickier, as merge sort is already close to optimal, and we don't want to introduce triumphs of theory over practicality like the Ford-Johnson merge-insertion sort.
Patch #4, without changing the algorithm, chops 32% off the code size and removes the part[MAX_LIST_LENGTH+1] pointer array (and the corresponding upper limit on efficiently sortable input size).
Patch #5 improves the algorithm. The previous code is already optimal for power-of-two (or slightly smaller) size inputs, but when the input size is just over a power of 2, there's a very unbalanced final merge.
There are, in the literature, several algorithms which solve this, but they all depend on the "breadth-first" merge order which was replaced by commit 835cc0c8477f with a more cache-friendly "depth-first" order. Some hard thinking came up with a depth-first algorithm which defers merges as little as possible while avoiding bad merges. This saves 0.2*n compares, averaged over all sizes.
The code size increase is minimal (64 bytes on x86-64, reducing the net savings to 26%), but the comments expanded significantly to document the clever algorithm.
TESTING NOTES: I have some ugly user-space benchmarking code which I used for testing before moving this code into the kernel. Shout if you want a copy.
I'm running this code right now, with CONFIG_TEST_SORT and CONFIG_TEST_LIST_SORT, but I confess I haven't rebooted since the last round of minor edits to quell checkpatch. I figure there will be at least one round of comments and final testing.
This patch (of 5):
Rather than having special-case swap functions for 4- and 8-byte objects, special-case aligned multiples of 4 or 8 bytes. This speeds up most users of sort() by avoiding fallback to the byte copy loop.
Despite what ca96ab859ab4 ("lib/sort: Add 64 bit swap function") claims, very few users of sort() sort pointers (or pointer-sized objects); most sort structures containing at least two words. (E.g. drivers/acpi/fan.c:acpi_fan_get_fps() sorts an array of 40-byte struct acpi_fan_fps.)
The functions also got renamed to reflect the fact that they support multiple words. In the great tradition of bikeshedding, the names were by far the most contentious issue during review of this patch series.
x86-64 code size 872 -> 886 bytes (+14)
With feedback from Andy Shevchenko, Rasmus Villemoes and Geert Uytterhoeven.
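To make the generalization concrete, here is a minimal user-space sketch of the same idea: word-at-a-time swapping for any size that is a multiple of 4 (and suitably aligned), with a byte-at-a-time fallback. Names like `generic_swap` are illustrative, not the kernel's actual symbols.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Swap two aligned regions whose size is a multiple of 4, one
 * 32-bit word at a time. */
static void swap_words_4(void *a, void *b, size_t size)
{
	uint32_t *x = a, *y = b;

	for (size_t n = size / 4; n; n--) {
		uint32_t t = *x;
		*x++ = *y;
		*y++ = t;
	}
}

/* Fallback: byte-at-a-time swap for odd sizes or misaligned data. */
static void swap_bytes(void *a, void *b, size_t size)
{
	char *x = a, *y = b;

	while (size--) {
		char t = *x;
		*x++ = *y;
		*y++ = t;
	}
}

static void generic_swap(void *a, void *b, size_t size)
{
	if (size % 4 == 0 && (uintptr_t)a % 4 == 0 && (uintptr_t)b % 4 == 0)
		swap_words_4(a, b, size);
	else
		swap_bytes(a, b, size);
}
```

A 40-byte struct (like the acpi_fan_fps example above) takes the word path here, instead of falling through to the byte loop as the old special cases did.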
Link: http://lkml.kernel.org/r/f24f932df3a7fa1973c1084154f1cea596bcf341.1552704200.git.lkml@sdf.org
Signed-off-by: George Spelvin [email protected]
Acked-by: Andrey Abramov [email protected]
Acked-by: Rasmus Villemoes [email protected]
Reviewed-by: Andy Shevchenko [email protected]
Cc: Rasmus Villemoes [email protected]
Cc: Geert Uytterhoeven [email protected]
Cc: Daniel Wagner [email protected]
Cc: Don Mullis [email protected]
Cc: Dave Chinner [email protected]
Signed-off-by: Andrew Morton [email protected]
Signed-off-by: Linus Torvalds [email protected]
Signed-off-by: Yousef Algadri [email protected]
Signed-off-by: Panchajanya1999 [email protected]
Signed-off-by: John Vincent [email protected]
power: Introduce OnePlus 3 fingerprintd thaw hack
Taken from Oneplus 3, this hack will make fingerprintd recover from suspend quickly.
Small fixes for newer kernels, since we're coming from 3.10.108.
Change-Id: I0166e82d51a07439d15b41dbc03d7e751bfa783b
Co-authored-by: Cyber Knight [email protected]
[cyberknight777: forwardport and adapt to 4.14]
Co-authored-by: Tashfin Shakeer Rhythm [email protected]
[Tashar02: forwardport and adapt to 4.19 and xiaomi_sdm660's fp]
Signed-off-by: Shreyansh Lodha [email protected]
Signed-off-by: Pierre2324 [email protected]
Signed-off-by: PainKiller3 [email protected]
Signed-off-by: Dhruv [email protected]
Signed-off-by: Cyber Knight [email protected]
Signed-off-by: Tashfin Shakeer Rhythm [email protected]
[kawaaii: Adapt to xiaomi_gauguin's fp]
Signed-off-by: kawaaii [email protected]
[MIRROR] Vim mecha changes [MDB IGNORE] (#12981)
- Vim mecha changes (#66153)
This PR changes the following:
- fixes a bug with Vim overlays, making it always appear as if there was a pilot inside, even after leaving it
- adds a balloon alert when a mob fails to enter the mech due to its size
- adds a crafting recipe for Vim in the "robots" category
- allows using Vim as a circuit shell
- allows small mobs to use the mech as well
My reasoning behind the changes:
- fixing the overlay bug - bugfixes good, bugs bad
- balloon alert - it should help reduce confusion among players who can't figure out why on earth they cannot enter the mech
- crafting recipe - I think a crafting recipe will make it a lot more accessible to players, especially because there is no way to learn about its existence in-game
- circuit shell - non-standard circuit shells can be pretty fun, and people seemed to enjoy the ability to use circuits inside their piano synths or cameras, so I figured we could expand on this while giving players more ways to interact with sentient pets
- maximum mob size increase - Vim has never really been built too often, most likely because even if people got their hands on a sentient pet, it probably wouldn't fit in the tiny mecha anyway. Currently pretty much only butterflies, rats and cockroaches can use Vim, and they pretty much never become sentient.
- Vim mecha changes
Co-authored-by: B4CKU [email protected]
Relocates The Entertainment Offices and Custodial Closet on DeltaStation (#17480)
-
Location, Location, Location!
-
Lights and Pipes
I am so sorry for how hacky that disposal piping is
-
TFW Disposals
-
Oh god, what if there is a fire?!
-
And a light switch...
Maybe the final commit? Taking bets on if I managed to forget something else
- If you bet on the requests console
You would be right.
-
Bigger, Better, Janitor
-
Bloody requests console...
i turned swing components into whole processes, i use all namespaces, all refs are in main :McEnroe sorry, i was just practicing my serve :Andy-Samberg how did McEnroe get in here?! :Jonah fuck you, McEnroe!
Add section for warnings about branch divergence
From personal experience and from what I've seen in the git-help channel, it seems like a few people are struggling with the issue of branch divergence. I'm still doing the HTML foundations and didn't realize that making changes to my README file via GitHub would cause the branch divergence issue. I think the Git Basics section should state that making any changes to your files via GitHub's web editor can cause issues, and that if you want to change even the README file, it's best to do it via a local commit and push. I'm aware this is covered later on, but I still believe making this a note in the Git Basics section would keep a lot of users from running into this issue.
major changes across the board for me
insanely crazy work weekend, and I feel I will be grinding all week. Most of my needs for travel are as good as can be expected, with glimmers of hope around the corner everywhere I go.
cleaned up most of the code, and set up some ways to control the game setup as far as I can offer.
I think I got most of the data I need for this particular set of design ideas. Still need animation connected to my data, but I feel I am very close to understanding this aspect of design. so damn close, yet it eludes me.
Added a Pause game when the player hits 250 bricks.
If you click on the GameMngr - you will see several variables I am still wiring up to offer a single source for design options.
scene: SCENE_DesignTest2.5D_R
Hit Play in Unity in this scene; it opens to a single line of green coins, which are currently hardwired to be classified as a "Good Brick." This means it pushes positive values to the GameMngr and ScoringSystems, triggering the scoring system with values relating to:
Current:
TB NoteBrick Total Count
GB Good Brick Count - number of Good bricks obtained
BB Bad Brick Count - number of Bad bricks obtained
LT Loop Ticker - this counts the number of bricks which make up a "loop" in the song structure
LC Loop Counter - after a certain number of ticks from the Loop Ticker (current: 2), add 1 to this
ST State Ticker - when the Loop Counter toggles a new change, this State Ticker resets; sets the value for the number of loops in a State Change
SC State Count - current StateNumber - used for testing code, will be removed and replaced with a new system, noted below
Once this little system was in place, I worked on setting up some basic state changes in the Brick we are currently collecting:
State 1 - bounce in place
State 2 - scales
State 3 - light changes
State 4 - back to State 1, essentially
What systems are missing from gameplay:
Audio
Focus Brick System
Data Save/Push/Retrieval
What I want to change (but the current system mostly works still, so...):
ScoringSystems: TB, GB, BB
GameMngr: ScoringSystemCounts, Player Speed, Game Speed, UI Controls, Volume, Data Storage (min; DataPush preferred), FMOD(?)
UI: StartGame, Pause, EndGame, QuitGame
fuck you Merge branch 'master' of https://github.com/Akshay030595/Akshay030595
last commit for tonight
ahhh it's morning already. anyway, this adds something that i have already forgotten; i have worked on this for so long that my brain is blank
tuning formula damage calculation
the basic formula previously was:
(base_potency * formula_level_mod) * (spell_mod * spell_mod)
we can modify this formula on a spell-by-spell basis, depending on how much of an effect on potency we want both our magic stat and formula level to have. for example, when it comes to balancing, it would make sense that formulas acquired early in the game don't end up scaling ridiculously high, damage-wise. this was an issue i had with Secret of Evermore's alchemy system: almost ANY spell could be overpowered if it was levelled enough, which disincentivized using newly acquired formulas over old ones you'd been levelling.
i would like to balance damage formula acquisition in such a way that if you are a heavy user of formulas for damage, using a newly acquired formula might not do as much damage as one you've been levelling, but in the longer term it will end up out-performing formulas acquired earlier on. in other words, formulas acquired early on should not have their potency affected by formula level AS MUCH as formulas acquired later.
some examples to illustrate:
Hardball - one of, if not the, first formula you acquire in the game - has a base_potency of 4. this number gets multiplied by 1 + (formula_level / 4), and the result of that gets multiplied by spell_mod * spell_mod. in other words, base_potency goes up by 1 for each level of Hardball.
Lightning Strikes has a base_potency of 16. this number gets multiplied by 1 + (formula_level / 8), meaning base_potency goes up by 2 for each level of Lightning Strikes.
basically, the potency increase per level of a formula is its base_potency divided by the number we divide formula_level by. so for Hardball, it's 4 / 4 = 1 potency per level; for Lightning Strikes, it's 16 / 8 = 2 potency per level. either way, the result of each calculation gets multiplied by spell_mod * spell_mod (spell_mod is 1 + magic / 32).
also, there should be a cap on formula_level. i'm thinking maybe 20, or maybe the cap can be increased as the game progresses.
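putting the description above into code, the per-formula scaling can be sketched like this (the base potencies and level divisors come from the examples above; the function name and signature are hypothetical, not engine code):

```c
#include <assert.h>

/* Sketch of the per-formula damage scaling: damage =
 * (base_potency * (1 + formula_level / level_divisor)) * spell_mod^2,
 * with spell_mod = 1 + magic / 32. Hardball would use base_potency 4,
 * level_divisor 4; Lightning Strikes base_potency 16, level_divisor 8. */
static double formula_damage(double base_potency, int formula_level,
			     int level_divisor, int magic)
{
	double level_mod = 1.0 + (double)formula_level / level_divisor;
	double spell_mod = 1.0 + magic / 32.0;

	return base_potency * level_mod * spell_mod * spell_mod;
}
```

so Hardball at level 4 with 0 magic does 4 * 2 = 8, and Lightning Strikes at level 8 with 0 magic does 16 * 2 = 32, matching the "base_potency / divisor potency per level" rule above.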
this would be a more rigid way of hard-balancing, as it would prevent players from "over-levelling" formulas before a certain amount of game progress has been achieved. then again, it might be fun for some to have the option to over-level. the amount of spellcasts is limited by ingredients, after all, and ingredients won't be free.
also, bug to fix: Lightning Strikes instances new lightning-strike hitboxes on each create_lightning_strike(). these new instances get their stats on instantiation, meaning that if you use Lightning Strikes and it levels up mid-cast, the extra lightning strikes instanced after the level-up will calculate their damage with the new values, as opposed to the pre-level-up values.
Fix for fishing minigame to not fuck with xeanes stupid ass adriftus chest
Updates SC_CHANGEUNDEAD behavior (#6867)
- Fixes #6834.
- Versus Players
- Animation will be properly displayed for Blessing/Increase Agility when the target has Change Undead active (buffs are not applied even though animation is displayed).
- Target can no longer be killed through the single damage applied by Blessing/Increase Agility and Change Undead.
- If the target has Curse and Stone active, only Curse is removed by Blessing first (buffs are not applied).
- Shadow or Undead armor have no impact on Blessing or Increase Agility at all.
- Versus Monsters
- Blessing is applied normally to the target as long as it's not an Undead element or Demon race.
- Blessing does not cancel out Curse or Stone. Thanks to @Playtester!
Fix low priority issues (#7413)
Thanks @svetkereMS for bringing this up, driving, and testing.
This fixes two interconnected issues. First, if a process starts at normal priority then changes to low priority, it stays at normal priority. That's good for Visual Studio, which should stay at normal priority, but we relied on passing priority from a parent process to children, which is no longer valid. This ensures that we set the priority of a process early enough that we get the desired priority in worker nodes as well.
Second, if we were already connected to normal priority worker nodes, we could keep using them. This "shuts down" (disconnects—they may keep running if nodeReuse is true) worker nodes when the priority changes between build submissions.
One non-issue (therefore not fixed) is connecting to task hosts that are low priority. Task host nodes currently do not store their priority or node reuse. Node reuse makes sense because it's automatically off always for task hosts, at least currently. Not storing low priority sounds problematic, but it's actually fine, because we make a task host (with the right priority for this build, since we just made it) and connect to it. If we make a new build with different priority, we disconnect from all nodes, including task hosts. Since nodeReuse is always false, the task host dies, and we cannot reconnect to it; if it didn't immediately die, we could erroneously reconnect.
On the other hand, we went a little further and didn't even specify that task hosts should take the priority assigned to them as a command line argument. That has been changed.
svetkereMS had a chance to test some of this. He raised a couple potential issues:
1. conhost.exe launches as normal priority. Maybe some custom task DLLs or other (MEF?) extensions will do something between MSBuild start time and when its priority is adjusted.
2. Some vulnerability if MSBuild init code improperly accounts for timing.
For (1), how is conhost.exe related to MSBuild? It sounds like a command prompt thing. I don't know what MEF is. For (2), what vulnerability? Too many processes starting and connecting to task hosts with different priorities simultaneously? I could imagine that being a problem but don't think it's worth worrying about unless someone complains.
He also mentioned a potential optimization if the main node stays at normal priority. Rather than making a new set of nodes, the main node could change the priority of all its nodes to the desired priority. Then it can skip the handshake, and if it's still at normal priority, it may be able to both raise and lower the priority of its children. Since there would never be more than 2x the "right" number of nodes anyway, and I don't think people will be switching rapidly back and forth, I think maybe we should file that as an issue in the backlog and get to it if we have time but not worry about it right now.
Edit: I changed "shuts down...worker nodes when the priority changes" to just changing their priority. This does not work on linux or mac. However, Visual Studio does not run on linux or mac, and VS is the only currently known customer that runs in normal priority but may change between using worker nodes at normal priority or low priority. This approach is substantially more efficient than starting new nodes for every switch, disconnecting and reconnecting, or even maintaining two separate pools for different builds.