1,575,327 events recorded by gharchive.org, of which 1,575,327 were push events containing 2,088,959 commit messages amounting to 121,361,369 characters, filtered with words.py@e23d022007... down to these 36 messages:
Security Zone Update (#201)
-
Update baystation12.dme
-
9mm/.45ACP Buff
Changes are as follows:
- Buffed 9mm dam from 8 to 25 (now it doesn't take a whole mag to take down an unarmoured man)
- Buffed .45 dam from 10 to 30
- Nerfed 9mm AP from 34 to 30
- Buffed 7.62 dam from 40 to 50 (it's supposed to be beefier than 5.56mm)
- Organ changes
Lower organ health value to make combat much deadlier. Headshots are truly lethal now.
- Slight rebalance and renamings
List of changes :
- decreased brain health to 150 (instead of 200), it's high enough that medical assistance can be given if fast but low enough that you don't want to get shot
- increased damage values of weapons to baystation/nebula level (40 for a pistol for eg)
- increased adrenalin generation when hurt (less fading in and out, you can still use your gun when hit and pain won't be such a pain in the ass, but you're less likely to get back up once the final shot hits you)
- decreased relative size of lungs from 60% to 30% so that now, getting hit in the chest won't have as much chance of damaging your poor fucking lungs (yes 60% is the original baystation number.. it makes sense, it's a large organ, but it's a pain in the ass)
- changed some names and descriptions of certain weapons and firearms to better fit established naming convention
- made revolvers cycle the barrel instead of ejecting each shot (it's a revolver, not a damn rifle)
- slightly decreased firing delay for the mk9 revolver (slightly weaker than the mateba so slightly faster firing)
- decreased firing delay for the mk9 pistol (lower caliber, less recoil, easier to magdump)
- Mateba fix
Fixed a typo for the Mateba that incorrectly specified its caliber.
- Mateba fix
Fixing typo
- The big Security Zone update
Main changes:
- Each zone now has unique uniforms and gear (MTF is dressed tactically and well armoured to act as the toughest defense force for the critical area in HCZ and to pass as a rapid response team, EZ gets slightly formal but still utilitarian gear to accomplish their internal security and bodyguarding tasks, and LCZ largely remains the same)
- All security guards spawn with most of their gear on themselves and in their satchel (with the exception of webbings, the thigh holster and the rifle/p90)
- Nerfed Beanbag damage from 10 to 3 so that it stops causing bleeding injuries to CDs, making it more usable as non-lethal ammunition
- Slightly nerfed rubber P90 agony damage
Co-authored-by: tichys [email protected]
[READY] [KC13] Showing "The Derelict" some love: General updates, aesthetic changes and misc (#67696)
With this PR I aim to make KC13 (TheDerelict.dmm), or Russian Station (whatever you guys call it), a tad bit more flavorful with its environment, as well as some things on the mapping side (like adding area icons!). To preface: no, I'm not remapping anything here extensively. The general layout should be relatively the same (or should be, in theory).
Halfway through naming the area icons I checked the wiki page and found out it was KC not KS, so it's KS13 internally.
Readability for turf icons is cool. Also, just making the ruin more appealing to the eye would be better. General cleanup and changes will give new life to this rather... loved? Hated? Loot pinata? Ruin. The ruin also now starts completely depowered, like Old Station (it's a Derelict, it makes no sense for it to still be powered after so long). As for some mild compensation, a few more batteries were sprinkled in to offset any issues. If there is any concern of "But they'll open the vault faster!", there were always 5 batteries that people used to make the vault SMES. Lastly, giving it some visual storytelling is cool, as mapping fluff goes.
I also added a subtle OOC hint that the SMES in the northernmost solar room needs a terminal with the following:
SMES Jumpscare
As an aside, I aim to try and keep the feel of this ruin being "dated" while at the same time having some of our newer things. With that, certain things I'll opt out of using in favor of more "generic" structures to give KC13 that true "it's old but not really" feel and look.
Use secure RNG to generate passwords (#2726)
- use secure rng to generate passwords
quoting MDN:
Math.random() does not provide cryptographically secure random numbers. Do not use them for anything related to security. Use the Web Crypto API instead
My RNG is kinda shitty. I know there is some fast way to cut down higher digits to get a digit in range without introducing bias, but I also know that other people have introduced bias by trying to do that on an initially secure RNG and getting it wrong (IIRC it's discussed here? https://www.youtube.com/watch?v=LDPMpc-ENqY - been years since I saw the talk, but I know Lavavej discussed it in one of his presentations, I think it was that one), but anyway this is fast enough, and secure.
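For reference, the bias-free trick being alluded to is rejection sampling over a secure source. A minimal sketch of the idea, written in Python with the secrets module rather than the project's JavaScript Web Crypto call; the alphabet, length, and function name are illustrative, not the project's:

# Illustrative sketch only - not the project's code. Rejection sampling over a
# secure source avoids modulo bias. Python's secrets module wraps the OS CSPRNG
# (secrets.choice would also do this for you).
import secrets
import string

ALPHABET = string.ascii_letters + string.digits  # 62 symbols, chosen for the example

def random_string(length: int) -> str:
    """Build a password from cryptographically secure, unbiased draws."""
    out = []
    while len(out) < length:
        b = secrets.randbits(8)  # one secure value in 0..255
        # 248 is the largest multiple of 62 that fits in a byte; rejecting
        # 248..255 keeps every symbol equally likely (no modulo bias).
        if b < 248:
            out.append(ALPHABET[b % len(ALPHABET)])
    return "".join(out)

print(random_string(20))

In JavaScript the same idea applies to bytes from crypto.getRandomValues: reject any value at or above the largest multiple of the alphabet size and draw again.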
-
shorter name
-
randomString2 / centralize js string generation
-
missed 2
Life is one big road with lots of signs. So when you riding through the ruts, don't complicate your mind. Flee from hate, mischief and jealousy. Don't bury your thoughts, put your vision to reality. Wake Up and Live!
Ball Man is Better AND I fixed some other shit
The original implementation of Ball Man was dictated solely by whether or not you're grounded. Undoing Ball Man is still dictated by grounded, but becoming Ball Man is now determined by: Jumping, Grappling, or Zooming!
I also fixed some glitches with dashing. Namely this glitch: "if you press the dash button and no direction it will wait until the next direction you press then immediately dash."
I also made a slight modification to zooming. Collisions no longer completely halt momentum, I wonder if we'll all like that or if we'll be changing it back because we're sliding around everywhere like a fucking greased hog.
Overheat warnings to both gatling guns (#742)
-
The notorious
-
Epic
-
FUCK YOU
-
I am going to beat you with a club
nix-shell: restore backwards-compat with old nixpkgs
Basically an attempt to resume fixing #5543 for a breakage introduced earlier[1]: when evaluating an older nixpkgs with nix-shell, the following error occurs:
λ ma27 [~] → nix-shell -I nixpkgs=channel:nixos-18.03 -p nix
error: anonymous function at /nix/store/zakqwc529rb6xcj8pwixjsxscvlx9fbi-source/pkgs/top-level/default.nix:20:1 called with unexpected argument 'inNixShell'
at /nix/store/zakqwc529rb6xcj8pwixjsxscvlx9fbi-source/pkgs/top-level/impure.nix:82:1:
81|
82| import ./. (builtins.removeAttrs args [ "system" "platform" ] // {
| ^
83| inherit config overlays crossSystem;
This is a problem because one of the main selling points of Nix is that
you can evaluate any old Nix expression and still get the same result
(which also means that it still evaluates). In fact we're deprecating, but not removing a lot of stuff for that reason such as unquoted URLs[2] or builtins.toPath. However this property was essentially thrown away here.
The change is rather simple: check if inNixShell is specified in the formals of an auto-called function. This means that

{ inNixShell ? false }:
builtins.trace inNixShell
  (with import <nixpkgs> { }; makeShell { name = "foo"; })

will show trace: true, while

args@{ ... }:
builtins.trace args.inNixShell
  (with import <nixpkgs> { }; makeShell { name = "foo"; })

will throw the following error:

error: attribute 'inNixShell' missing
This is explicitly needed because the function in pkgs/top-level/impure.nix of e.g. NixOS 18.03 has an ellipsis[3], but passes the attribute-set on to another lambda with formals that doesn't have an ellipsis anymore (hence the error from above). This was perhaps a mistake, but we can't fix it anymore. This also means that there's AFAICS no proper way to check if the attr-set that's passed to the Nix code via EvalState::autoCallFunction is eventually passed to a lambda with formals where inNixShell is missing.
However, this fix comes with a certain price. Essentially every shell.nix that assumes inNixShell to be passed to the formals even without explicitly specifying it would break with this[4]. However I think that this is ugly, but preferable:
- Nix 2.3 was declared stable by NixOS up until recently (well, it still is as long as 21.11 is alive), so most people might not have even noticed that feature.
- We're talking about a way shorter time-span with this change being in the wild, so the fallout should be smaller IMHO.
[1] https://github.com/NixOS/nix/commit/9d612c393abc3a73590650d24bcfe2ee57792872
[2] NixOS/rfcs#45 (comment)
[3] https://github.com/NixOS/nixpkgs/blob/release-18.03/pkgs/top-level/impure.nix#L75
[4] See e.g. the second expression in this commit message or the changes for tests/ca/nix-shell.sh.
Revisiting The Goliath: Or, that time I dripped out the SBC Starfury just because (#68126)
Drips the SHIT out of the SBC Starfury while not completely overhauling it. Touches everything NOT in engineering or southward (because I love how scuffed that part is and refuse to touch it on principle) - Also converts one map varedit into a real boy subtype, and moves tiny fans to their own file.
Mandatory disclosure on the gameplay changes: Fighters 1 and 3 are now NOT in the hangar, and are now attached to the formerly unused gunnery rooms. Cryo now works. Yeah. I know. You can actually open the anesthetic closet now. Everyone now shares three spawners. This doesn't reduce the amount of people who can play when this rolls, as I've adjusted var/uses in accordance: it just reduces clutter. A few of the horizontal double airlocks have been compacted into glass_large airlocks. The bar windows now actually have grilles like they were meant to. Four turbines have shown up. They aren't functional*, they just look like gunnery and conveniently fit in the spots. I'm sure this is space OSHA compliant. The map is ever so slightly smaller, vertically. This should distance us from an edge case where somehow all space levels are too cluttered for this to spawn properly, for the time being.
*Technically there's nothing stopping you from using them besides the amount of time it'd take for the operatives to kick your ass
This map was originally designed wayyy back before we even had the computer sprites we have now (#27760 if you want to see SOUL), and it shows. While it will never have its SM again, we can at least make the thing much nicer to look at.
Planar/Undead summon balance pass
closes #482 closes #481
Hound Archon: 11 Outsider/4 Fighter -> 10 Outsider; 18 str -> 17 str; AC 24 -> AC 21; no weapon spec; no toughness; Greatsword +1 d8 fire vs. evil -> Longsword +1 d8 fire vs. evil
Green Slaad: 9 Outsider -> 10 Outsider; Neutral Evil -> Chaotic Neutral; Spells: 1 x Hammer of the Gods [Caster Level: 10], 1 x Negative Energy Burst [Caster Level: 10], 2 x Scintillating Sphere [Caster Level: 10]; Monster Abilities: 4 x Chaos Spittle, 1 x Summon Slaad; +2 enhancement bonus on creature weapons
Succubus: 5 Rogue/5 Outsider -> 10 Outsider; Spells: 2 x Charm Monster [Caster Level: 10], 2 x Dominate Person [Caster Level: 10], 1 x Evard's Black Tentacles [Caster Level: 10], 3 x Vampiric Touch [Caster Level: 10]; +2 enhancement bonus on creature weapons; 1d6 damage bonus (up from 1d3); still has level drain on hit
Red Slaad: Neutral Evil -> Chaotic Neutral; 7 Outsider -> 8 Outsider; 22 str -> 18 str; 16 dex -> 13 dex; 20 con -> 16 con; Monster Abilities: 2 x Chaos Spittle, 1 x Howl, Stunning
Skeleton Warrior: Undead 6 -> Undead 5; Chainmail +1 -> Chainmail; Spear +1 -> Spear; Spell Resistance 14 -> Spell Resistance 12; Turn Resistance +4 -> Turn Resistance +2
Skeleton Chieftain: Undead 7; Spear +1; Banded Mail; Spell Resistance 14; Turn Resistance +4
Replaced the lvl 12 and under Create Undead Ghoul with the Ghoul Lord: Spells: 1 x Animate Dead [Caster Level: 10], 3 x Ghoul Touch [Caster Level: 10], 2 x Haste [Caster Level: 10]; Monster Abilities: 1 x Aura of Menace (doom); +1 enhancement bonus on creature weapons; stuns on bite attack
Replaced the lvl 12 Create Undead Ghast with the Ghast Ravager: Monster Abilities: 1 x Aura of Unnatural (fear), 1 x Tyrant Fog Zombie Mist, 1 x Rage; better physical stats than the Ghoul Lord; +2 enhancement bonus; hold on claw hits 50% / 2 rounds
Lantern Archon: spell caster levels changed to 8; Spells: 1 x Aid [Caster Level: 8], 1 x Magic Circle against Alignment [Caster Level: 8], 3 x Searing Light [Caster Level: 8]; Monster Abilities: 2 x Pulse, Holy
It's kinda tricky, this framework. I spent hours trying to achieve a modal window and it was as easy as just putting position: fixed, not absolute, and NOT DOING TEDIOUS MEDIA QUERIES MY GOSH. I HATE FRONT END
Do not allow focus to drift from fullscreen client via focusstack()
It generally doesn't make much sense to allow focusstack() to navigate away from the selected fullscreen client, as you can't even see which client you're selecting behind it.
I have had this up for a while on the wiki as a separate patch[0], but it seems reasonable to avoid this behaviour in dwm mainline, since I'm struggling to think of any reason to navigate away from a fullscreen client other than a mistake.
0: https://dwm.suckless.org/patches/alwaysfullscreen/
Update README.md
Project Description: In this particular project, we are using the insurance.csv dataset that contains information like age, sex, bmi, children, smoker, region, charges, etc. and using that to predict insurance charges. However, before you go ahead and make a prediction, it is advised that you first pre-process the data, since it may contain some irregularities and noise. In addition, try various tricks and techniques in order to gain the best accuracy (a short sketch of these steps follows the checklist below).
Column Details:
- age: self-explanatory
- sex: male or female
- bmi: body mass index
- children: number of children the person has
- smoker: Yes/No
- region: self-explanatory
- charges: self-explanatory
Part-1: Data Exploration and Pre-processing
- Load the given dataset
- Fill Null value of children column with the value 0
- Replace the Null values of the column bmi with mean value
- Display a scatter plot between age and children
- Display bar plot between bmi and children
- Perform encoding to convert character data into numerical data
- Perform scaling
Part-2: Working with Models
- Separate feature data from target data
- Create a Linear regression model between Features and target data
- Display the test score and training score
- Extract slope and intercept value from the model
- Display Mean Squared Error
- Display Mean Absolute Error
- Display Root mean Squared error
- Display R2 score
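A minimal sketch of the Part-1/Part-2 steps above, assuming the usual pandas/scikit-learn stack; the file name and column names come from the description, while the 80/20 split, the dummy encoding, and the omission of the plotting steps are illustrative choices:

import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("insurance.csv")

# Part-1: fill missing values, encode categorical columns, scale features
df["children"] = df["children"].fillna(0)
df["bmi"] = df["bmi"].fillna(df["bmi"].mean())
df = pd.get_dummies(df, columns=["sex", "smoker", "region"], drop_first=True)

X = df.drop(columns=["charges"])       # feature data
y = df["charges"]                      # target data
X = StandardScaler().fit_transform(X)  # scaling

# Part-2: fit a linear regression and report the requested numbers
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LinearRegression().fit(X_train, y_train)
pred = model.predict(X_test)

print("training score:", model.score(X_train, y_train))
print("test score:", model.score(X_test, y_test))
print("slope:", model.coef_, "intercept:", model.intercept_)
print("MSE:", mean_squared_error(y_test, pred))
print("MAE:", mean_absolute_error(y_test, pred))
print("RMSE:", mean_squared_error(y_test, pred) ** 0.5)
print("R2 score:", r2_score(y_test, pred))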
add User Examples holy shi
I really hope my server doesn't fucking die for no reason
Manually copy trailing attributes on a resize (#12637)
This is a fairly naive fix for this bug. It's not terribly performant, but neither is resize in the first place.
When the buffer gets resized, typically we only copy the text up to the MeasureRight point, the last printable char in the row. Then we'd just use the last char's attributes to fill the remainder of the row.
Instead, this PR changes how reflow behaves when it gets to the end of the row. After we finish copying text, then manually walk through the attributes at the end of the row, and copy them over. This ensures that cells that just have a colored space in them get copied into the new buffer as well, and we don't just blat the last character's attributes into the rest of the row. We'll do a similar thing once we get to the last printable char in the buffer, copying the remaining attributes.
This could DEFINITELY be more performant. I think this current implementation walks the attrs on every cell, then appends the new attrs to the new ATTR_ROW. That could be optimized by just using the actual iterator. The copy after the last printable char bit is also especially bad in this regard. That could likely be a blind copy - I just wanted to get this into the world.
Finally, we now copy the final attributes to the correct buffer: the new one. We used to copy them to the old buffer, which we were about to destroy.
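A toy illustration of the reflow idea described above, not the Terminal codebase itself (the function and parameter names are made up): instead of smearing the last printable character's attribute across the rest of the new row, keep walking the old row's attributes so trailing colored spaces survive.

def reflow_row(chars, attrs, new_width, fill_char=" ", default_attr=0):
    """chars/attrs are parallel per-cell lists for one row of the old buffer."""
    # Last printable cell, i.e. the "MeasureRight" point in the description above.
    last = max((i for i, c in enumerate(chars) if c != fill_char), default=-1)
    new_chars = list(chars[:last + 1])[:new_width]
    new_attrs = list(attrs[:last + 1])[:new_width]
    # New behaviour: manually copy the trailing attributes from the old row
    # (the old behaviour would pad with the last char's attribute instead).
    for i in range(len(new_chars), min(len(chars), new_width)):
        new_chars.append(fill_char)
        new_attrs.append(attrs[i])  # a colored-but-empty cell keeps its color
    # Anything beyond the old row gets the default attribute.
    while len(new_chars) < new_width:
        new_chars.append(fill_char)
        new_attrs.append(default_attr)
    return new_chars, new_attrs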
I'll add more gifs in the morning, not enough time to finish spinning a release Terminal build with this tonight.
Closes #32 🎉🎉🎉🎉🎉🎉🎉🎉🎉 Closes #12567
(cherry picked from commit 855e1360c0ff810decf862f1d90e15b5f49e7bbd)
sched/core: Fix ttwu() race
Paul reported rcutorture occasionally hitting a NULL deref:
sched_ttwu_pending()
  ttwu_do_wakeup()
    check_preempt_curr() := check_preempt_wakeup()
      find_matching_se()
        is_same_group()
          if (se->cfs_rq == pse->cfs_rq) <-- BOOM
Debugging showed that this only appears to happen when we take the new code-path from commit:
2ebb17717550 ("sched/core: Offload wakee task activation if it the wakee is descheduling")
and only when @cpu == smp_processor_id(). Something which should not be possible, because p->on_cpu can only be true for remote tasks. Similarly, without the new code-path from commit:
c6e7bd7afaeb ("sched/core: Optimize ttwu() spinning on p->on_cpu")
this would've unconditionally hit:
smp_cond_load_acquire(&p->on_cpu, !VAL);
and if: 'cpu == smp_processor_id() && p->on_cpu' is possible, this would result in an instant live-lock (with IRQs disabled), something that hasn't been reported.
The NULL deref can be explained however if the task_cpu(p) load at the beginning of try_to_wake_up() returns an old value, and this old value happens to be smp_processor_id(). Further assume that the p->on_cpu load accurately returns 1, it really is still running, just not here.
Then, when we enqueue the task locally, we can crash in exactly the observed manner because p->se.cfs_rq != rq->cfs_rq, because p's cfs_rq is from the wrong CPU, therefore we'll iterate into the non-existent parents and NULL deref.
The closest semi-plausible scenario I've managed to contrive is somewhat elaborate (then again, actual reproduction takes many CPU hours of rcutorture, so it can't be anything obvious):
X->cpu = 1
rq(1)->curr = X
CPU0 CPU1 CPU2
// switch away from X
LOCK rq(1)->lock
smp_mb__after_spinlock
dequeue_task(X)
X->on_rq = 9
switch_to(Z)
X->on_cpu = 0
UNLOCK rq(1)->lock
// migrate X to cpu 0
LOCK rq(1)->lock
dequeue_task(X)
set_task_cpu(X, 0)
X->cpu = 0
UNLOCK rq(1)->lock
LOCK rq(0)->lock
enqueue_task(X)
X->on_rq = 1
UNLOCK rq(0)->lock
// switch to X
LOCK rq(0)->lock
smp_mb__after_spinlock
switch_to(X)
X->on_cpu = 1
UNLOCK rq(0)->lock
// X goes sleep
X->state = TASK_UNINTERRUPTIBLE
smp_mb(); // wake X
ttwu()
LOCK X->pi_lock
smp_mb__after_spinlock
if (p->state)
cpu = X->cpu; // =? 1
smp_rmb()
// X calls schedule()
LOCK rq(0)->lock
smp_mb__after_spinlock
dequeue_task(X)
X->on_rq = 0
if (p->on_rq)
smp_rmb();
if (p->on_cpu && ttwu_queue_wakelist(..)) [*]
smp_cond_load_acquire(&p->on_cpu, !VAL)
cpu = select_task_rq(X, X->wake_cpu, ...)
if (X->cpu != cpu)
switch_to(Y)
X->on_cpu = 0
UNLOCK rq(0)->lock
However I'm having trouble convincing myself that's actually possible on x86_64 -- after all, every LOCK implies an smp_mb() there, so if ttwu observes ->state != RUNNING, it must also observe ->cpu != 1.
(Most of the previous ttwu() races were found on very large PowerPC)
Nevertheless, this fully explains the observed failure case.
Fix it by ordering the task_cpu(p) load after the p->on_cpu load, which is easy since nothing actually uses @cpu before this.
Fixes: c6e7bd7afaeb ("sched/core: Optimize ttwu() spinning on p->on_cpu") Reported-by: Paul E. McKenney [email protected] Tested-by: Paul E. McKenney [email protected] Signed-off-by: Peter Zijlstra (Intel) [email protected] Signed-off-by: Ingo Molnar [email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: kawaaii [email protected]
random: use linear min-entropy accumulation crediting
commit c570449094844527577c5c914140222cb1893e3f upstream.
30e37ec516ae ("random: account for entropy loss due to overwrites") assumed that adding new entropy to the LFSR pool probabilistically cancelled out old entropy there, so entropy was credited asymptotically, approximating Shannon entropy of independent sources (rather than a stronger min-entropy notion) using 1/8th fractional bits and replacing a constant 2-2/√𝑒 term (~0.786938) with 3/4 (0.75) to slightly underestimate it. This wasn't superb, but it was perhaps better than nothing, so that's what was done. Which entropy specifically was being cancelled out and how much precisely each time is hard to tell, though as I showed with the attack code in my previous commit, a motivated adversary with sufficient information can actually cancel out everything.
Since we're no longer using an LFSR for entropy accumulation, this probabilistic cancellation is no longer relevant. Rather, we're now using a computational hash function as the accumulator and we've switched to working in the random oracle model, from which we can now revisit the question of min-entropy accumulation, which is done in detail in https://eprint.iacr.org/2019/198.
Consider a long input bit string that is built by concatenating various smaller independent input bit strings. Each one of these inputs has a designated min-entropy, which is what we're passing to credit_entropy_bits(h). When we pass the concatenation of these to a random oracle, it means that an adversary trying to receive back the same reply as us would need to become certain about each part of the concatenated bit string we passed in, which means becoming certain about all of those h values. That means we can estimate the accumulation by simply adding up the h values in calls to credit_entropy_bits(h); there's no probabilistic cancellation at play like there was said to be for the LFSR. Incidentally, this is also what other entropy accumulators based on computational hash functions do as well.
So this commit replaces credit_entropy_bits(h) with essentially total = min(POOL_BITS, total + h), done with a cmpxchg loop as before.
What if we're wrong and the above is nonsense? It's not, but let's assume we don't want the actual behavior of the code to change much. Currently that behavior is not extracting from the input pool until it has 128 bits of entropy in it. With the old algorithm, we'd hit that magic 128 number after roughly 256 calls to credit_entropy_bits(1). So, we can retain more or less the old behavior by waiting to extract from the input pool until it hits 256 bits of entropy using the new code. For people concerned about this change, it means that there's not that much practical behavioral change. And for folks actually trying to model the behavior rigorously, it means that we have an even higher margin against attacks.
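A toy model of the crediting change, not the kernel code; the min(POOL_BITS, total + h) form and the 256-bit threshold come from the text above, while the POOL_BITS value is an assumption for illustration:

POOL_BITS = 256          # assumed pool size for illustration
MIN_EXTRACT_BITS = 256   # new threshold before first extracting from the input pool

entropy_bits = 0

def credit_entropy_bits(h: int) -> None:
    """Linear min-entropy accumulation: total = min(POOL_BITS, total + h)."""
    global entropy_bits
    entropy_bits = min(POOL_BITS, entropy_bits + h)

def ready_to_extract() -> bool:
    return entropy_bits >= MIN_EXTRACT_BITS

# 256 one-bit credits now reach the threshold, roughly matching the ~256 calls
# to credit_entropy_bits(1) the old asymptotic formula needed to reach 128 bits.
for _ in range(256):
    credit_entropy_bits(1)
print(ready_to_extract())  # True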
Cc: Theodore Ts'o [email protected] Cc: Dominik Brodowski [email protected] Cc: Greg Kroah-Hartman [email protected] Reviewed-by: Eric Biggers [email protected] Reviewed-by: Jean-Philippe Aumasson [email protected] Signed-off-by: Jason A. Donenfeld [email protected] Signed-off-by: Greg Kroah-Hartman [email protected]
Exploration PP - Reworks Outpost Nuke Announcement (#450)
-
Fuck you, die
-
Update nuke_ruin.dm
Stupid " broke everything fuck coding it will make your life very miserable.
My GitHub.
Hello! My name is Jesus Alberto. I am a reckless life-long learner with a bachelor's degree in Health and Safety. But I'm in love with technology, so I'm self-taught in Python and its environment. My principal skills are related to writing, research, translating, summarizing and analytic thinking; but truly, I've always been eager to learn something new and willing to spread my data science skills to all fields, such as full-stack programming for web, desktop and mobile.
Add AGI & CoCoSci
https://github.com/YuzheSHI/awesome-agi-cocosci/#readme
I'm adding AGI & CoCoSci to Awesome to fill its absence of academic resources on artificial general intelligence and computational cognitive sciences.
- AGI & CoCoSci provides users with the following resources:
- Classic and cutting-edge research papers and books categorized by diverse intelligent phenomena, with direct links and links to Google Scholar All Versions, some of which also include links to code and project websites;
- Established institutes and researchers on related topics;
- Eminent scientists and thinkers on related topics with their historical representative books;
- BibTeX templates for composing papers with LaTeX on related topics.
Artificial General Intelligence is an emerging interdisciplinary field that combines artificial intelligence and computational cognitive sciences as the majority, along with probability and mathematical statistics, formal logic, cognitive and developmental psychology, computational philosophy, cognitive neuroscience, and computational sociology. We are promoting high-level machine intelligence by taking inspiration from the way humans learn and think, while obtaining a deeper understanding of human cognition at the same time. We believe that this kind of reciprocal research is a potential way towards our big picture: building intelligent agents with the capacity to handle human-level tasks such as abstracting, explaining, learning, planning, and making decisions.
Please read it multiple times. I spent a lot of time on these guidelines and most people miss a lot.
- Don't waste my time. Do a good job, adhere to all the guidelines, and be responsive.
- You have to review at least 2 other open pull requests. Try to prioritize unreviewed PRs, but you can also add more comments to reviewed PRs. Go through the below list when reviewing. This requirement is meant to help make the Awesome project self-sustaining. Comment here which PRs you reviewed. You're expected to put a good effort into this and to be thorough. Look at previous PR reviews for inspiration. Just commenting “looks good” or simply marking the pull request as approved does not count! You have to actually point out mistakes or improvement suggestions.
- You have read and understood the instructions for creating a list.
- This pull request has a title in the format Add Name of List.
- ✅ Add Swift
- ✅ Add Software Architecture
- ❌ Update readme.md
- ❌ Add Awesome Swift
- ❌ Add swift
- ❌ add Swift
- ❌ Adding Swift
- ❌ Added Swift
- Your entry here should include a short description about the project/theme of the list. It should not describe the list itself. The first character should be uppercase and the description should end in a dot. It should be an objective description and not a tagline or marketing blurb.
- ✅ - [iOS](…) - Mobile operating system for Apple phones and tablets.
- ✅ - [Framer](…) - Prototyping interactive UI designs.
- ❌ - [iOS](…) - Resources and tools for iOS development.
- ❌ - [Framer](…)
- ❌ - [Framer](…) - prototyping interactive UI designs
- Your entry should be added at the bottom of the appropriate category.
- The title of your entry should be title-cased and the URL to your list should end in #readme.
- Example: - [Software Architecture](https://github.com/simskij/awesome-software-architecture#readme) - The discipline of designing and building software.
- The suggested Awesome list complies with the below requirements.
- Has been around for at least 30 days. That means 30 days from either the first real commit or when it was open-sourced. Whatever is most recent.
- Don't open a Draft / WIP pull request while you work on the guidelines. A pull request should be 100% ready and should adhere to all the guidelines when you open it.
- Run awesome-lint on your list and fix the reported issues. If there are false-positives or things that cannot/shouldn't be fixed, please report it.
- The default branch should be named main, not master.
- Includes a succinct description of the project/theme at the top of the readme. (Example)
- ✅ Mobile operating system for Apple phones and tablets.
- ✅ Prototyping interactive UI designs.
- ❌ Resources and tools for iOS development.
- ❌ Awesome Framer packages and tools.
- It's the result of hard work and the best I could possibly produce. If you have not put in considerable effort into your list, your pull request will be immediately closed.
- The repo name of your list should be in lowercase slug format: awesome-name-of-list.
- ✅ awesome-swift
- ✅ awesome-web-typography
- ❌ awesome-Swift
- ❌ AwesomeWebTypography
- The heading title of your list should be in title case format: # Awesome Name of List.
- ✅ # Awesome Swift
- ✅ # Awesome Web Typography
- ❌ # awesome-swift
- ❌ # AwesomeSwift
- Non-generated Markdown file in a GitHub repo.
- The repo should have awesome-list & awesome as GitHub topics. I encourage you to add more relevant topics.
- Not a duplicate. Please search for existing submissions.
- Only has awesome items. Awesome lists are curations of the best, not everything.
- Does not contain items that are unmaintained, have an archived repo, are deprecated, or are missing docs. If you really need to include such items, they should be in a separate Markdown file.
- Includes a project logo/illustration whenever possible.
- Either centered, fullwidth, or placed at the top-right of the readme. (Example)
- The image should link to the project website or any relevant website.
- The image should be high-DPI. Set it to maximum half the width of the original image.
- Entries have a description, unless the title is descriptive enough by itself. It rarely is though.
- Includes the Awesome badge.
- Should be placed on the right side of the readme heading.
- Can be placed centered if the list has a centered graphics header.
- Should link back to this list.
- Has a Table of Contents section.
- Should be named Contents, not Table of Contents.
- Should be the first section in the list.
- Should only have one level of nested lists, preferably none.
- Must not feature Contributing or Footnotes sections.
- Has an appropriate license.
- We strongly recommend the CC0 license, but any Creative Commons license will work.
- Tip: You can quickly add it to your repo by going to this URL: https://github.com/<user>/<repo>/community/license/new?branch=main&template=cc0-1.0 (replace <user> and <repo> accordingly).
- A code license like MIT, BSD, Apache, GPL, etc, is not acceptable. Neither are WTFPL and Unlicense.
- Place a file named license or LICENSE in the repo root with the license text.
- Do not add the license name, text, or a Licence section to the readme. GitHub already shows the license name and link to the full text at the top of the repo.
- To verify that you've read all the guidelines, please comment on your pull request with just the word unicorn.
- Has contribution guidelines.
- The file should be named contributing.md. Casing is up to you.
- It can optionally be linked from the readme in a dedicated section titled Contributing, positioned at the top or bottom of the main content.
- The section should not appear in the Table of Contents.
- All non-important but necessary content (like extra copyright notices, hyperlinks to sources, pointers to expansive content, etc) should be grouped in a Footnotes section at the bottom of the readme. The section should not be present in the Table of Contents.
- The link and description are separated by a dash.
Example:- [AVA](…) - JavaScript test runner.
- The description starts with an uppercase character and ends with a period.
- Consistent and correct naming. For example, Node.js, not NodeJS or node.js.
- Doesn't use hard-wrapping.
- Doesn't include a Travis badge. You can still use Travis for list linting, but the badge has no value in the readme.
- Doesn't include an "Inspired by awesome-foo" or "Inspired by the Awesome project" kinda link at the top of the readme. The Awesome badge is enough.
Go to the top and read it again.
shpipe.py: Fix 'c', 'cv', 'g', 'm', and 'q', per review by my friend S~
-- bin/byotools.py --
rename to 'def shlex_dquote' from 'def shlex_min_quote'
-- bin/git.py --
correct help for 'git.py --' to: git checkout, but keep dreaming of: compaction for qssi = git status --short --ignored
add 3 Chapter Titles to the 3 Git Cheatsheets; add the 4th Chapter of just 1 Line; cooperate with the rename to 'def shlex_dquote' from 'def shlex_min_quote'
-- bin/shpipe.py --
change to match 'bin/byotools.py'
add quirk calls 'make --' even for Make's that can't distinguish 'make --' from 'make'
amp up 'c' with no Parms to choose 'cat - >/dev/null' at Tty, else fall back to 'cat -'
tweak 'cv --' to run as 'cat -ntv' in place of 'cat -ntv --'
tweak 'g' to run as '--color=yes' at Tty Stdout, when 'not (options or seps)' and tweak 'g' with no Positional Args to run as 'g .'
tweak 'm' with no Parms to run as 'make --' with that Sep
code up 'q' redundantly as 'git checkout' here, just to occupy the 'qb/q' space next door to the 'qbin/q' space of 'git.py'
@ def exit_via_shpipe_shproc accept the ' >' mark inside 'cat - >/dev/null' as a mark for 'bash -c' to take, as if 'subprocess.run.shell=True'
tweak up the 'cv -ntv' example to run as 'cv --'
shuffle the FixMe Dreams to end-of-file
msm_thermal: simplified thermal driver
Thermal driver by franco. This is a combination of 9 commits:
msm: thermal: add my simplified thermal driver. Stock thermal-engine-hh goes crazy too soon and too often and offers no way for userland to tweak its parameters
Signed-off-by: franciscofranco [email protected]
msm: thermal: moar magic
Added a sample time between heat levels. The hotter it is, the longer it should stay throttled in that same freq level, therefore cooling down more effectively. Also due to this change freqs have been slightly adjusted. Now the driver will start a bit earlier on boot. A few cosmetic changes too, because why the fuck not.
Signed-off-by: Francisco Franco [email protected]
msm: thermal: reduce throttle point to 60C
Signed-off-by: Francisco Franco [email protected]
msm: thermal: rework previous patches
The changes in the previous patches didn't really work out. Either the device just reboots forever during boot, because it reaches high temperatures and fails to throttle down fast enough to mitigate them, or it just crashes while benchmarking on Geekbench or Antutu, again because when there's a big ass temp spike this doesn't mitigate fast enough.
These changes are confirmed working after testing in all scenarios previously described.
Signed-off-by: Francisco Franco [email protected]
msm: thermal: work faster with more thrust
Last commit was not enough, it mitigated most of the issues, but some users were still having weird shits because temperature wasn't going down as fast as it should. So now queue it every fucking 100ms in a dedicated high prio workqueue. It's my last stand!
Signed-off-by: Francisco Franco [email protected]
msm: thermal: offline cpu2 and cpu3 if things get REALLY rough
Just as a safety measure, put cpu2 and cpu3 to sleep if the heat gets way bad. Also the polling time goes back to the default stock 250ms, since the earlier 100ms change was just a band-aid for a nastier bug that got fixed in the meantime.
Signed-off-by: Francisco Franco [email protected]
msm_thermal: send OFF/ONLINE uevent in hotplug cases
Send the correct uevent after setting a CPU core online or offline. This allows ueventd to set correct SELinux labels for newly created sysfs CPU device nodes.
Bug: 28887345 Change-Id: If31b8529b31de9544914e27514aca571039abb60 Signed-off-by: Siqi Lin [email protected] Signed-off-by: Thierry Strudel [email protected] [Francisco: slightly adapted from Qcom's original patch to apply] Signed-off-by: Francisco Franco [email protected]
Revert "msm_thermal: send OFF/ONLINE uevent in hotplug cases"
Crashes everything if during early early boot the device is hot and starts to throttle. It's madness!
This reverts commit 80e38963f8080c3c9d26374693dd0f0a88f8060b.
msm: thermal: return to original simplified driver
Some users still had a weird issue that I was unable to reproduce, which either consisted of cpu2 and cpu3 getting stuck in offline mode, or, after a gaming session while charging, the device crashing with "hw_reset" and then looping from bootloader -> boot animation forever until the device cooled down.
My test was leaving the device charging during the night, brightness close to max and running Stability Test app with the CPU+GPU suite. I woke up and the device was still running it flawlessly. Rebooted while hot and it booted just fine.
Since I was unable to reproduce the issue, and @osm0sis flashed back to <r92 and was unable to reproduce it anymore, here we go back to that stage.
Only change I made compared to that original driver was simply queue things into a dedicated high prio wq for faster thermal mitigation. Rest is unchanged.
Signed-off-by: Francisco Franco [email protected]
Drop hpack, make it easier to use cabal-install (#3933)
Stack offers a relatively poor developer experience on this repository right now. The main issue is that build products are invalidated far more often than they should be. cabal-install is better at this, but using cabal-install together with hpack is a bit awkward.
Additionally, hpack isn't really pulling its weight these days. Current versions of stack recommend that you check your generated cabal file in, which is a huge pain as you have to explain to contributors to please leave the cabal file alone and edit package.yaml instead (the comment saying the file is auto-generated is quite easy to miss).
Current versions of Cabal also solve the issues which made hpack appealing in the first place, namely:
- common stanzas mean you don't have to repeat yourself for things like -Wall or dependencies
- tests are run from inside a source distribution by default, which means that if you forget to include something in extra-source-files you find out when you run the tests locally, rather than having to wait for CI to fail
- the globbing syntax is slightly more powerful (admittedly not quite as powerful as hpack's, but you can use globs like tests/**/*.purs now, which gets us close enough to hpack that the difference is basically negligible).
We do still need to manually maintain exposed-modules lists, but I am happy to take that in exchange for the build tool not invalidating our build products all the time.
This PR drops hpack in favour of manually-maintained Cabal files, so that it's easier to use cabal-install when working on the compiler. Stack is still the only officially supported build tool though - the CI, contributing, and installation docs all still use Stack.
Stack also works a little better now than it used to, because I think one of the causes of unnecessary rebuilds was us specifying optimization flags in the Cabal file. (Newer versions of Cabal warn you not to do this, so I think this might be a known issue). To ensure that release builds are built with -O2, I've updated the stack.yaml file to specify that -O2 should be used.
Patch 25.06.2022
Hero changes:
Abaddon - 1 skill self damage changed from 30/35/40/45/50/55/60 to 50 - 2 skill cast point changed from 0.452 to 0.3 - 2 skill duration changed from 13 to 15 - 3 skill movement speed changed from 25 to 15/17/19/21/23/25/25
Abyssal Underlord - 2 skill root duration changed from 1.0/1.2/1.4/1.5/1.6/1.7/2.1 to 1.0/1.2/1.3/1.4/1.5/1.6/1.7
Alchemist - shard skill Berserk Potion bonus ms changed from 0 to 30
Ancient Apparition - 2 skill shard damage changed from 40 to 150 - 2 skill shard attack speed slow changed from 20 to 50
Dragon Knight - Fireball damage changed from 75 to 150
Furion - 1 skill Cooldown changed from 14/13/12/11/10/9/8 to 15 - 3 skill treant health changed from 550/650/750/850/950/1050/1150 to 550/600/650/700/750/850/1100 - 3 skill treant damage changed from 45/55/65/75/85/95/105 to 25/30/45/55/60/65/105 - 4 skill Cooldown changed from 50 to 65
Huskar - 4 skill health damage changed from 45 to 20/22/25/27/33/36/45
Chaos Knight - 3 skill crit chance changed from 30 to 25 - 3 skill crit max changed from 190/220/250/280/310/350/400 to 170/190/220/250/290/320/350 - 3 skill lifesteal changed from 30/35/40/45/50/55/60 to 25/30/35/40/45/50/55
Crystal Maiden - 3 skill mana per cast 10/15/20/25/30/35/40
Centaur - 1 skill stun duration changed from 2.0 to 2.0/2.1/2.2/2.3/2.4/2.5/2.6
Charon - Ultimate root can be dispelled
Dawnbreaker - 4 skill damage of pulsation changed from 30/50/70/80/90/100/110 to 50/75/100/125/150/175/200 - 4 skill heal of pulsation changed from 50/70/90/110/130/150/170 to 50/75/100/125/150/175/200 - 4 skill damage changed from 130/160/190/220/250/280/310 to 200/300/400/500/600/700/800
Medusa - Cold Blooded Cooldown changed from 6 to 10
Marci - 4 skill Cooldown changed from 110/100/90/80/70/60/50 to 80/75/70/65/60/55/50 - 4 skill between flurries changed from 1.75 to 1.55/1.50/1.45/1.40/1.35/1.30/1.25 - 4 skill attack Stacks changed from 3/4/5/6/7/8/9 to 4/5/6/7/8/9/10 - talent 25lvl silence duration changed from 1.5sec to 0.5sec
Primal Beast - Added to Angel Arena
Techies - Added Techies skills from Dota
Undying - 4 skill Cooldown changed from 40 to 60 - Shard Cooldown ultimate changed from 35 to 20
Item changes:
Tome of heroes - stack start game 5 - delay before the start of the game 600sec - Max stack 40 - Cooldown stack 20sec - Cost changed from 1250 to 1500
Ethereal Blade - Craft changed from Ghost scepter + Recipe to Mystic staff + Ghost scepter + Recipe
Gem - cast range changed from 300 to 500 - active skill radius changed from 300 to 500 - duration active skill changed from 4 to 8 - Cooldown active skill changed from 12 to 30
Bosses:
Keymaster - now Ignores terrain - ms changed from 365 to 400
Creeps:
Satyr hand - 3 skill magic amplify changed from 50 to 20
Fixes: Storm Spirit 1 skill damage fixed; Earth Spirit ultimate fixed; Elder Titan ultimate fixed; Abyssal Underlord ultimate fixed; Omniknight 3 skill and ultimate fixed; Pudge 3 skill fixed; Templar Assassin 3 skill fixed; Shadow Fiend aghanim fixed; Hoodwink shard skill fixed; Morphling 1 skill fixed; Swift Blink range fixed; Overwhelming Blink range fixed; Arcane Blink range fixed
Patch2
Hero changes:
Abaddon - 1 skill self damage changed from 30/35/40/45/50/55/60 to 50 - 2 skill cast point changed from 0.452 to 0.3 - 2 skill duration changed from 13 to 15 - 3 skill movement speed changed from 25 to 15/17/19/21/23/25/25
Abyssal Underlord - 2 skill root duration changed from 1.0/1.2/1.4/1.5/1.6/1.7/2.1 to 1.0/1.2/1.3/1.4/1.5/1.6/1.7
Alchemist - shard skill Berserk Potion bonus ms changed from 0 to 30
Ancient Apparition - 2 skill shard damage changed from 40 to 150 - 2 skill shard attack speed slow changed from 20 to 50
Dragon Knight - Fireball damage changed from 75 to 150
Furion - 1 skill Cooldown changed from 14/13/12/11/10/9/8 to 15 - 3 skill treant health changed from 550/650/750/850/950/1050/1150 to 550/600/650/700/750/850/1100 - 3 skill treant damage changed from 45/55/65/75/85/95/105 to 25/30/45/55/60/65/105 - 4 skill Cooldown changed from 50 to 65
Huskar - 4 skill health damage changed from 45 to 20/22/25/27/33/36/45
Chaos Knight - 3 skill crit chance changed from 30 to 25 - 3 skill crit max changed from 190/220/250/280/310/350/400 to 170/190/220/250/290/320/350 - 3 skill lifesteal changed from 30/35/40/45/50/55/60 to 25/30/35/40/45/50/55
Crystal Maiden - 3 skill mana per cast 10/15/20/25/30/35/40
Centaur - 1 skill stun duration changed from 2.0 to 2.0/2.1/2.2/2.3/2.4/2.5/2.6
Dawnbreaker - 4 skill damage of pulsation changed from 30/50/70/80/90/100/110 to 50/75/100/125/150/175/200 - 4 skill heal of pulsation changed from 50/70/90/110/130/150/170 to 50/75/100/125/150/175/200 - 4 skill damage changed from 130/160/190/220/250/280/310 to 200/300/400/500/600/700/800
Medusa - Cold Blooded Cooldown changed from 6 to 10
Marci - 4 skill Cooldown changed from 110/100/90/80/70/60/50 to 80/75/70/65/60/55/50 - 4 skill between flurries changed from 1.75 to 1.55/1.50/1.45/1.40/1.35/1.30/1.25 - 4 skill attack Stacks changed from 3/4/5/6/7/8/9 to 4/5/6/7/8/9/10 - talent 25lvl silence duration changed from 1.5sec to 0.5sec
Primal Beast - Added to Angel Arena
Techies - Added Techies skills from Dota
Undying - 4 skill Cooldown changed from 40 to 60 - Shard Cooldown ultimate changed from 35 to 20
Item changes:
Tome of heroes - stack start game 5 - delay before the start of the game 600sec - Max stack 40 - Cooldown stack 20sec - Cost changed from 1250 to 1500
Ethereal Blade - Craft changed from Ghost scepter + Recipe to Mystic staff + Ghost scepter + Recipe
Gem - cast range changed from 300 to 500 - active skill radius changed from 300 to 500 - duration active skill changed from 4 to 8 - Cooldown active skill changed from 12 to 30
Bosses:
Keymaster - now Ignores terrain - ms changed from 365 to 400
Creeps:
Satyr hand - 3 skill magic amplify changed from 50 to 20
Fixes: Storm Spirit 1 skill damage fixed; Earth Spirit ultimate fixed; Elder Titan ultimate fixed; Abyssal Underlord ultimate fixed; Omniknight 3 skill and ultimate fixed; Pudge 3 skill fixed; Templar Assassin 3 skill fixed; Shadow Fiend aghanim fixed; Hoodwink shard skill fixed; Morphling 1 skill fixed; Swift Blink range fixed; Overwhelming Blink range fixed; Arcane Blink range fixed
[NTOS:SE] Properly handle dynamic counters in token
On current master, ReactOS faces these problems:
-
ObCreateObject charges both paged and non-paged pool with the size of the TOKEN structure, not the actual dynamic contents of what is inside a token. For the paged pool charge, the size should be that of the dynamic area (primary group + default DACL, if any). This is basically what DynamicCharged is for. For the non-paged pool charge, the actual charge is that of the TOKEN structure upon creation. On duplication and filtering, however, the paged pool charge size is that of the dynamic charged space inherited from the existing token, whereas the non-paged pool size is that of the calculated token body length for the new duplicated/filtered token. On current master, we're literally cheating the kernel by charging the wrong amount of quota, not taking into account the dynamic contents, which come from UM.
-
Both DynamicCharged and DynamicAvailable are not fully handled (DynamicAvailable is pretty much poorly handled, with some cases still to be taken into account). DynamicCharged is barely handled, like, at all.
-
As a result of the two points above, NtSetInformationToken doesn't check, when the caller wants to set up a new default token DACL or primary group, whether the new DACL or the said group exceeds the dynamic charged boundary. So what happens is that I can act like a smug bastard fat politician and whack the primary group and DACL of a token however I want to, because why in the hell not? In reality no, the kernel has to punish whoever attempts to do that, although we currently don't.
-
The dynamic area (aka DynamicPart) only picks up the default DACL but not the primary group. Generally the dynamic part is composed of the primary group and the default DACL, if provided.
In addition to that, we aren't returning the dynamic charged and available areas in token statistics. The SepComputeAvailableDynamicSpace helper is here to accommodate that. Apparently Windows calculates the dynamic available area rather than just querying the DynamicAvailable field directly from the token. My theory regarding this is the following: on Windows, both the TokenDefaultDacl and TokenPrimaryGroup classes are barely used by the system components during startup (LSASS provides both a DACL and a primary group when calling NtCreateToken anyway). In fact DynamicAvailable is 0 during token creation, duplication and filtering when inspecting a token with WinDBG. So if an application wants to query token statistics, that application will face a dynamic available space of 0.
A little notice
So yeah, for about a couple of weeks I've had a bit of, say, personal problems and a lot of stuff to do on my side (family stuff, not going to even mention it 'cause it's annoying), and I had little to no time to work on the project during that time. Luckily, I've mostly done everything they asked me to do for quite a while, and now I can free up a bunch of time to work on this project. So yeah, expect me going all out tomorrow morning. And expect me spamming like 10-20 commits a day 'cause boi I have a lot of broken things to fix (the landing page has a terrible layout, the dashboard isn't even fully designed yet (still has no fancy card kind of thing we discussed), and the competition desc is a hot mess) and API requests to implement.
So yeah, I'm sorry for all the hassles.
Signed-off-by: Irvan Malik [email protected]
(Staticman) Stephen Cleary: > Hello, Steven. I'm new to ASP.NET and SharePoint; I've been developing in C# for 5 years. Currently, I have a page with a web user control (C#) pasted onto a site page created with SharePoint Designer, and when I click the button, the synchronization process runs and it takes 3 hours. The maximum timeout of the cloud service's ALB can only be set up to 1 hour, so I added asynchronous processing to the C# button click event that pings the ALB once every 30 minutes. When I actually run it, I get an error saying SharePoint needs the async attribute set, and after I added the async attribute to the site page I got another error. When I contacted Microsoft support, I was told that SharePoint site pages don't support the async attribute and that I could do it by creating an application page, but I'm new to SharePoint and to ASP.NET, and I don't know how to write it on the application page. Can you tell me how to write it?
I'm afraid I don't have any experience with SharePoint. But in general, async isn't going to help you if you have a 3-hour web request. You'll need to queue up the work into some kind of durable queue and then process it later. I'm not sure of the best way to do this with SharePoint/ALB, sorry.
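A language-agnostic sketch of the queue-then-process pattern that answer describes, written in Python rather than the asker's C#/SharePoint stack; a real system would use a durable queue (a database table, a cloud queue service, etc.) instead of this in-process stand-in:

import queue
import threading
import time
import uuid

jobs = queue.Queue()  # stand-in for a durable queue

def handle_sync_request() -> str:
    """What the button-click handler does: enqueue the job and return at once."""
    job_id = str(uuid.uuid4())
    jobs.put(job_id)
    return job_id  # the client can poll for status later

def worker() -> None:
    """Runs as a separate service in production, free of any HTTP/ALB timeout."""
    while True:
        job_id = jobs.get()
        print("starting long synchronization", job_id)
        time.sleep(2)  # stand-in for the 3-hour job
        print("finished", job_id)
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()
print("ticket:", handle_sync_request())  # returns immediately
jobs.join()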
docs: make "Software is not magic" link to an awesome blog post on this that expresses my thoughts on this
Found on https://jvns.ca/blogroll/ ("Julia Evans :: Blogs I like"). (Julia Evans is a very kind-hearted person that I follow on Twitter and that I got to know through her awesome "zines" - https://wizardzines.com/ - some of which I have in physical form and lend to new learners or recruits to build a common ground, and which have helped me regain that common ground.)
This blog post really goes into all the things that come with this mindset, including its pitfalls and how to deal with them. I especially found his notes and experiences in "Single-shot debugging in kernel engineering" very interesting, which just TL;DRs to "developer != developer: kernel / low-level developers and others who build foundations are wild, and greatly share and religiously embrace this mindset (of traceability and unmagicness)".
koopas are the worst
Remember when I said "fuck these guys"? I really mean it. Found a dozen bugs that had to be fixed because of these stupid mfs.
one more commit
So fuck yeah, I finally made it. Shit is written to the savefile, read from the savefile, shit's working, yo.
Adding GPS Helper Mod with author permission
Kaito [author] Jul 1 @ 1:20pm No, I had not considered it yet. Feel free to have a look. I'm kinda busy irl right now, so it would be a while before I could look.
In theory it all runs on the client, but you'll have to check the API.
Doc1979 Jun 28 @ 4:37pm Does this use server-only commands? If not, have you thought of making it into a PluginLoader plugin? I was thinking of looking at your GitHub to see if I could convert it myself, but if it's already on your todo list, I'll wait.
Deleting until things. Lol
We don't need this shit. Fuck yourself another time.
Adds a new sprite for the stacker (#196)
fuck you
Merge #18295: scripts: add MACHO lazy bindings check to security-check.py
5ca90f8b598978437340bb8467f527b9edfb2bbf scripts: add MACHO lazy bindings check to security-check.py (fanquake)
Pull request description:
This is a slightly belated follow up to #17686 and some discussion with Cory. It's not entirely clear if we should make this change due to the way the macOS dynamic loader appears to work. However I'm opening this for some discussion. Also related to #17768.
LD64 doesn't set the MH_BINDATLOAD bit in the header of MACHO executables when building with -bind_at_load. This is in contradiction to the documentation:
-bind_at_load
    Sets a bit in the mach header of the resulting binary which tells dyld to
    bind all symbols when the binary is loaded, rather than lazily.
The ld in Apple's cctools does set the bit, however the cctools-port that we use for release builds bundles LD64.
However, even if the linker hasn't set that bit, the dynamic loader (dyld) doesn't seem to ever check for it, and from what I understand, it looks at a different part of the header when determining whether to lazily load symbols.
Note that our release binaries are currently working as expected, and no lazy loading occurs.
Using a small program, we can observe the behaviour of the dynamic loader.
Conducted using:
clang++ --version
Apple clang version 11.0.0 (clang-1100.0.33.17)
Target: x86_64-apple-darwin18.7.0
ld -v
@(#)PROGRAM:ld PROJECT:ld64-530
BUILD 18:57:17 Dec 13 2019
LTO support using: LLVM version 11.0.0, (clang-1100.0.33.17) (static support for 23, runtime is 23)
TAPI support using: Apple TAPI version 11.0.0 (tapi-1100.0.11)
#include <iostream>
int main() {
    std::cout << "Hello World!\n";
    return 0;
}
Compile and check the MACHO header:
clang++ test.cpp -o test
otool -vh test
...
Mach header
magic cputype cpusubtype caps filetype ncmds sizeofcmds flags
MH_MAGIC_64 X86_64 ALL LIB64 EXECUTE 16 1424 NOUNDEFS DYLDLINK TWOLEVEL WEAK_DEFINES BINDS_TO_WEAK PIE
# Run and dump dynamic loader bindings:
DYLD_PRINT_BINDINGS=1 DYLD_PRINT_TO_FILE=no_bind.txt ./test
Hello World!
Recompile with -bind_at_load. Note still no BINDATLOAD flag:
clang++ test.cpp -o test -Wl,-bind_at_load
otool -vh test
Mach header
magic cputype cpusubtype caps filetype ncmds sizeofcmds flags
MH_MAGIC_64 X86_64 ALL LIB64 EXECUTE 16 1424 NOUNDEFS DYLDLINK TWOLEVEL WEAK_DEFINES BINDS_TO_WEAK PIE
...
DYLD_PRINT_BINDINGS=1 DYLD_PRINT_TO_FILE=bind.txt ./test
Hello World!
If we diff the outputs, you can see that dyld doesn't perform any lazy bindings when the binary is compiled with -bind_at_load, even if the BINDATLOAD flag is not set:
@@ -1,11 +1,27 @@
+dyld: bind: test:0x103EDF030 = libc++.1.dylib:__ZNKSt3__16locale9use_facetERNS0_2idE, *0x103EDF030 = 0x7FFF70C9FA58
+dyld: bind: test:0x103EDF038 = libc++.1.dylib:__ZNKSt3__18ios_base6getlocEv, *0x103EDF038 = 0x7FFF70CA12C2
+dyld: bind: test:0x103EDF068 = libc++.1.dylib:__ZNSt3__113basic_ostreamIcNS_11char_traitsIcEEE6sentryC1ERS3_, *0x103EDF068 = 0x7FFF70CA12B6
+dyld: bind: test:0x103EDF070 = libc++.1.dylib:__ZNSt3__113basic_ostreamIcNS_11char_traitsIcEEE6sentryD1Ev, *0x103EDF070 = 0x7FFF70CA1528
+dyld: bind: test:0x103EDF080 = libc++.1.dylib:__ZNSt3__16localeD1Ev, *0x103EDF080 = 0x7FFF70C9FAE6
<trim>
-dyld: lazy bind: test:0x10D4AC0C8 = libsystem_platform.dylib:_strlen, *0x10D4AC0C8 = 0x7FFF73C5C6E0
-dyld: lazy bind: test:0x10D4AC068 = libc++.1.dylib:__ZNSt3__113basic_ostreamIcNS_11char_traitsIcEEE6sentryC1ERS3_, *0x10D4AC068 = 0x7FFF70CA12B6
-dyld: lazy bind: test:0x10D4AC038 = libc++.1.dylib:__ZNKSt3__18ios_base6getlocEv, *0x10D4AC038 = 0x7FFF70CA12C2
-dyld: lazy bind: test:0x10D4AC030 = libc++.1.dylib:__ZNKSt3__16locale9use_facetERNS0_2idE, *0x10D4AC030 = 0x7FFF70C9FA58
-dyld: lazy bind: test:0x10D4AC080 = libc++.1.dylib:__ZNSt3__16localeD1Ev, *0x10D4AC080 = 0x7FFF70C9FAE6
-dyld: lazy bind: test:0x10D4AC070 = libc++.1.dylib:__ZNSt3__113basic_ostreamIcNS_11char_traitsIcEEE6sentryD1Ev, *0x10D4AC070 = 0x7FFF70CA1528
Note: dyld also has a DYLD_BIND_AT_LAUNCH=1 environment variable that, when set, will force any lazy bindings to be non-lazy:
dyld: forced lazy bind: test:0x10BEC8068 = libc++.1.dylib:__ZNSt3__113basic_ostream
After looking at the dyld source, I can't find any checks for MH_BINDATLOAD. You can see the flags it does check for, such as MH_PIE or MH_BIND_TO_WEAK, here.
It seems that the lazy binding of any symbols depends on whether or not lazy_bind_size from the LC_DYLD_INFO_ONLY load command is > 0, which was mentioned in #17686.
This PR is one of Cory's commits that I've rebased and modified to make it build. I've also included an addition to the security-check.py script to check for the flag.
However, given the above, I'm not entirely sure this patch is the correct approach. If the linker no-longer inserts it, and the dynamic loader doesn't look for it, there might be little benefit to setting it. Or, maybe this is an oversight from Apple and needs some upstream discussion. Looking for some thoughts / Concept ACK/NACK.
One alternate approach we could take is to drop the patch and modify security-check.py to look for lazy_bind_size == 0 in the LC_DYLD_INFO_ONLY load command, using otool -l.
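A rough sketch of what that alternate check could look like (not the actual security-check.py code; the function name and pass/fail reporting are made up): parse otool -l output and require lazy_bind_size to be 0 in the LC_DYLD_INFO_ONLY load command.

import subprocess
import sys

def check_no_lazy_bindings(executable: str) -> bool:
    """Return True if otool -l reports lazy_bind_size == 0 (or no dyld info at all)."""
    out = subprocess.run(["otool", "-l", executable],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        line = line.strip()
        if line.startswith("lazy_bind_size"):
            return int(line.split()[1]) == 0
    return True  # no LC_DYLD_INFO_ONLY entry means nothing to bind lazily

if __name__ == "__main__":
    ok = check_no_lazy_bindings(sys.argv[1])
    print("PASS" if ok else "FAIL: lazy bindings present")
    sys.exit(0 if ok else 1)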
ACKs for top commit: theuni: ACK 5ca90f8b598978437340bb8467f527b9edfb2bbf
Tree-SHA512: 444022ea9d19ed74dd06dc2ab3857a9c23fbc2f6475364e8552d761b712d684b3a7114d144f20de42328d1a99403b48667ba96885121392affb2e05b834b6e1c
Rebate Oracle is a Certificate of Authority (PoAwR consensus specific)
PoAwR modifies PoA by adding a rebate reflux particle accelerator to the flux capacitor, whereas if my calculations are correct, when this baby hits 88 miles per hour... you're gonna see some serious shit. No, for real though, this Rebate Oracle accepts the rebates in a decentralized manner, so as to predict the future DAO implementation, creating a whole new ball game, folks! Decentralized DAO hopping could occur, let's hash it!
This implementation of the Rebate Oracle is a Certificate of Authority
- guardian of the future polls
- creating a pool of rebate (rewards)
- the CA acts as a manager portal between the DAO, the flow of rebates, and the public.
- With the public functions on this CA, wallets can vote on the DAO address
- Voting is restricted to once per wallet; voters must carry out a selfless act and nominate an address in order to cast a vote on the next DAO (must be a contract)
- authorized parties will receive a portion of a rare governance coin, at this time each transfer between contract calls costs luck, and lucky number 7 sets the tone. -- more features not noted... more intel soon, it's time for dinner 🍕