3,183,368 events, 1,523,810 push events, 2,425,197 commit messages, 187,593,401 characters
Fix handling of embedded symbolic links (and history lesson).
The original filesystem release (4.2BSD) had no embedded symlinks. Historically, symbolic links were just a different type of file, so the content of the symbolic link was contained in a single disk block fragment. We observed that most symbolic links were short enough that they could fit in the area of the inode that normally holds the block pointers. So we created embedded symlinks, where the content of the link was held in the inode's pointer area, thus avoiding the need to seek and read a data fragment and reducing the pressure on the block cache. At the time we had only UFS1 with 32-bit block pointers, so the test for a fastlink was:
di_size < (NDADDR + NIADDR) * sizeof(daddr_t)
(where daddr_t would be ufs1_daddr_t today).
When embedded symlinks were added, a spare field in the superblock with a known zero value became fs_maxsymlinklen. New filesystems set this field to (NDADDR + NIADDR) * sizeof(daddr_t). Embedded symlinks were assumed when di_size < fs->fs_maxsymlinklen. Thus filesystems that preceded this change always read from blocks (since fs->fs_maxsymlinklen == 0), while newer ones used embedded symlinks if they fit. Similarly, symlinks created on pre-embedded-symlink filesystems always spill into blocks, while newer ones will embed if they fit.
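For illustration, a minimal compile-ready sketch of that test, with the UFS1 definitions stubbed in (illustrative stand-ins, not the exact kernel source):

#include <stdint.h>

#define NDADDR  12              /* direct block pointers in the inode */
#define NIADDR  3               /* indirect block pointers */
typedef int32_t ufs1_daddr_t;   /* UFS1 32-bit disk block pointer */

/* newfs records the embedded-link threshold in the superblock;
 * pre-change filesystems keep the spare field's known zero value. */
static uint64_t
newfs_maxsymlinklen(void)
{
        return ((NDADDR + NIADDR) * sizeof(ufs1_daddr_t));      /* 60 bytes */
}

/* Non-zero if the link content lives in the inode's pointer area.
 * Old filesystems have fs_maxsymlinklen == 0, so they never match
 * here and always read the link from a data block. */
static int
is_embedded_symlink(uint64_t di_size, uint64_t fs_maxsymlinklen)
{
        return (di_size < fs_maxsymlinklen);
}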
At the same time that the embedded symbolic links were added, the on-disk directory structure was changed splitting the former u_int16_t d_namlen into u_int8_t d_type and u_int8_t d_namlen. Thus fs_maxsymlinklen <= 0 (as used by the OFSFMT() macro) can be used to distinguish old directory formats. In retrospect that should have just been an added flag, but we did not realize we needed to know about that change until it was already in production.
Code was split into ufs/ffs so that the log-structured filesystem could use ufs functionality while doing its own disk layout. This meant that no ffs superblock fields could be used in the ufs code. Thus ffs superblock fields that were needed in ufs code had to be copied to fields in the mount structure. Since ufs_readlink needed to know if a link was embedded, fs_maxsymlinklen gets copied to mnt_maxsymlinklen.
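That copy is a one-liner in the mount path; a sketch with illustrative struct layouts (not the real FreeBSD headers, where the assignment lives in the ffs mount code):

#include <stdint.h>

struct fs    { uint64_t fs_maxsymlinklen;  /* ffs superblock field */ };
struct mount { uint64_t mnt_maxsymlinklen; /* vfs mount field */ };

/* ufs code may only look at the mount structure, so the ffs
 * superblock value is mirrored there at mount time. */
static void
copy_maxsymlinklen(struct mount *mp, const struct fs *fs)
{
        mp->mnt_maxsymlinklen = fs->fs_maxsymlinklen;
}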
The kernel panic that led to making this fix was triggered when a disk error created an inode of type symlink with no allocated data blocks but a large size. When readlink was called, the uiomove was attempted, which segment faulted.
static int
ufs_readlink(ap)
        struct vop_readlink_args /* {
                struct vnode *a_vp;
                struct uio *a_uio;
                struct ucred *a_cred;
        } */ *ap;
{
        struct vnode *vp = ap->a_vp;
        struct inode *ip = VTOI(vp);
        doff_t isize;

        isize = ip->i_size;
        if ((isize < vp->v_mount->mnt_maxsymlinklen) ||
            DIP(ip, i_blocks) == 0) { /* XXX - for old fastlink support */
                return (uiomove(SHORTLINK(ip), isize, ap->a_uio));
        }
        return (VOP_READ(vp, ap->a_uio, 0, ap->a_cred));
}
The second part of the "if" statement, which adds
DIP(ip, i_blocks) == 0) { /* XXX - for old fastlink support */
is problematic. It never appeared in the BSD released by Berkeley because, as noted above, mnt_maxsymlinklen is 0 for old format filesystems, so the code always falls through to the VOP_READ as it should. I had to dig back through `git blame' to find that Rodney Grimes added it as part of ``The big 4.4BSD Lite to FreeBSD 2.0.0 (Development) patch.'' He must have brought it across from an earlier FreeBSD. Unfortunately the source-control logs for FreeBSD up to the merger with the AT&T-blessed 4.4BSD-Lite conversion were destroyed as part of the agreement to let FreeBSD remain unencumbered, so I cannot pinpoint where that line got added on the FreeBSD side.
The one change needed here is that mnt_maxsymlinklen is declared as an `int' and should be changed to be a `u_int64_t'.
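In diff form, the change could look like this (assuming the field lives in struct mount, as the mnt_ prefix suggests; illustrative, not the committed patch):

-        int             mnt_maxsymlinklen;
+        u_int64_t       mnt_maxsymlinklen;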
This discovery led us to check out the code that deletes symbolic links. Specifically
if (vp->v_type == VLNK &&
(ip->i_size < vp->v_mount->mnt_maxsymlinklen ||
datablocks == 0)) {
if (length != 0)
panic("ffs_truncate: partial truncate of symlink");
bzero(SHORTLINK(ip), (u_int)ip->i_size);
ip->i_size = 0;
DIP_SET(ip, i_size, 0);
UFS_INODE_SET_FLAG(ip, IN_SIZEMOD | IN_CHANGE | IN_UPDATE);
if (needextclean)
goto extclean;
return (ffs_update(vp, waitforupdate));
}
Here too, our broken symlink inode with no data blocks allocated and a large size will segment fault, as we are incorrectly using the test that we have no data blocks to decide that it is an embedded symbolic link, and attempting to bzero past the end of the inode. The test for datablocks == 0 is unnecessary, as the test for ip->i_size < vp->v_mount->mnt_maxsymlinklen will do the right thing in all cases.
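A sketch of how the corrected tests could look, assuming the fix is simply to drop the block-count escape hatch in both places (together with widening mnt_maxsymlinklen as noted above); illustrative, not the committed patch:

        /* ufs_readlink(): an embedded symlink is identified by size
         * alone; old-format filesystems have mnt_maxsymlinklen == 0
         * and always fall through to VOP_READ(). */
        if (isize < vp->v_mount->mnt_maxsymlinklen)
                return (uiomove(SHORTLINK(ip), isize, ap->a_uio));
        return (VOP_READ(vp, ap->a_uio, 0, ap->a_cred));

        /* ffs_truncate(): the same test, without datablocks == 0. */
        if (vp->v_type == VLNK &&
            ip->i_size < vp->v_mount->mnt_maxsymlinklen) {
                /* body unchanged */
        }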
The test for datablocks == 0 was added by David Greenman in this commit:
Author: David Greenman [email protected]
Date:   Tue Aug 2 13:51:05 1994 +0000
Completed (hopefully) the kernel support for old style "fastlinks".
Notes:
svn path=/head/; revision=1821
I am guessing that he likely added the incorrect test in the ufs_readlink code earlier as well.
I asked David if he had any recollection of why he made this change. Amazingly, he still had a recollection of why he had made a one-line change more than twenty years ago. And unsurprisingly, it was because he had been stuck between a rock and a hard place.
FreeBSD was up to 1.1.5 before the switch to the 4.4BSD-Lite code base. Prior to that, there were three years of development in all areas of the kernel, including the filesystem code, from the combined set of people including Bill Jolitz, Patchkit contributors, and FreeBSD Project members. The compatibility issue at hand was caused by the FASTLINKS patches from Curt Mayer. In merging in the 4.4BSD-Lite changes David had to find a way to provide compatibility with both the changes that had been made in FreeBSD 1.1.5 and with 4.4BSD-Lite. He felt that these changes would provide compatibility with both systems.
In his words: ``My recollection is that the 'FASTLINKS' symlinks support in FreeBSD-1.x, as implemented by Curt Mayer, worked differently than 4.4BSD. He used a spare field in the inode to duplicately store the length. When the 4.4BSD-Lite merge was done, the optimized symlinks support for existing filesystems (those that were initialized in FreeBSD-1.x) were broken due to the FFS on-disk structure of 4.4BSD-Lite differing from FreeBSD-1.x. My commit was needed to restore the backward compatibility with FreeBSD-1.x filesystems. I think it was the best that could be done in the somewhat urgent circumstances of the post Berkeley-USL settlement. Also, regarding Rod's massive commit with little explanation, some context: John Dyson and I did the initial re-port of the 4.4BSD-Lite kernel to the 386 platform in just 10 days. It was by far the most intense hacking effort of my life. In addition to the porting of tons of FreeBSD-1 code, I think we wrote more than 30,000 lines of new code in that time to deal with the missing pieces and architectural changes of 4.4BSD-Lite. We didn't make many notes along the way. There was a lot of pressure to get something out to the rest of the developer community as fast as possible, so detailed discrete commits didn't happen - it all came as a giant wad, which is why Rod's commit message was worded the way it was.''
Reported by:    Chuck Silvers
Tested by:      Chuck Silvers
History by:     David Greenman Lawrence
MFC after:      1 week
Sponsored by:   Netflix
Balance Tweaks 5/16/21
• Added Bedroom Star, Unwarranted Citizen's Arrest, Weaponized Disease, and Imparter of Wisdom.
• Most Vitality cards have had their attack power reduced by 1.
• Most Hacker cards have had their attack power increased by 1.
• Take Fate in Your Own Hands' cost has been reduced from 3 Emotion Emotion to 2 Emotion Emotion.
• Sadness's cost has been reduced from 4 emotion emotion emotion to 1 emotion emotion emotion.
• Samuel's Banked Favors has been replaced with Samuel, Falsify their Orders.
• Daring Recruit now has Sluggish.
• Hack's cost has been reduced by 1 generic.
• Rampager's Fury has had its cost increased by 2 generic.
• Standard Issue Big Gun once again has tradeable, but now doesn't automatically attach itself to a combatant when it enters the battlefield (so its tradeable ability must be used to attach it).
• Lust has been renamed to Infatuation (continuing the de-maturifying process); it also now forces you to reveal the chosen combatants.
• The Musician cards which only bring combatant-related keywords/effects now ignore non-combatant cards in your deck.
• Many of the Multi-energy-type cards have been buffed.
- Reaction Augmenter now gives +2/+2 instead of +1/+1.
- Torment the Opponent now gives +2/+3 and -3/-2 instead of +1/+1 and -1/-1.
- High Kick now gives +4/-2 instead of +2/+0.
- Angel's Katana gives +3/+0 instead of +1/+0. (Not multi-typed but related)
- "Realizing I'm an Imitation of Humans" has had +0/+1 and +0/+2 added to its lest of choices.
- Vocal Training now has Append to Card instead of Append to Combatant.
- "Gonna Jump Now and be Free" has response.
- Musician's Laptop has tradeable.
- Cocaine Induced Inspiration has been changed to work like other drug cards, it also allows you to pay more energy to apply its effect to more cards in your deck.
- Street Performer's effect is now a may instead of a must.
- Amp up the Volume now has Advantageous.
Update bot.py
evened the odds of yes, no, and neutral responses. I need a girlfriend, fuck.
[SEMI-MODULAR] Adds more filtering options to the scanner gates (#5728)
- adds more options to the scanner gates
you can filter by more species, and you can now filter based on gender
-
WHAT DO YOU MEEEEEEAAAAANNN
-
BITCH
-
MORE LINTERS SHIT
-
Please Baby Just Work For Me
-
:slight_smile:
-
I Love Programming
-
variable name change
Getting back to work
- Wrote some PPU tests
- Realized that PPU performance was poopoo, fixed with some ugly Rc/RefCell hacks
- moved things around
- renamed the ""simple"" ppu that is not so simple anymore
- gameboy module is still completely fucked, hopefully i can get everything in a mostly working state soon
fixed SHHITS I HATE MYSELD Y ZANT TO FUCKUBNG DIE FF+IJR#PI@HJB EFHJWKLBJEHGCEWKGULEWKGUYWEKGUCEWKGUC#B<NM WEHKJBJK FUCK
Licence WTFPL
DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
"10:10am. I've had difficulty falling asleep last night as my will is starting to concentrate. Stacking GANs independent as modules is my biggest hope for the future. I should be feeding them data and nurturing them.
The duality gap algo from the GMN paper for stabilizing their training is trustworthy. FID scores are sensitive to mode collapse so I should be able to trust them.
Before, stacking independent GAN modules would have been suicide, as their instability would compound; but if they are stable, they can form the foundation for an actual learning system.
If I passed them sequences, they would learn to predict the future.
The things I am missing for full AGI are some of the things the brain does. Maybe there will be different models better at pattern detection. Maybe those fast/slow weight ideas need revisiting.
But for the first time the foundations are now here. Long term memory formation can be induced with temporal bottlenecks and the saliency trick I've developed.
https://people.idsia.ch/~juergen/fast-weight-programmer-1991-transformer.html
Oh, he says that 2021 has new stuff.
https://arxiv.org/abs/2102.11174 Linear Transformers Are Secretly Fast Weight Memory Systems
Maybe attention is really all I need.
Recent work proposed “linear Transformers” with constant size memory and time complexity linear in sequence length (Katharopoulos et al., 2020; Choromanski et al., 2021; Shen et al., 2018).
Huh they are not n^2 in sequence length? This is interesting!
Therefore, we introduce an improved update rule inspired by recent work on fast weight memories (Schlag et al., 2021).
https://arxiv.org/abs/2011.07831 Learning associative inference using fast weight memory
Oh, Schlag is the first author of both papers.
10:50am. Had to take a break. For the morning my focus will be on these two papers. Without a doubt, the answer to the brain's short term memory questions is an architecture like this.
If I can provide an answer to the issues Simon Thorpe posed, I will have everything I need to scale my ideas to something much higher, even if the hardware is not there.
The computational insufficiency of the present does not matter. What matters is that my ideas will give me a clear path of advancement.
11:20am. The first paper was interesting, although propagating through the weights will blow up the memory requirements. Let me go for the second paper which uses LSTMs + fast weights. Something like that should be the answer.
Lastly, we also evaluate Ba’s Fast Weights which attend to the recent past (JBFW; Ba et al. (2016a)) but were unable to find hyperparameters that converged.
Oh, lol.
In our experiments, the LSTM-based agent requires more episodes, a bigger network, and eventually overfit to the training graphs. The FWM-based agent however trains faster and generalises to randomly sampled graphs.
The FWM agent (blue) has a slow LSTM with 32 hidden units and a fast weight memory of size 16 × 16². We compare to LSTM agents with different sized hidden states. The largest LSTM has 4096 hidden units (red), which roughly matches the number of temporal variables of the FWM. The FWM has 14k trainable weights, which is by far the lowest. The largest LSTM has 67.4M weights, which is roughly 4814 times more than the FWM. The relative factor of each LSTM is added to the legend. All LSTMs take longer to train and eventually overfit on the training data. Due to the overfitting, the LSTM does not have to explore, which results in a higher total reward on training environments but a lower total reward on test environments.
If I am going to use an RNN, I should definitely use one that employs fast weights.
They use a fairly small weight matrix for the memories. That makes sense. They make a point that the human brain has a small working capacity as well.
11:45am. I don't know. It does not really matter at this stage.
Some kind of memory augmented RNN will come out in force in the future and become dominant, similar to how transformers are strong now. On neurochips, RNNs will be more efficient than feedforward nets whereas the story is the opposite on the GPUs.
11:50am. I am going to stop here. Enough of these papers. It is time for programming.
The way to advance in life is to practice agent cultivation. I might not have all the answers yet, but GAN training for prediction and my module ideas will stick around.
I am going to make a small RL agent that doesn't even do unsupervised learning, but past that I am going to do my best to make agents that do. I will scale up. The modules I invented and the GAN duality gap method are what is going to make the whole thing scalable. I will believe in it.
The power of the agents will be my own power.
All I have to do to reach the answer at the end of the path is to build. It is time that I stop hesitating.
11:55am. I'll implement the uniform player after the breakfast."
shitty Eart_3 explosion
Planetary devastation explodes after sucking in rocks. It also deals damage in a certain radius, and deals damage when exploding.
bitches get stiches im so fucking tired
This is dash or some shit i dont care anymore
Added Testlog
Consider Changing - I simply just used the working code given by Pearson, honestly their code kinda sucks, do what you will but this repo just does what pearson expects.
Add with agony
This commit has been a learning experience: target can become painfully large, and git commits should be kept track of - especially with ggez!
- Add basic code skeleton
- Add TODO(s) for later
- Add ggez import
- Remove evil target folder
- Add one (1) sleepless night and suffering
This is a testing version for RDB.
RDB wasn't stable, and it was experimental. This time we need to push RDB forward and make it stable. You can think of RDB as a stupid but super-lightweight load balancer. RDB will never be able to distribute tasks among CPUs as wisely as the CFS load balancer does. However, CFS needs many stats and much task accounting in order to choose wisely; RDB doesn't. So RDB shines when there is simple load balancing to do. It makes its decisions based on the interactivity score (IS) that is already used in CacULE, so there is no stats-counting overhead.
The previous RDB worked well in certain situations, and many users didn't like it because it didn't work well for them; however, others had a good experience with it. Personally, RDB was working great on my OpenSuse Tumbleweed/Leap, but when I tried it on Ubuntu it was a disaster. So these new changes are made to make RDB stable on most OSes and on different configs.
The first change I made, which makes RDB work on Ubuntu as well as it did on OpenSuse, is changing try_pull_any in active_balance to try_push_any. This change could negatively affect RDB performance for users where the previous RDB was good on their machines. Therefore, this change must be tested. I have added 4 sysctls for RDB testing, and try_push_any is included in the testing so testers can enable/disable it. Below is an explanation of each sysctl.
rdb_try_push_any_enable
Purpose: To choose between try_pull_any and try_push_any.
Possible values (default 0):
0: RDB uses try_pull_any, the same as the old version
1: run try_push_any instead of try_pull_any
I recommend testing this first.
rdb_balance_guard_ms
Purpose: trigger_load_balance runs on every tick. For high HZ values, the load balancing could be overwhelming. RDB load balancing includes locking, which can reduce performance. The balance guard helps avoid running the load balance on every tick; for example, rdb_balance_guard_ms=3 will only run the load balance every 3ms. Setting rdb_balance_guard_ms depends on HZ. If you want the load balancer to run every 2ms while HZ=500, then it is not needed and better to disable rdb_balance_guard_ms, since 500HZ already gives 2ms ticks (1000ms/500HZ = 2ms). However, if you have 1000HZ and want to avoid the load balancer running every 1ms, you could set rdb_balance_guard_ms=4 to make the load balancer run every 4ms. Lower rdb_balance_guard_ms values (or 0 to disable) make sure tasks are balanced ASAP, but at the cost of locking/blocking time. Higher rdb_balance_guard_ms values relax the balancing locking, but at the cost of an imbalanced workload for that period of time (e.g. with rdb_balance_guard_ms=100 there will be no balancing for 100ms, except for newidle_balance, which is not affected by rdb_balance_guard_ms).
Possible values (default 3):
0: disable
non-0: enable with the value set
I recommend first setting this to 0, then testing rdb_try_push_any_enable. After you choose which is better (push or pull), then try tuning rdb_balance_guard_ms.
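As a rough illustration of the guard idea (illustrative names and placement, not the actual CacULE patch), it can be a simple time check in front of the balancer:

#include <linux/jiffies.h>
#include <linux/types.h>

/* Run the RDB balancer only if at least rdb_balance_guard_ms have
 * elapsed since the last run; 0 disables the guard entirely. */
static unsigned long rdb_last_balance;  /* jiffies at last balance */

static bool rdb_balance_guard_allows(unsigned int rdb_balance_guard_ms)
{
        if (rdb_balance_guard_ms == 0)
                return true;    /* guard disabled: balance on every tick */

        if (time_after_eq(jiffies, rdb_last_balance +
            msecs_to_jiffies(rdb_balance_guard_ms))) {
                rdb_last_balance = jiffies;
                return true;    /* interval elapsed: run the balancer */
        }
        return false;           /* too soon: skip this tick */
}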
scale_down_hz_value
Purpose: To reduce the calling of the following functions to the specified HZ if the runqueue (rq) has only 1 runnable task:
- rq_lock
- arch_scale_thermal_pressure
- update_thermal_load_avg
- task_tick
- calc_global_load_tick
- psi_task_tick
- perf_event_task_tick
- trigger_load_balance
Notice that trigger_load_balance is also controlled by rdb_balance_guard_ms. scale_down_hz_value doesn't scale down the whole HZ, nor does it affect interrupts. It is just an if statement to return early from calling the above functions (see core.c:scheduler_tick()). Since the RDB load balancer is simple, we can actually mess with scheduler_tick hoping to increase performance. There is another reason to consider this feature in case try_push_any_enable is enabled.
For example, suppose we have cpu0 and cpu1, where cpu0 has 3 tasks and cpu1 has only 1 task. Assuming scale_down_hz_value=1, cpu0 will not skip any tick since it has more than 1 task (cpu0 won't be affected by scale_down_hz_value); however, cpu1 is affected because it has only 1 task in its rq. cpu1 will call the above functions only once per second, since scale_down_hz_value=1 means 1HZ = 1 call/s. Now cpu1 is not going to call trigger_load_balance during that second to pull tasks from cpu0. That's why I believe try_push_any_enable=1 works together with scale_down_hz_value: cpu0 can push tasks to cpu1 while cpu1's HZ is scaled down. The benefits are less locking time, since cpu1 doesn't bother other cpus asking to pull tasks, and cpu1 saves more time by not calling the above functions, since it has only 1 task in its rq. The feature runs as follows: assume HZ=500; if scale_down_hz_value=100, then 500 / 100 = 5, so the above functions will be skipped 5 - 1 = 4 times, i.e. they will be called once every 5 ticks. Setting scale_down_hz_value=500 is the same as scale_down_hz_value=0, since 500 / 500 = 1, and 1 - 1 = 0, so no skip.
Notice that this is integer division! So the following values when HZ=500 are all the same:
scale_down_hz_value=300 → 500/300 = 1.67 → 1
scale_down_hz_value=400 → 500/400 = 1.25 → 1
scale_down_hz_value=500 → 500/500 = 1
However, scale_down_hz_value=250 → 500/250 = 2.
So you need to apply the integer division first, then divide HZ/(skip) to get the actual HZ. For example, you might assume scale_down_hz_value=84 would mean 1000ms/84HZ = 11.9ms, but that is incorrect: 500HZ/84 = 5.95, and taking the floor gives 5 skips; then 500/5 = 100, so you are actually scaling down to 100HZ, i.e. 10ms.
Possible values (default 0):
0: feature disabled
1: tick per second
2-2000: scaled-down HZ based on the original HZ and integer division
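To make the integer-division pitfall concrete, here is a small self-contained sketch of the arithmetic (illustrative variable names, not kernel code):

#include <stdio.h>

int main(void)
{
        int hz = 500;                 /* kernel tick rate */
        int scale_down_hz_value = 84; /* requested scaled-down rate */

        int skip = hz / scale_down_hz_value;      /* 500/84 = 5 (floor) */
        int effective_hz = skip ? hz / skip : hz; /* 500/5 = 100HZ */

        /* prints: skip=5 effective_hz=100 (i.e. one call every 10ms) */
        printf("skip=%d effective_hz=%d\n", skip, effective_hz);
        return 0;
}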
average_vruntime_enable
Purpose: RDB disables autogroup, which can be a problem because sometimes autogroup provides better interactivity, since it gathers tasks into groups and counts their vruntime as one entity. I have made something similar to autogroups, where update_curr updates all parents with delta_fair (func. update_parents()). Also, normalize_lifetime applies the vruntime diff to all parents. This is used in calc_interactivity, where instead of taking cn.vruntime we take the average vruntime from the task up to its highest root parent (func. average_vruntime). Since every parent/task holds the total vruntime of its children (due to update_curr updating parents), the average taken for a task accounts for whether the task belongs to a very busy group (based on the fork hierarchy) or not. The task will have a higher vruntime average if it is a child of a busy group (going through the path to the root parent).
Example: Let’s assume the following task tree:
systemd (10)
├── calculator (1)
├── fakeroot (6)
│   ├── make1 (3)
│   ├── make2 (1)
│   ├── make3 (0)
│   └── make4 (2)
└── sh (3)
    └── fish (3)
        └── cat (3)
These tasks could be distributed over multiple cpus. Let's assume the two tasks cat and make3 are on the same rq. cat has IS=4 while make3's IS=0. With average_vruntime_enable=0, make3 will run, since its IS is less than cat's IS. However, make3 is a task in a group doing one job, which is compiling: 4 makes are doing the compiling, while cat is the only interactive task, competing with the other 4 tasks. With average_vruntime_enable=1, the average IS for cat and make3 is calculated as follows:
make3 = (0+6+10)/3 = 16/3 = 5.3
(fakeroot = 6 since it is the sum of its children; the same goes for the other parents.)
cat = (3+3+3+10)/4 = 19/4 = 4.75
So cat will be picked to run instead of make3.
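A minimal sketch of that averaging, assuming the parent-walk described above (illustrative types and names, not the actual CacULE code):

#include <stddef.h>

/* Each node's vruntime already includes the sum of its children,
 * because update_curr() propagates delta_fair to all parents. */
struct task_node {
        unsigned long long vruntime;
        struct task_node *parent;       /* NULL at the root */
};

/* t is assumed non-NULL (a task always exists in the tree). */
static unsigned long long
average_vruntime(const struct task_node *t)
{
        unsigned long long sum = 0;
        unsigned int levels = 0;

        /* Walk from the task up to the highest root parent. */
        for (; t != NULL; t = t->parent) {
                sum += t->vruntime;
                levels++;
        }
        /* e.g. make3 above: (0 + 6 + 10) / 3 (integer math here) */
        return (sum / levels);
}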
Potential for an improper decision: Assume that cat and make3 are the only 2 tasks on cpu0. make3 could be starved because its group is doing a great job on cpu1! So it would be better to consider averaging with respect to cpus too. I will try to enhance this feature in the future, but for now let's test and see if this feature provides any improvement.
Note while testing average_vruntime_enable: Moving from average_vruntime_enable=0 to average_vruntime_enable=1 is fine, since parents start with 0 updates. However, if after testing with average_vruntime_enable=1 you turn average_vruntime_enable back to 0, there will be an issue: most grouped tasks had been restricted to their group, and you have suddenly opened the freedom to all tasks. Most likely, compiling with average_vruntime_enable=1 and then switching to average_vruntime_enable=0 mid-compile will cause a system slowdown. So it is not recommended, when testing, to switch average_vruntime_enable from 1 to 0; I recommend a reboot instead. I don't want you to draw incorrect conclusions due to this 1-to-0 switch. The safest way is to test with average_vruntime_enable=0, then do:
sysctl -w kernel.average_vruntime_enable=1 | tee -a /etc/sysctl.conf
and reboot to start a fresh test with average_vruntime_enable=1
Lastly, the sched_interactivity_threshold value can also affect RDB behavior. You can tune this after you have tuned the RDB values.
Please test which values give better responsiveness and performance. And please compare your best-tuned RDB with the previous RDB, and also compare it with CacULE+autogroup.
Please take your time. This will be a very important test to make RDB stable and make it usable.
Thank you
Added 6 more missions. Added Pumpling, Magic Candles and Blessed Children as enemies (and a boss). Remade the Wolf enemy to move in the shape of an L, like a chess Knight (Horse). Made enemy health bars red; enemies without mana now have no mana bars. Fixed say centering. Added many more trash items as loot. Added gold as loot (it now displays as an item you can loot). Added the Fields background. Added the Haymaker and Fox Companion spells for the Knight and Ranger respectively. Items now display their flavor text, and their title centering is fixed.
Oh god
Yeah this happens if the devteam doesn't think of everything. Stupid me, why didn't I remember that. Oh well, at least I fixed it now, but I really shouldn't be such an incompetent variant developer...
[codegen] split out backend-specific information from NativeFunction in the model (#57361)
Summary: Pull Request resolved: pytorch/pytorch#57361
Data model change in the codegen, which splits backend-specific information out of NativeFunction.
Currently in the codegen, native_functions.yaml has backend-specific information about each operator that is encoded directly into the data model, in the NativeFunction object. That's reasonable, since native_functions.yaml is the source of truth for information about an operator, and the data model encodes that information into types.
Now that external backends can use the codegen though, that information is technically incomplete/inaccurate. In another PR, I tried patching the information on the NativeFunction object with the additional external information, by updating the dispatch entry to contain the external backend kernel name and dispatch key.
Instead, this PR tries to split out that information. The NativeFunction class contains all information about an operator from native_functions.yaml that's backend-independent and is known never to change regardless of what extra information backends provide. We also build up a backend "index", which is basically a mapping from [backend] -> [backend-specific-metadata]. Reading in an external backend yaml just involves updating that index with the new backend.
There were a few places where NativeFunction used the dispatch table directly, which I encoded as properties directly on the NativeFunction object (e.g. is_abstract). They were mostly around whether or not the operator has a composite kernel, which isn't something that's going to change for any external backends.
This has a few advantages:
- We can more easily re-use the existing logic in native_function.py and register_dispatch_key.py for both native and external backends, since they both involve a NativeFunction plus a particular backend index.
- The data in the data model will be the same regardless of how the codegen is run. Running the codegen with a new external backend doesn't change the data inside of NativeFunction or an existing backend index; it just adds a new index for that backend.
- There are several codegen areas that don't care about backend-specific information, mostly the tracing and autograd codegen. We can reason about the codegen there more easily, knowing that backend-specific info is entirely uninvolved.
An alternative to this split would be to augment the NativeFunction objects with external backend information at the time that we create them. So the external codegen could read both native_functions.yaml and the external backend's yaml at the same time, and construct a NativeFunction object with a full dispatch table (including the XLA entry) and the correct setting of structured (taking into account both yamls). One disadvantage to this approach is that NativeFunction objects would then contain different data depending on how you ran the codegen, and you would have to make sure that any changes to the codegen can properly handle all the different variants.
Removed 3 classes, which were used by the external codegen:
- ExternalBackendFunction
- ExternalBackendFunctionsGroup
- ExternalBackendMetadata
And added two new ones:
- BackendIndex
- BackendMetadata
BackendIndex contains any info that's specific to that backend, plus a mapping from operator names to backend-specific metadata about the operator. One example of backend-specific info that's not operator-dependent is the fact that XLA prefers to implement functional kernels instead of out kernels (so when they eventually mark an op as structured, they're going to mark the functional op and not the out op).
BackendMetadata contains info specific to an (operator, backend) pair. Right now, that's just (a) the name of the kernel, and (b) whether or not that operator is structured.
I wanted to get this PR up earlier so I could get feedback, but there are a few things I want to call out:
Dealing with structured.
This PR separates out the notion of structured into two bits of information:
- Does [operator] have a meta() function? This is backend-agnostic, and is represented by the structured property on NativeFunction, same as before. This is used, e.g., to decide what signatures to add to MetaFunctions.h.
- Does [operator, backend] have an impl() function? This is backend-dependent; even though technically all in-tree backends are forced to write impl() functions for an operator when we port the op to structured in native_functions.yaml, out-of-tree backends can decide to opt in independently. This is represented as a property on BackendMetadata, and is used in most other cases, e.g. in RegisterDispatchKey when we're deciding whether or not to gen a structured or unstructured wrapper.
I also baked is_structured_dispatch_key directly into each BackendIndex. So for operators marked "structured" in native_functions.yaml, their corresponding CPU/CUDA BackendIndex entries will be marked structured, and all others (except potentially external backends) will not.
I ended up trying to deal with structured in this change since it's technically backend-dependent (XLA can opt kernels into structured separately from in-tree ops), but that may have been too ambitious: it's technically not relevant until we actually add support for structured external kernels. If it's not clear that this is the right path for dealing with structured and we want to push that off, I'm fine with backing out the bits of this PR that make structured backend-dependent. I don't see anything too controversial related to structured in the change, but I tried to call out any such areas in the comments.
Localizing the fact that external backends follow the Dispatcher convention.
Another thing that's sort of backend-specific that I didn't totally address in this PR is the fact that in-tree backends follow the Native API while external backends follow the Dispatcher API. I painted over that in native_functions.py by adding a helper, kernel_signature, that takes in a native function and gives you the "correct" signature for the specified backend: NativeSignature for in-tree backends, and DispatcherSignature for out-of-tree backends. In order to make that fully usable though, we'll need NativeSignature and DispatcherSignature to have matching interfaces. I didn't bother with that in this PR, which is why gen_external_aten_fallbacks.py still has a bunch of direct references to the dispatcher API. I'm thinking of adding it in a later PR, but wanted to see if anyone has other opinions.
Maybe is_external() shouldn't even be a property on the BackendMetadata, and anything the codegen does that requires asking for that information should just be better abstracted away.
Thoughts on the BackendIndex / BackendMetadata breakdown.
One thing that's annoying right now is that to query for various pieces of metadata, you call helper functions like backend_index.structured(f), which queries that particular backend and tells you if that specific NativeFunctionGroup is structured for that backend. It has to return an Optional[bool] though, since you have to handle the case where that operator doesn't have a kernel for that backend at all. So users of those helpers end up with a bunch of Optionals that they need to unpack, even if they know at some point that the result isn't None. I think it would be easier instead to just store the NativeFunction object as a field directly on the BackendMetadata. Curious if there are any other opinions on a better way to model it, though.
Test Plan: Imported from OSS
Reviewed By: navahgar
Differential Revision: D28474362
Pulled By: bdhirsh
fbshipit-source-id: 41a00821acf172467d764cb41e771e096542f661
oh vey
new stuff the Yellow screen stage here!
propose Guruku Tersayang but it's Funkin. what if Boyfriend, Girlfriend, and Pico got a brand new university and is actually comfortable to go through? damnit! well, uh, so Pico voice effect I've learnt from KawaiSprite and bbpanzu video https://youtu.be/ODFOpoXjzaA here has trouble. NO, it wasn't anything. but here the effect Pico has is only good for rapping. and unfortunately is not compatible with vocal lexical song like most, if not rapping. yeah idk what to do... I'm sorry Tom Fulp, ninja, GenoX, bbpanzu, and uh... etc. my result is terrible.
THE KICKSTARTER GOING TO CLOSE IN HOURS LEFT https://www.kickstarter.com/projects/funkin/friday-night-funkin-the-full-ass-game it didn't reach the max goal 😭😭, but hey! we got them! 😁😁😁
Logistic Regression and Decision Tree Used
AllLife Bank is a US bank that has a growing customer base. The majority of these customers are liability customers (depositors) with varying sizes of deposits. The number of customers who are also borrowers (asset customers) is quite small, and the bank is interested in expanding this base rapidly to bring in more loan business and in the process, earn more through the interest on loans. In particular, the management wants to explore ways of converting its liability customers to personal loan customers (while retaining them as depositors).
A campaign that the bank ran last year for liability customers showed a healthy conversion rate of over 9% success. This has encouraged the retail marketing department to devise campaigns with better target marketing to increase the success ratio.
You as a Data scientist at AllLife bank have to build a model that will help the marketing department to identify the potential customers who have a higher probability of purchasing the loan.
Objectives:
- To predict whether a liability customer will buy a personal loan or not.
- Which variables are most significant.
- Which segment of customers should be targeted more.
Data Dictionary:
ID: Customer ID
Age: Customer's age in completed years
Experience: #years of professional experience
Income: Annual income of the customer (in thousand dollars)
ZIP Code: Home address ZIP code
Family: Family size of the customer
CCAvg: Average spending on credit cards per month (in thousand dollars)
Education: Education level. 1: Undergrad; 2: Graduate; 3: Advanced/Professional
Mortgage: Value of house mortgage, if any (in thousand dollars)
Personal_Loan: Did this customer accept the personal loan offered in the last campaign?
Securities_Account: Does the customer have a securities account with the bank?
CD_Account: Does the customer have a certificate of deposit (CD) account with the bank?
Online: Does the customer use internet banking facilities?
CreditCard: Does the customer use a credit card issued by any other bank (excluding AllLife Bank)?
Trials, Bastards, Knights and the Judge
+New Stuff
-Rulers with a High Court and Grand Judge can hold trials for those imprisoned
-Acknowledge hidden bastard children (fixed and actually working this time)
-Feud events between you and other rulers
-Grand Judge Events
-Mech Suit battle events simplified to one file
-rulers that aren't your vassals will now pay back their loans (oops)
-Knight stuff
-castles and cities should start with more buildings already built
-Lots of localisation stuff
-minor cultural stuff
-event pic stuff
-more stuff for Blessed Singularity religion
Created Text For URL [www.timeslive.co.za/news/south-africa/2021-05-17-man-who-hacked-girlfriend-and-children-to-death-gets-six-life-terms/]
omg github you are horrible and i hate you
you should feel bad rn
Update empty.json
{ "24-Points star": "", "A problem occurred while removing undo history. It": "", "About": "", "Active": "", "Add Borders": "", "Aden": "", "Advanced": "", "All": "", "Alpha": "", "Alpha:": "", "Anonymous": "", "Anti aliasing": "", "Application markup may have changed,": "", "Arial": "", "Arrow": "", "ArrowDown": "", "ArrowLeft": "", "ArrowRight": "", "ArrowUp": "", "Author:": "", "Auto Adjust Colors": "Auto Adjust Colours", "Auto Kerning": "", "Average:": "", "Backspace": "", "Base": "", "Basic": "", "Black and White": "", "Blue": "", "Blue channel:": "", "Blueprint": "", "Blur Radius:": "", "Blur Tool": "", "Blur power:": "", "Borders": "", "Bottom": "", "Bottom to Top": "", "Bounds:": "", "Box": "", "Box Blur": "", "Box blur": "", "Brightness": "", "Brightness:": "", "Bulge/Pinch Tool": "", "Burn": "", "Can not animate 1 layer.": "I can not animate just 1 layer, you need more layers.", "Can not find previous layer.": "I can not find the previous layer.", "Cancel": "", "Canvas Size": "", "Canvas size": "", "Center": "", "Center x:": "", "Center y:": "", "Center:": "", "Change Composition": "", "Change Layer Details": "", "Change Opacity": "", "Channel:": "", "Circle": "", "Clarendon": "", "Clear": "", "Clear Selection": "", "Clone Tool": "", "Clone count:": "", "Clone tool disabled for resized image. Sorry.": "I am sorry, the Clone tool is disabled for resized images.", "Cloned edges": "", "Color #": "Colour #", "Color Corrections": "Colour Corrections", "Color Palette": "Colour Palette", "Color Zoom": "Colour Zoom", "Color alpha value can not be zero.": "The Colour alpha value can not be set to zero.", "Color to Alpha": "Colour to Alpha", "Color zoom": "Colour zoom", "Color:": "Colour", "Colors": "Colours", "Colors:": "Colours", "Common Filters": "", "Composition": "", "Composition:": "", "Content Fill": "", "Contrast": "", "Contrast:": "", "Convert to Raster": "Convert to a Raster", "Copy Selection": "", "Copy to Clipboard": "", "Copy:": "", "Courier": "", "Crop Tool": "", "Crop on rotated layer is not supported. Convert it to raster to continue.": "Sorry you can not Crop a rotated layer. Either remove the rotation, crop it, then re-rotate or convert it to a raster to continue.", "Ctrl+A": "", "Ctrl+C": "", "Ctrl+V": "", "Ctrl+Y": "", "Ctrl+Z": "", "Ctrl-P": "", "Current": "", "Current Color Preview": "Current Colour Preview", "Custom": "", "Data URL": "", "Data URL:": "", "Decrease": "", "Decrease Color Depth": "Decrease Colour Depth", "Degree:": "", "Del": "", "Delete": "", "Delete Selection": "", "Denoise": "", "Desaturate Tool": "", "Description:": "", "Deutsch": "", "Differences": "", "Differences Down": "", "Direction:": "", "Dither": "", "Dithering:": "", "Dominant color:": "Dominant color:", "Dot Screen": "", "Down": "", "Duplicate": "", "Duplicate Layer": "", "Dynamic": "", "Edge": "", "Edit": "", "Edit text...": "", "Effect browser": "", "Effects": "", "Effects browser": "", "Email:": "", "Emboss": "", "Empty selection": "", "Empty selection or type not image.": "Your selection is Empty or the asset needs converting to a raster as it is not an image.", "Enable guides:": "", "Enable snap:": "", "End": "", "English": "", "Enrich": "", "Enter": "", "Erase Tool": "", "Erase on rotate object is disabled. Sorry.": "I am sorry you cannot Erase on a rotated object. 
Remove the rotation, use the eraser, then reapply the rotation.", "Error": "", "Error connecting to service.": "", "Error loading the list of fonts from Google.": "", "Error registering service worker": "", "Error: can not find filter:": "", "Error: can not find layer with id:": "", "Error: missing details event target": "", "Error: unknown layer type:": "", "Esc": "", "Escape": "", "Español": "", "Exit confirmation:": "", "Expand edges": "", "Exponent:": "", "Export": "", "External": "", "Factor:": "", "File": "", "File name:": "", "File size:": "", "Fill": "", "Fill Tool": "", "Fit": "", "Fit Window": "", "Flatten Image": "", "Flip": "", "FloydSteinberg-serpentine": "", "Font": "", "Français": "", "Full HD, 1080p": "", "Full Screen": "", "Full layers data": "", "Gap:": "", "Gaussian Blur": "", "Gif delay:": "", "Gingham": "", "GitHub:": "", "Gradient Radius:": "", "Grains": "", "Graphics Interchange Format": "", "Gray": "", "Grayscale": "", "Greek": "", "Green": "", "Green channel:": "", "Greyscale:": "", "Grid": "", "Grid on/off": "", "Guides": "", "Guides enabled.": "", "H Radius:": "", "H. Align:": "", "Heatmap": "", "Height (%):": "", "Height:": "", "Help": "", "Helvetica": "", "Hermite": "", "Hex": "", "Histogram": "", "Histogram:": "", "Home": "", "Horizontal": "", "Horizontal Alignment": "", "Horizontal blur:": "", "Horizontal:": "", "Hue": "", "Hue Rotate": "", "Hue:": "", "Image": "", "Image data with multi-layers. Can be opened using miniPaint -": "", "Impact": "", "Increase": "", "Information": "", "Inkwell": "", "Insert": "", "Insert guides": "", "Insert:": "", "Instagram Filters": "", "Invalid Hex Code": "", "Italiano": "", "JPG/JPEG Format": "", "Kerning:": "", "Key-Points": "", "KeyU": "", "Keyboard Shortcuts": "", "Keyword:": "", "Lanczos": "", "Language": "", "Last modified": "", "Layer": "", "Layer details": "", "Layer is not compatible with resize": "This layer is not compatible with the resize feature.", "Layer is vector, convert it to raster to apply this tool.": "This layer is vector, convert it to a raster to apply this tool.", "Layers": "", "Layers:": "", "Left": "", "Left to Right": "", "Level:": "", "Levels:": "", "Lietuvių": "", "Lo-fi": "", "Luminance:": "", "Luminosity": "", "Magic Eraser Tool": "", "Merge Down": "", "Merge Layers": "", "Merged": "", "Metrics": "", "Middle": "", "Missing at least 1 size parameter.": "You have at least 1 size parameter missing.", "Missing permissions to write to Clipboard.cc": "", "Mode:": "", "Module function not found.": "", "Modules class not found:": "", "Monospace": "", "Mosaic": "", "Mouse:": "", "Move": "", "Move Layer": "", "Move down": "", "Move up": "", "Name:": "", "Needs at least 2 layers.": "You needs at least 2 layers to do this.", "Negative": "", "New": "", "New Brush Layer": "", "New Ellipse Layer": "", "New File": "", "New Gradient Layer": "", "New Layer": "", "New Line Layer": "", "New Pencil Layer": "", "New Rectangle Layer": "", "New Text Layer": "", "New file": "", "New from Selection": "", "New layer": "", "New width can not be smaller then current width": "The new width can not be smaller then current width.", "Night Vision": "", "None": "", "Nothing is selected.": "You have not selected anything. 
Use the selection tool to select an asset.", "Offset X:": "", "Offset Y:": "", "Oil": "", "Ok": "", "Online image editor.": "", "Opacity": "", "Opacity:": "", "Open": "", "Open Data URL": "", "Open Directory": "", "Open File": "", "Open File Data URL": "", "Open File URL": "", "Open File Webcam": "", "Open Image": "", "Open JSON File": "", "Open Test Template": "", "Open URL": "", "Open data URL": "", "Open from Webcam": "", "Original Size": "", "PNGTOSVG - Convert Image to SVG": "", "PageDown": "", "PageUp": "", "Palette": "", "Parameter #1:": "", "Parameter #2:": "", "Paste": "", "Pencil": "", "Percentage:": "", "Pixels:": "", "Placeholder comment for color channels": "Placeholder comment for colour channels", "Placeholder comment for color picker": "Placeholder comment for colour picker", "Placeholder comment for color swatches": "Placeholder comment for colour swatches", "Portable Network Graphics": "", "Português": "", "Position:": "", "Power:": "", "Preview": "", "Previous": "", "Previous layer must be image, convert it to raster to apply this tool.": "The previous layer must be image, convert it to a raster to apply this tool.", "Print": "", "Quality:": "", "Quick Load": "", "Quick Save": "", "REMOVE.BG - Remove Image Background": "", "Radial": "", "Radial gradient": "", "Radius:": "", "Range:": "", "Red": "", "Red channel:": "", "Redo": "", "Remove all": "", "Rename": "", "Rename Layer": "", "Rendered with errors.": "", "Rendering...": "", "Replace Color": "Replace Colour", "Replace color": "Replace colour", "Replacement:": "", "Report Issues": "", "Reset": "", "Resize": "", "Resize Boundary": "", "Resize Layer": "", "Resize Layers": "", "Resize Text Layer": "", "Resized as background": "", "Resized:": "", "Resolution:": "", "Restore Alpha": "", "Right": "", "Right angle:": "", "Right to Left": "", "Rotate": "", "Rotate Layer": "", "Rotate is not supported on this type of object. Convert to raster?": "Rotate is not supported on this type of asset. Would you like to convert it to a raster?", "Rotate left": "", "Rotate:": "", "Ruler": "", "SQUOOSH - Compress and Compare Images": "SQUOOSH - Compress and Compare Images (assets)", "Safe search:": "", "Saturate": "", "Saturation": "", "Saturation:": "", "Save (Export)": "", "Save As": "", "Save As Data URL": "", "Save as": "", "Save as type:": "", "Save layers:": "", "Scaling up is not supported in Hermite, using Lanczos.": "", "Scroll down": "", "Scroll up": "", "Search": "", "Search Images": "", "Search for Font": "", "Select All": "", "Select Text Layer": "", "Select object tool": "", "Selected": "", "Selection Tool": "", "Sensitivity:": "", "Separated": "", "Separated (original types)": "", "Sepia": "", "Set Image Size": "Set the image (asset) size", "Settings": "", "Shadow": "", "Shadow:": "", "Shapes": "", "Sharpen": "", "Sharpen Tool": "", "Sharpen:": "", "Shortcut Key:": "", "Show / Hide": "", "Show file size:": "", "Simple": "", "Size is too big, max": "", "Size:": "", "Skip - layer must be image.": "", "Solarize": "Solarise", "Sorry, cold not load getUserMedia() data:": "Sorry, I could not load getUserMedia() data:", "Sorry, image could not be loaded.": "Sorry, the image (asset) could not be loaded.", "Sorry, image could not be loaded. Try copy image and paste it.": "Sorry, the image (asset) could not be loaded. 
Try to copy the image (asset) and then paste it.", "Sorry, image is too big, max 5 MB.": "Sorry, the image (asset) is too big, the maximum is 5 MB.", "Source coordinates saved.": "The source coordinates have been saved.", "Source is empty, right click on image or use long press to save source position.": "The source is empty, right click on image (asset) or use a long press to save source position.", "Sprites": "", "Square": "", "Stream:": "", "Strength:": "", "Strict": "", "TINYPNG - Compress PNG and JPEG": "", "Tab": "", "Tag Image File Format": "", "Tahoma": "", "Target:": "", "The quick brown fox jumps over the lazy dog.": "", "Theme": "", "There": "", "There are no layers behind.": "", "There is only 1 layer.": "", "Thick guides:": "", "This layer must contain an image. Please convert it to raster to apply this tool.": "This layer must contain an image (asset). Please convert it to a raster to apply this tool.", "Tilt Shift": "", "Times New Roman": "", "Toaster": "", "Toggle": "", "Toggle Color Channels": "Toggle Colour Channels", "Toggle Color Picker": "Toggle Colour Picker", "Toggle Menu": "", "Toggle Swatches": "", "Tools": "", "Top": "", "Top to Bottom": "", "Total pixels:": "", "Translate": "", "Translate Layer": "", "Translate error, can not find dictionary:": "", "Transparency background:": "", "Transparent:": "", "Trim": "", "Trim Layers": "", "Trim borders:": "", "Trim layer:": "", "Trim white color?": "Trim the white colour?", "Type:": "", "Türkçe": "", "Undo": "", "Unique colors:": "Unique colours:", "Units": "", "Up": "", "Update": "", "Update Brush Layer": "", "Update Pencil Layer": "", "Update guides": "", "Use Ctrl+V keyboard shortcut to paste from Clipboard.": "Use the Ctrl+V keyboard shortcut to paste from the Clipboard.", "V Radius:": "", "V. Align:": "", "Valencia": "", "Verdana": "", "Version:": "", "Vertical": "", "Vertical Alignment": "", "Vertical blur:": "", "Vertical:": "", "Vibrance": "", "View": "", "Vignette": "", "ViliusL": "", "Vintage": "", "Webcam": "", "Webcam #": "", "Website:": "", "Weppy File Format": "", "Width (%):": "", "Width:": "", "Windows Bitmap": "", "Word": "", "Word + Letter": "", "Wrap At:": "", "Wrap:": "", "Wrong dimensions": "", "Wrong file type, must be image or json.": "", "X end:": "", "X position:": "", "X start:": "", "X-Pro II": "", "Y end:": "", "Y position:": "", "Y start:": "", "You can also drag and drop items into browser.": "", "Your browser does not support canvas or JavaScript is not enabled.": "", "Your browser does not support this format.": "", "Your search did not match any images.": "", "Zoom": "", "Zoom Blur": "", "Zoom In": "", "Zoom Out": "", "Zoom blur": "", "Zoom in": "", "Zoom out": "", "Zoom:": "" }
actually
kinda self-centered of me to put my name there lol, I didn't just make this engine. I love all the contributors <3
me fucking around on a second google webpage because i have trouble semantisicing shit and shit
Merge branch 'experimental' of github.com:MarechJ/hll_rcon_tool into experimental # minor
New features:
- Historical scoreboard: At the end of every game, a scoreboard is computed from the logs using the time boundaries of that game (that's what the 'routines' and 'workers' services do). You can select any of the last 50 games from the UI, even though the complete history is available; I'll try to add pagination / selection later on. https://prnt.sc/13014xg There are 2 new stats: longest life and shortest life. (If the dude doesn't die at all during the game, the longest life will show 0 and the shortest will show the total game time, but that's a corner case.) https://prnt.sc/1301bb7
- Scoreboards: You can now search for one or more players; only those will be displayed, and their rank is preserved. If you click on a dude in the scoreboard it will show his details at the top of the screen: the breakdown of his kills by weapon and by player, as well as the breakdown of his deaths. https://prnt.sc/1302hda
- Live scoreboard: Reworked the UI to be cleaner https://prnt.sc/1302dj7 and optimized loading time (it was actually waiting for the 1st auto refresh at load, so 10sec of waiting for nothing).
- Player profile screen, thanks to @cha: Click on the steamId from the Live or History view https://prnt.sc/130383r to land on the profile page (ctrl+click for a new window) https://prnt.sc/1303ahy. This new page shows the player profile; we'll add more stuff there soon, like game sessions and some metadata on the names. You can now add comments to a player, either directly from the profile or from actions and group actions https://prnt.sc/1302qyl https://prnt.sc/1302pl9
- Historical logs, thanks to @cha: The CSV export has been reworked to contain more human-friendly data.
Improvements / bug fixes:
- Added server number to player's session records
- Fixed a bug where the steam profile would sometimes be overridden with an empty one due to Steam being unavailable
- Fixed the auto-votekick toggle that was being ignored; now if you don't want this feature just turn it off, and it will work for real
- The auto-ban on TK after connection now falls back to blacklisting if the guilty guy escaped before the code had a chance to ban him
- Small UI optimizations
Notes:
- Investigated the bug that prevents banning people with a number-only name; I couldn't find a solution, and this actually seems to be a bug on the game server side.
I advise that you enable the NAME_KICKS feature in your config/config.yml (see config.default.yml) with the following rule, to auto-kick those:
NAME_KICKS:
  regexps:
    - ^\d+$