3,312,558 events, 1,522,501 push events, 2,539,726 commit messages, 187,704,484 characters
Holy fuck that's a lot of shit that doesn't work. In any case, here we are.
grep: fix bugs in handling multi-line look-around
This commit hacks in a bug fix for handling look-around across multiple lines. The main problem is that by the time the matching lines are sent to the printer, the surrounding context---which some look-behind or look-ahead might have matched---could have been dropped if it wasn't part of the set of matching lines. Therefore, when the printer re-runs the regex engine in some cases (to do replacements, color matches, etc etc), it won't be guaranteed to see the same matches that the searcher found.
Overall, this is a giant clusterfuck and suggests that the way I divided the abstraction boundary between the printer and the searcher is just wrong. It's likely that the searcher needs to handle more of the work of matching and pass that info on to the printer. The tricky part is that this additional work isn't always needed. Ultimately, this means a serious re-design of the interface between searching and printing. Sigh.
The way this fix works is to smuggle the underlying buffer used by the searcher through into the printer. Since these bugs only impact multi-line search (otherwise, searches are only limited to matches across a single line), and since multi-line search always requires having the entire file contents in a single contiguous slice (memory mapped or on the heap), it follows that the buffer we pass through when we need it is, in fact, the entire haystack. So this commit refactors the printer's regex searching to use that buffer instead of the intended bundle of bytes containing just the relevant matching portions of that same buffer.
There is one last little hiccup: PCRE2 doesn't seem to have a way to specify an ending position for a search. So when we re-run the search to find matches, we can't say, "but don't search past here." Since the buffer is likely to contain the entire file, we really cannot do anything here other than specify a fixed upper bound on the number of bytes to search. So if look-ahead goes more than N bytes beyond the match, this code will break by simply being unable to find the match. In practice, this is probably pretty rare. I believe that if we did a better fix for this bug by fixing the interfaces, then we'd probably try to have PCRE2 find the pertinent matches up front so that it never needs to re-discover them.
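To make the limitation concrete, here is a rough Python sketch of the workaround (not ripgrep's actual code; the re module stands in for PCRE2, and MAX_LOOKAHEAD is an arbitrary illustrative constant): since the engine cannot be told "stop matching at byte X", the re-search runs over a slice that extends only a fixed number of bytes past the region being printed, and a look-ahead that needs to see further than that will fail to re-find the match.

import re

MAX_LOOKAHEAD = 1 << 20  # fixed budget of bytes to search past the matching region

def refind_matches(pattern, haystack, region_start, region_end):
    # Re-run the regex over the haystack, but cap the slice so we do not
    # scan arbitrarily far past the lines the printer is about to emit.
    window = haystack[region_start:region_end + MAX_LOOKAHEAD]
    return [(m.start() + region_start, m.end() + region_start)
            for m in re.finditer(pattern, window)]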
Fixes #1412
Here's a big change, right?

Defyion wasn't bad; we used it thinking about the security and hacking vibes while watching MR ROBOT lmao. But really, Nêmesis wasn't only my most used nickname: it's also the figure of a justice goddess, and delivering that justice through divine retribution is way more my profile.
This may be the only time I talk about that, and somewhere in the future when I become famous, some nerd, fan, or hater is gonna find this and make something funny.
Th1: a reference to either "This one" or "The one". It's like flexing about something you're not yet, but what it means is that you believe in it so much that you know that at some point it will eventually become true.
I love being a kid, I have my whole life ahead of me!
Fix (TCollection *) to (TSortedCollection *) cast in TSortedListBox::list()
God damn. Because TCollection and TSortedCollection are no more than forward declarations by the time TSortedListBox::list() is defined, the cast is implemented as a reinterpret_cast. As a consequence, invoking TSortedListBox::list() would provide the wrong result.
Amazingly, Borland C++ handles this fine, so I have been hugely confused while debugging this.
This is my first experience where C-style casts have silenced a compilation error.
Fuck Qt
It is bullshit and hard to build; let's use SDL instead. That one works easily, as you can see with my code. That code, by the way, doesn't compile yet because of unimplemented compiler features.
sched/core: Implement new approach to scale select_idle_cpu()
Hackbench recently suffered a bunch of pain, first by commit:
4c77b18cf8b7 ("sched/fair: Make select_idle_cpu() more aggressive")
and then by commit:
c743f0a5c50f ("sched/fair, cpumask: Export for_each_cpu_wrap()")
which fixed a bug in the initial for_each_cpu_wrap() implementation that made select_idle_cpu() even more expensive. The bug was that it would skip over CPUs when bits were consecutive in the bitmask.
This however gave me an idea to fix select_idle_cpu(); where the old scheme was a cliff-edge throttle on idle scanning, this introduces a more gradual approach. Instead of stopping to scan entirely, we limit how many CPUs we scan.
Initial benchmarks show that it mostly recovers hackbench while not hurting anything else, except Mason's schbench, but not as bad as the old thing.
It also appears to recover the tbench high-end, which also suffered like hackbench.
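For the idea itself, a toy sketch in Python (nothing like the kernel code; cpus, is_idle and the nr budget are stand-ins): rather than an all-or-nothing throttle, the search wraps around from the target and gives up after examining at most nr CPUs.

def select_idle_cpu(cpus, target, nr, is_idle):
    # cpus: list of CPU ids in the LLC domain; nr: how many we are allowed to scan.
    n = len(cpus)
    start = cpus.index(target)
    for i in range(min(nr, n)):            # gradual limit instead of a cliff edge
        cpu = cpus[(start + i) % n]        # wrap around, like for_each_cpu_wrap()
        if is_idle(cpu):
            return cpu
    return target                          # scanned our budget, fall back to the target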
Tested-by: Matt Fleming [email protected]
Signed-off-by: Peter Zijlstra (Intel) [email protected]
Cc: Chris Mason [email protected]
Cc: Linus Torvalds [email protected]
Cc: Mike Galbraith [email protected]
Cc: Peter Zijlstra [email protected]
Cc: Thomas Gleixner [email protected]
Cc: [email protected]
Cc: kitsunyan [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar [email protected]
(cherry picked from commit 1ad3aaf3fcd2444406628a19a9b9e0922b95e2d4)
Conflicts: kernel/sched/fair.c
Winners Quote App simply contains a collection of Winner's quotes, Inspirational messages, Bible messages, Life and Success quotes, Anniversaries, Millionaires' messages, Love messages, Birthday wishes, Funny quotes, Fitness quotes, and also powerful African proverbs.
Kindly check it out & share 🔥🔥🔥🔥🔥🔥🔥🔥 PlayStore: https://play.google.com/store/apps/details?id=com.globetechconsultldt.winners_vibe 🔥🔥🔥🔥🔥🔥🔥🔥🔥 Overview of Winners Quotes App: https://youtu.be/aKccgBQLeS0
Genesis 1:5
God called the light “day,” and the darkness he called “night.” And there was evening, and there was morning—the first day.
Another run-on
I get to say * b/c any of you shame a brother going into fatherhood, just know that we're watching. Different times. Different people. Different world. Be wise and do your own research, but know that afterwards you laugh about it and hope the universe took a note. Future young person listening to today's rap. HA! Don't even trip homie, you don't even faze me, but make sure you can keep enjoying that * into your late years. Bumping the true crew love bump, "you ain't even know it!"
"1:50pm.
def grad_state_probs(): # Prediction errors modulate the state probabilities.
    prediction_values = (head_weighted_values / head_value_weights)[action_indices,:] if head_weighted_values.shape[0] <= action_indices.shape[0] else head_weighted_values[action_indices,:] / head_value_weights[action_indices,:] # [batch_dim,state_dim]
    prediction_errors = torch.abs((action_values - prediction_values) * action_weights) # [batch_dim,state_dim]
Forgot to multiply by the action weights.
2:30pm. Let me finally resume. Where was I?
def of_action_probs(action_probs, sample_probs):
    # action_probs[batch_dim,action_dim]
    # sample_probs[batch_dim,action_dim]
    qwe = state_probs.mm(values.t()) # [batch_dim,action_dim]
What should I name this?
3pm.
import torch
import torch.distributions
from torch.functional import Tensor
def updates(state_probs : Tensor, head : Tensor, action_indices : Tensor, at_action_value : Tensor, at_action_weights : Tensor):
    # state_probs[batch_dim,state_dim]
    # head[action_dim*2,state_dim]
    # action_indices[batch_dim] : map (action_dim -> batch_dim)
    # at_action_value[batch_dim,1] : map (action_dim -> batch_dim)
    # at_action_weights[batch_dim,1] : map (action_dim -> batch_dim)
    num_actions = head.shape[0]//2
    head_weighted_values = head[:num_actions,:] # [action_dim,state_dim]
    head_value_weights = head[num_actions:,:] # [action_dim,state_dim]
    def update_head(): # Weighted moving average update. Works well with CFR's reweighting.
        state_weights = at_action_weights * state_probs # [batch_dim,state_dim]
        head_weighted_values[action_indices,:] += at_action_value * state_weights
        head_value_weights[action_indices,:] += state_weights
    def grads():
        values = head_weighted_values / head_value_weights # [action_dim,state_dim]
        def of_state_probs(): # Prediction errors modulate the state probabilities. The cool part is the centering.
            prediction_values_for_state = values[action_indices,:] # [batch_dim,state_dim]
            prediction_errors = torch.abs(at_action_value - prediction_values_for_state) # [batch_dim,state_dim]
            prediction_error_mean = (state_probs * prediction_errors).sum(-1,keepdim=True) # [batch_dim,1]
            return at_action_weights * (prediction_errors - prediction_error_mean) # [batch_dim,state_dim]
        def of_action_probs(action_probs, sample_probs): # Implements the VR MC-CFR update.
            # action_probs[batch_dim,action_dim]
            # sample_probs[batch_dim,action_dim]
            prediction_values_for_action = state_probs.mm(values.t()) # [batch_dim,action_dim]
            at_action_sample_probs = torch.gather(sample_probs,-1,action_indices.unsqueeze(-1)) # [batch_dim,1]
            at_action_prediction_value = torch.gather(prediction_values_for_action,-1,action_indices.unsqueeze(-1)) # [batch_dim,1]
            at_action_prediction_adjustment = (at_action_value - at_action_prediction_value) / at_action_sample_probs # [batch_dim,1]
            prediction_values_for_action = torch.scatter_add(prediction_values_for_action,-1,action_indices.unsqueeze(-1),at_action_prediction_adjustment)
            return -at_action_weights * action_probs * prediction_values_for_action # [batch_dim,action_dim]
        return of_state_probs, of_action_probs
    return update_head, grads
Here it is.
3:10pm. Doing some thinking about whether adjusting the scale of the centered gradients to the original makes sense, and the conclusion is that it does not.
Take [9,8] * [1.0,0.0]. Centering would turn that into [0,-1], and I do not want it to become [0,-9]. I mean, it could, but then what should I do with an input like [9,3] * [1.0,0.0], which would otherwise be [0,-6]? I do not want those two to have the same grads. The rule I picked is the best one.
3:20pm. It might be good to match the scale of the log_softmax rule, but forget it. I won't bother with this. What I have now should be good enough.
What I've implemented here is the most straightforward thing imaginable, but it ticks all the boxes:
- Tabular methods for learning values.
- Bounded gradient updates for learning policies.
Since I am doing tabular methods on top of aggregated features, I won't have to worry about it having trouble learning the values. Since I am using weighted averaging, I do not even need a learning rate. I'll just decay the head by some factor after every policy update.
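To spell out the decay, a minimal sketch (the decay factor is just an illustrative constant): scaling both halves of the head by the same factor leaves values = sums / weights unchanged, but makes old samples count for less once the policy moves.

import torch

def decay_head(head: torch.Tensor, decay: float = 0.5):
    # head packs the weighted value sums and their weights; decay both together.
    with torch.no_grad():
        head *= decay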
3:30pm. I am just chilling for a bit.
How about going back to watching the lecture by Bertsekas?
I do not feel like doing more programming right now. The next thing on the list should be the optimizer.
4:10pm. https://www.youtube.com/watch?v=MP-0PlYLWu0 Feature Based Aggregation and Deep Reinforcement Learning
Let me watch this for a bit.
4:30pm. https://youtu.be/MP-0PlYLWu0?t=3129
I am not getting much out of this, but he is talking about DeepChess which I haven't heard about. It extracts features using a NN.
Back when I read the episodic memory papers, my impression was that none of them use the tabular prediction error to train the layer below and use random projections. I wonder how DeepChess does it.
I am not going to look into this. Let me finish this, and then I will take a look at how the SGD optimizer is done. I'll write my own.
https://youtu.be/MP-0PlYLWu0?t=3270
Interesting that he mentions that recognizing success or failure is a problem. Yeah, this is something that is troubling me as well.
4:40pm. A memory is coming to my mind now that he is talking about tetris. Some OpenAI guy talked about the cross entropy method and how it crushed RL algos on tetris. Now that I've managed to get to the point where tabular RL is usable with deep nets, it might be worth looking into the CEM. I never got the algorithm for it. Maybe there is a video on it somewhere.
I know I studied this, but I never came close to understanding it.
https://www.youtube.com/watch?v=0h8Ql5-CpBE Reinforcement Learning: Cross Entropy Method
Oh, lol. I had to check that I did not mute the sound by accident. This has no audio.
5pm. https://youtu.be/dmH1ZpcROMk?t=12 Reward Is Enough (Machine Learning Research Paper Explained)
Let me watch this for a bit.
5:05pm. Nevermind. I do not feel like listening to this; I am already checking out the SGD optimizer.
5:25pm.
y = torch.nn.Sequential(torch.nn.Linear(4,6),torch.nn.Linear(6,2))
print(y[0].bias)
list(y.parameters())
Ok, decision time. I meant to grab the infinity norm of the bias together with the weights, but that won't be needed. I'll do a linear layer that makes the bias and the weight views on the same tensor if needed later. But it is not a huge deal.
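For the record, a rough sketch of what that packed layer could look like if I ever need it (purely hypothetical, not something I am writing now): weight and bias as views on a single parameter tensor, so one inf norm covers both.

import torch

class PackedLinear(torch.nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        # One tensor holds both the weight matrix and the bias column.
        self.packed = torch.nn.Parameter(0.01 * torch.randn(out_features, in_features + 1))

    @property
    def weight(self): return self.packed[:, :-1]  # view into the packed tensor

    @property
    def bias(self): return self.packed[:, -1]     # view into the packed tensor

    def forward(self, x):
        return torch.nn.functional.linear(x, self.weight, self.bias)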
To start things off, let me grab signSGD off the net somewhere. The SGD optimizer is actually rather complex.
y = torch.nn.Sequential(torch.nn.Linear(4,6),torch.nn.Linear(6,2))
for x in y:
    print(x)
Oh, I can access all the params like this.
https://discuss.pytorch.org/t/build-custom-param-groups-for-optimizer/43784/3
And this thread shows how to use param groups. Good. That means I do not have to do a whole separate layer with packed weights just to take the inf norm over the layer.
I can group the params in a Seq.
That is one thing done.
5:55pm. Let me take a break here.
6:10pm. Let me just test a few more things.
y = torch.nn.Sequential(torch.nn.Linear(4,6),torch.nn.Linear(6,2))
list(y)
This is nice. By doing this I get a list of individual modules. I can use this to group them.
The update I can do easily. I do not need a special optim object.
6:20pm. Hmmm, the default optimizer is actually pretty complex. I do not want to deal with it right now. Since my own requires nothing more than gradients, and does not keep around momentum, a function will suffice.
I should look into how optimizers work more, but it is not important right now. I'll just implement the functions that I need and move on with my life.
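Concretely, something along these lines should be all I need (a sketch; the lr constant and the grouping are placeholders): a plain function over parameter groups, no optimizer object and no momentum, just the gradients scaled by the group's inf norm.

import torch

def grad_step(param_groups, lr=2 ** -7):
    # param_groups: an iterable of parameter lists, e.g. one list per module of a Sequential.
    with torch.no_grad():
        for group in param_groups:
            grads = [p.grad for p in group if p.grad is not None]
            if not grads:
                continue
            scale = max(g.abs().max().item() for g in grads)  # inf norm over the group
            if scale == 0.0:
                continue
            for p in group:
                if p.grad is not None:
                    p -= lr * (p.grad / scale)

Called with something like grad_step([list(m.parameters()) for m in y]) to group per layer.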
6:25pm. Tomorrow, I'll do the optimizer. It is not a big deal. I really got into it when doing the updates. It is a nice piece of work.
As I said, despite getting up at 7am, I did not do much today. But keeping the morale high is more important. If one tries to do too much, it is easy to get burnt out. Spending so much time daydreaming about the things I am doing might seem like a waste, but it is what keeps me going.
Tomorrow, I will do the optimizer, and after that I'll be checking out how to do the training loop in Python. I'll leave what is needed in Spiral where it is, and do the flaky NN optimization parts that I'll often have to debug in Python. That is the way to the stress-free programming life.
It won't be long before I have this scheme running and those tough-to-chew self play players optimizing."
Just learning that the tweet is the laugh you make as you write it, not what you just dropped doing to do it
Man, I swear it just looks like all I did was just live my childhood in 83 days. OMG SO MUCH CRINGE CHEESE ADORABLE!!!! Yo, I worked my butt off to be this cool so I wouldn't have to decide between friends and a ball game. Okay, that's big!
update
add blood, tears, and samurai love; remove ullman; update about; update copyright on aforementioned pages
Remove unnecessary references to slavery
The horrible effects of human slavery continue to impact society. The casual use of the term "slave" in computer software is an unnecessary reference to a painful human experience.
This commit removes all possible references to the term "slave".
Implementation notes:
- The zpool.d/slaves script is renamed to dm-deps, which uses the same terminology as dmsetup deps.
- References to the /sys/class/block/$dev/slaves directory remain. This directory name is determined by the Linux kernel. Although dmsetup deps provides the same information, it unfortunately requires elevated privileges, whereas the /sys/... directory is world-readable.
Reviewed-by: Brian Behlendorf [email protected]
Reviewed-by: Ryan Moeller [email protected]
Signed-off-by: Matthew Ahrens [email protected]
Closes #10435
Update FDIMPLES.DE
Second attempt at changing the Google-Translamangled German translations. :) Sorry. There are a couple of points which didn't make sense in German at all. Line 36: long since established, the correct translation for 'cancel' is 'Abbrechen', not 'Stornieren' (even though that'd be a perfectly valid translation otherwise; context, context, context). Line 55, for example: I have no idea how to properly do that, but this is my try. Line 78 is important! "Herausgeber" is a correct translation for 'editor', except it was the wrong kind of editor. The software called 'editor' is still just "Editor" in German. (A "Herausgeber" is the editor you send letters to, as in 'letters to the editor'.)
Upper and lower case are also all over the place and needed correction.
I also corrected all the German umlauts but that may have been wrong depending on how this text file is being translated to codepage 850 or 858 respectively. So please be aware of the German umlauts! Do some testing by installing the German version into a VM. Thank you.
I'm still not sure whether that's a perfect solution, but at least it's an improvement. Please don't use Google Translate. Google Translate is nice to help read a web-site but it's not suitable to actually translate text that goes into a product. That's my opinion anyway. Thank you.
v 0.33.0
- Dinner Kitchen > Stealth Check, higher level (feminization path)
- Night pussy scene edited to show up in "she is sleeping prone", since it is written as if she was
- Estrogen pills now show up in the inventory when you buy and get them
- The Market clearly states that users can find the stuff they buy at Pre Lunch, in the Living Room, checking the mail
- Trade Cumrag Panties for money or Good Boy Points in the Master Bedroom, First Afternoon
- Hand her Clean or Dirty panties after she has smeared cum all over her face during Netfux & Chill (using her hand)
- After giving the panties to $landlady (Master bedroom, First Afternoon), IF she is already ok with daily checks about vaginal checks, you can perform a "Taste test" (ranked poll, 2k words)
- Fixed lack of speech bubbles in some chapters
add --defrag to url
for parsing query-strings-in-hash-frags
SPAs love to do this shit
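Not the actual --defrag code, just the idea in Python (urllib.parse stands in for whatever the tool uses): SPAs hide query strings inside the hash fragment, so the fragment has to be split on '?' and parsed on its own.

from urllib.parse import urlsplit, parse_qs

def defrag_query(url):
    parts = urlsplit(url)
    frag_query = parts.fragment.partition("?")[2]  # query string buried in the fragment
    return parse_qs(parts.query), parse_qs(frag_query)

# defrag_query("https://example.com/app?x=1#/route?tab=2&sort=asc")
# -> ({'x': ['1']}, {'tab': ['2'], 'sort': ['asc']})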
new preset!!! yooooo!
Hi, I haven't done this in a while! I'm sorry about that. I tried to upload one before, but I realised I accidentally made a fork of the presets and- yeah, this one might be a little messy. If someone could wiggle into my DMs (Faeyth#0001), cuz I can't for the life of me remember how to clean them up myself. Also, are credits changes okay? I've been trying to be more open with myself and I'd like to link proper social media in my credits instead of my email, cuz I was scared of human interaction.
Thanks much!!!
She appears to have tossed the hamburger outside the automobile
readme.md alphabetizes because fuck you my ocd demands it (i dont actually have ocd please don't cancel me i just don't like unalphabetized lists) Deletes IAA tips (IAA can't even trigger on dynamic!!!! I'll re-add them when IAA happens!!)
Refactor intake forms to add Yup validations (#16291)
Resolves N/A (refactor) - Sorry for the size; since all the review pages share so much, making this work sorta needed to be an all-or-nothing switch!
The review pages for all the intake forms were previously managed with Redux. This prevented us from using the yupResolver with useForm in order to manage the front-end validations. The front-end validations were previously custom logic that attempted to mimic the backend; those can be removed in a subsequent PR so this one doesn't continue to grow.
Major things to look at are:
- Individual review pages now define their own yup schema and export it from their own file. I prefer this way because it keeps the validation logic physically close to the specific form definition itself.
- The parent review.jsx now imports the sub-forms' schemas, since the parent form element is defined there, and uses the imported schema to validate the form on submit.
Things to note:
- The review page diff looks pretty ugly, but there aren't that many changes there logically. I just updated a few of the class-based components to be functional and moved some of the logic that was previously in the submit button component into the Review component itself. Design-wise this makes more sense (to me), and for yup validations to work properly it was cleaner to keep the submission logic with the form wrapper itself.
Next steps
- Remove custom validation functionality from the submit function in DecisionReview.jsx (left out for size considerations)
Future enhancements
- Can now easily tweak the yup schemas for the various intake types, so could add AMA date validation to each of the required forms
- I'd like to look at removing Redux altogether from the review pages, since we can manage the form values using useForm. This would greatly simplify a lot of these form pages.
- Add additional schema validations to the RAMP forms
- Add yup to all of the other forms around the intake process.
- Code compiles correctly
- Walk through the intake process with each of the 5 forms. Try to submit with various fields missing and ensure that a network request is not made and input errors still occur.
- Walk through the appeal intake, and enter a receipt date that is prior to the AMA date and ensure you get the appropriate error
- Complete an intake of each of the 5 forms to ensure entire process is still functional.
- For higher-risk changes: Deploy the custom branch to UAT to test
BEFORE|AFTER No visual changes, front end validation is just now triggered using Yup instead of custom validations.
- Add or update code comments at the top of the class, module, and/or component.
Honestly should have commit more often because the amount of changes to the website are quite drastic. Luckily the website is still pretty basic and not really a "for-production" codebase. In summary, I've been trying to update the site to be a fully fledged home page for Atheos and provide everything I feel users of Atheos would deserver. I've turned the website into more of a PHP router with markdown files and some probably stupid complex PHP action in the background. Hopefully this is a good direction, if not, I just wasted my time; no big deal. It's not a hard site to code for or make changes to. I'm sorry about not committing more often for each individual change.