< 2021-10-17 >

1,369,876 events, 846,532 push events, 1,140,261 commit messages, 65,475,822 characters

Sunday 2021-10-17 01:47:01 by Fletcher Dunn

Rewrote reliable segment bookkeeping to work with multiple lanes.

Previously we were keeping the reliable segments that had been sent in two non-overlapping std::maps: in-flight, and ready-for-retry.

Also, each in-flight packet would remember the reliable segments it contained as begin/end pairs. When a packet was acked or nacked, we would need to look up the range in the maps.

The new method puts all segments that have been sent in a linked list, and we track the status of each segment. The in-flight packets hold a reference to them, so there is no longer any need for a lookup. Likewise, the retry list is just a list of references into that list.
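Here is a minimal C sketch of the bookkeeping shape just described — the names, handle width, and pool size are illustrative guesses, not the actual library code:

```c
#include <stdint.h>

enum SegStatus { SEG_IN_FLIGHT, SEG_READY_FOR_RETRY, SEG_ACKED };

/* One reliable segment that has been sent at least once.  All sent
 * segments live in a single list backed by a contiguous pool (in the
 * spirit of CUtlLinkedList), linked via small integer handles. */
struct SentSegment {
    int64_t        begin, end;   /* reliable stream positions         */
    enum SegStatus status;
    int16_t        next;         /* pool index of next segment, or -1 */
};

#define MAX_SEGMENTS 1024
static struct SentSegment seg_pool[MAX_SEGMENTS];

/* An in-flight packet just remembers pool handles, so acking or
 * nacking needs no map lookup -- follow the handle directly. */
struct InFlightPacket {
    int16_t segments[8];   /* handles into seg_pool */
    int     num_segments;
};

static void on_packet_nacked(struct InFlightPacket *pkt) {
    for (int i = 0; i < pkt->num_segments; i++) {
        struct SentSegment *s = &seg_pool[pkt->segments[i]];
        if (s->status == SEG_IN_FLIGHT)
            s->status = SEG_READY_FOR_RETRY;  /* then insert into the
                                                 retry queue in stream
                                                 order (the linear
                                                 search noted below) */
    }
}
```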

This is much more memory efficient:

  • The in-flight references are each a single short, instead of a 64-bit begin/end pair.
  • And we use CUtlLinkedList, which stores the data in a contiguous array, instead of std::map, which makes tons of tiny allocations.

I went in a circle on the design for the data structures for this at least 4 or 5 times, probably spending 10+ hours on it until I finally got something I didn't totally hate. There was always a bubble under the carpet that I could move around but not eliminate. In this design, the bubble is the one place where we still do some linear searching: when we need to retry a segment, we have to find the proper place to insert it into the retry queue. This is because reliable serialization assumes that the segments will be presented in order of increasing stream position for a given lane. However, at least this list should usually be short.

In the typical case of relatively little packet loss and little need for reliable retry, this code is much, MUCH more efficient than the previous code in memory (and probably CPU -- although I haven't measured it!). I believe that to fix the remaining bubble under the carpet I need to change the reliable retry list to be a map, ordered by some segment serial number. But I don't want to do that with a std::map. So I think I might bring back Valve's ordered map CUtlMap, which uses integers (shorts if you want) as stable handles to the elements, and uses contiguous memory allocation. That would be good for this case, and would also optimize a few of the other maps as well. But I think this code is good enough for now.

Also: added per-lane stats for number of bytes pending in various states.

P4:6834549


Sunday 2021-10-17 01:57:43 by yaasin-raki2

kinda did something hh: tried some experiments using semaphores, since they were the most logical approach for this problem (at least for me). Anyway, I realized that I have to build it first using mutexes, so I commented out the fucked-up semaphore approach and tried building a mutex-based one.

While trying to do so, I encountered a strange problem which, even now, I don't know how it came to be a thing lol. The problem is that when you create threads inside a loop, pass the index of the loop as an argument to the thread routine, and try to print it, you will see that some threads print the same index -- which means some threads got the arguments of other threads, which is not even close to logical. Stupid ass computers -_- . Anyway, I managed to solve this weird problem by leveraging an array of loop indexes -- or you could say an array of arguments for the routines -- where I assign each field in it to an arg and then pass &arr[i] to the routine. And I don't fucking know why this works and the first approach didn't.

Later on I created another philo struct, which holds the thread id and the philo's id (which is the loop index), and then an array of philos. I also organized the options in an ops struct and initialized it using an external function, then added a field to the philo struct which is the ops struct, since I want to send the ops to the thread routines along with the philo struct.

Finally, I tried solving a sub-problem of the whole big one: 2 philosophers (or more) sharing one fork. I realized then that each fork can be represented as a mutex: when a philo starts thinking, if he wants to eat and someone else is eating (aka using the fork), this philo has to wait until the other one finishes eating and enters the sleep period. So now all I have to do is create an array of mutexes whose size is np (number_of_philosophers) and implement the death logic. And I haven't forgotten about the last optional parameter; I'm going to add it with its logic after I finish the problem using only the 4 mandatory params, for simplicity's sake.
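The loop-index bug described above reproduces in a few lines. This is a hedged sketch — `philo_routine` and the thread count are made up for illustration; the commented-out line is the racy version:

```c
#include <pthread.h>
#include <stdio.h>

#define NP 5   /* number_of_philosophers, arbitrary here */

static void *philo_routine(void *arg) {
    int id = *(int *)arg;   /* reads whatever the pointer targets NOW */
    printf("philosopher %d\n", id);
    return NULL;
}

int main(void) {
    pthread_t th[NP];
    int ids[NP];

    for (int i = 0; i < NP; i++) {
        /* BUG: &i hands every thread the SAME stack slot, which the
         * loop keeps mutating -- so threads can print duplicates:
         *   pthread_create(&th[i], NULL, philo_routine, &i);
         */

        /* FIX: the array-of-arguments trick -- each thread gets its
         * own stable slot that nothing else writes to. */
        ids[i] = i;
        pthread_create(&th[i], NULL, philo_routine, &ids[i]);
    }
    for (int i = 0; i < NP; i++)
        pthread_join(th[i], NULL);
    return 0;
}
```

The first approach isn't illogical so much as a data race: nothing forces a thread to dereference the pointer before the loop increments `i` (or before `i` goes out of scope entirely), so the output depends on scheduling.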


Sunday 2021-10-17 02:55:54 by LemonGraabTheForsaken

Added Cave Entrance Obstacle

Added an obstacle prefab for a really stupid and weird looking cave entrance. I hate my life.


Sunday 2021-10-17 04:10:30 by Tanner

Animated GIFs: overhaul frame optimizer for improved performance and quality

GIF is such a stupid format and it pains me to target this kind of legacy support, but let's face it - GIFs will (always?) be the format most users go to for animation, despite there being far superior modern options. Habits are tough to break.

So it is with much dismay that I've just put a lot of work into a high-quality animated GIF optimizer. Throwing random GIFs from online at the encoder shows that we are within a percent or two of gifsicle (https://www.lcdf.org/gifsicle/) on images dominated by the global color table, we consistently beat gifsicle on images dominated by local color tables, and we pretty much universally improve GIFs originating from any other encoder. Not too shabby, if I do say so myself!

Best of all, GIF optimizations are all automatic. They require no input from the user and PD doesn't need to retain knowledge of the GIF's original encoding behavior. You can even throw non-GIF sources at it (animated PNG, WebP) and the encoder will go ahead and optimize them for you.

GIF's extreme limitations make it sort of a fascinating target for encoding heuristics, since there's such a limited problem space compared to e.g. PNG (which has 50x more dials and knobs to turn when encoding). With GIFs, you're basically limited to palette tricks, inter-frame tricks (with some transparency capabilities), and LZW tweaks. These are fairly straightforward areas to target, and all have been addressed to some capacity here. (Note that my LZW targeting is not as aggressive as e.g. flexigif - https://create.stephan-brumme.com/flexigif-lossless-gif-lzw-optimization/ - but PD can't take hours to write a GIF, so I had to limit aggressiveness.)
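As a concrete example of the "inter-frame tricks", here is a minimal C sketch of classic frame differencing — assuming both frames are already indexed into the same palette and a transparent index has been reserved; this is illustrative only, not PhotoDemon's actual code:

```c
#include <stddef.h>

/* Replace pixels unchanged since the previous frame with the
 * transparent index.  Long runs of a single index then LZW-compress
 * to almost nothing, and the decoder keeps the old pixel values. */
static void diff_frame(const unsigned char *prev, unsigned char *cur,
                       size_t npixels, unsigned char transparent_idx)
{
    for (size_t i = 0; i < npixels; i++)
        if (cur[i] == prev[i])
            cur[i] = transparent_idx;
}
```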

Speaking of LZW, I've also dropped FreeImage as PD's GIF encoder, but I'll probably commit that separately so this commit comment doesn't grow any longer...


Sunday 2021-10-17 04:10:30 by Tanner

GIF encoder: switch to pure VB6 LZW encoding solution

Copied from the comments of PD's new LZW encoder:

In 2021, I spent some time optimizing PhotoDemon's animated GIF exporter. Many optimizations occur before actual GIF encoding (frame- and palette-related stuff), but for final GIF encoding I leaned on the 3rd-party FreeImage library. Unfortunately, FreeImage's GIF support is mediocre (palettes are always written as 256-color tables, encoding is built atop strings (!!!) so perf is rough), so I started hunting for alternative solutions.

After getting angry at giflib and the incredibly unpleasant shitshow that is compiling it as a Windows DLL, I stumbled across a VB6 LZW encoder at the planet-source-code archive on GitHub:

https://github.com/Planet-Source-Code/carles-p-v-image-8-bpp-ditherer-native-gif-encoder__1-45899

Carles's project is a modified version of a VB6 LZW implementation by Ron van Tilburg (whose original project has been lost to time, alas). Per the comments in Carles's code, Ron translated his initial version from C sources derived from the original UNIX compress.c. It was a fun trip down memory lane to see some familiar PSC authors, but as always, there's a critical problem with VB6 code from public archives like this - the likelihood of the code being buggy and/or unreliable is extremely high.

Fortunately, the original C sources for compress.c are still available, as are translations into myriad other languages - for example, a few seconds on GitHub turned up these:

  • https://github.com/mlapadula/GifEncoder/blob/master/lib/src/main/java/com/mlapadula/gifencoder/LzwEncoder.java
  • https://github.com/P1ayer4312/Steam-Artwork-Cropper/blob/main/steam/js/gif.js-master/src/LZWEncoder.js

Armed with multiple references to compare and contrast, I set about mixing and matching the original C code with some ideas from Carles/Ron's VB6 versions to produce something appropriate for PhotoDemon. I think the final result is very good, with greatly improved performance over previous VB6 translations. Some edge-case bugs have been fixed, a lot of suboptimal-for-VB6 decisions have been reworked, and LZW encoding efficiency has been improved. The final result is a very compact LZW encoder with efficiency and performance on par with giflib.
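For readers who have never seen the algorithm in question, the core of LZW fits in a few dozen lines. This is a generic, deliberately simplified C sketch — fixed integer output codes, no GIF variable code widths, no clear/EOI codes, and a naive linear dictionary search where a real encoder would hash:

```c
#include <stddef.h>

#define MAX_CODES 4096   /* 12-bit code space, as in GIF */

/* Dictionary entry: a known string is (prefix code, appended byte). */
static int           prefix[MAX_CODES];
static unsigned char append_ch[MAX_CODES];
static int           dict_size;

/* Find the code for string (pfx + c), or -1 if not yet in the dict. */
static int find(int pfx, unsigned char c) {
    for (int i = 256; i < dict_size; i++)
        if (prefix[i] == pfx && append_ch[i] == c)
            return i;
    return -1;
}

/* Encode n input bytes into integer codes; returns the code count. */
size_t lzw_encode(const unsigned char *in, size_t n, int *out) {
    if (n == 0) return 0;
    size_t m = 0;
    dict_size = 256;             /* codes 0..255 are the single bytes */
    int cur = in[0];
    for (size_t i = 1; i < n; i++) {
        int hit = find(cur, in[i]);
        if (hit >= 0) {
            cur = hit;           /* extend the current known string   */
        } else {
            out[m++] = cur;      /* emit longest known string...      */
            if (dict_size < MAX_CODES) {
                prefix[dict_size] = cur;       /* ...and learn it     */
                append_ch[dict_size] = in[i];  /* plus the next byte  */
                dict_size++;
            }
            cur = in[i];
        }
    }
    out[m++] = cur;
    return m;
}
```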

Note that this module only handles the LZW encoding portion of GIF export. All the actual file encoding (including headers, tables, etc) is 100% my own work and remains in the ImageFormats_GIF module under the same BSD license as PD itself. I've deliberately moved this LZW code into its own file because the 40-year copyright list is quite long, and licensing is murky because while compress.c was released into the public domain, the licensing state of subsequent translations is unclear. For commercial usage you may want to contact some of the authors listed below for clarity, but please consider any modifications I've made to be freely licensed in the public domain under the unlicense (https://unlicense.org/).

Anyway, here's my attempt to merge various credit lists from the sources mentioned above into one comprehensive list:

GIF image compression based on compress.c (File compression ala IEEE Computer, June 1984.)

Original authors:

  • Spencer W. Thomas (decvax!harpo!utah-cs!utah-gr!thomas)
  • Jim McKie (decvax!mcvax!jim)
  • Steve Davies (decvax!vax135!petsd!peora!srd)
  • Ken Turkowski (decvax!decwrl!turtlevax!ken)
  • James A. Woods (decvax!ihnp4!ames!jaw)
  • Joe Orost (decvax!vax135!petsd!joe)
  • David Rowley ([email protected])

Initial VB6 translation by Ron van Tilburg ([email protected]), with additional VB6-specific modifications by Carles P.V. and Tanner Helland (tannerhelland.com).


Sunday 2021-10-17 12:47:41 by AidasV1

Gentlemen, a short view back to the past. Thirty years ago, Niki Lauda told us: “Take a trained monkey, place him into the cockpit and he is able to drive the car.” Thirty years later, Sebastian told us: “I had to start my car like a computer. It’s very complicated.” And Nico Rosberg said, err, he pressed during the race, I don’t remember what race, the wrong button on the wheel. Question for you two both. Is Formula 1 driving today too complicated with 20 and more buttons on the wheel, are you too much under effort, under pressure? What are your wishes for the future, concerning technical program, errrm, during the race? Less buttons, more? Or less and more communication with your engineers.


Sunday 2021-10-17 15:48:30 by EpikHriday

the stupidest commit I can do! Can't tolerate this shit anymore! It's fucking depressing!


Sunday 2021-10-17 16:02:25 by Nimit96

MALIGNANT COMMENTS CLASSIFICATION

Problem Statement

The proliferation of social media enables people to express their opinions widely online. However, at the same time, this has resulted in the emergence of conflict and hate, making online environments uninviting for users. Although researchers have found that hate is a problem across multiple platforms, there is a lack of models for online hate detection. Online hate, described as abusive language, aggression, cyberbullying, hatefulness and many others, has been identified as a major threat on online social media platforms. Social media platforms are the most prominent grounds for such toxic behaviour.
There has been a remarkable increase in the cases of cyberbullying and trolling on various social media platforms. Many celebrities and influencers face backlash and come across hateful and offensive comments. This can take a toll on anyone and affect them mentally, leading to depression, mental illness, self-hatred and suicidal thoughts.
Internet comments are bastions of hatred and vitriol. While online anonymity has provided a new outlet for aggression and hate speech, machine learning can be used to fight it. The problem we sought to solve was the tagging of internet comments that are aggressive towards other users. This means that insults to third parties such as celebrities will be tagged as unoffensive, but “u are an idiot” is clearly offensive. Our goal is to build a prototype of an online hate and abuse comment classifier which can be used to classify hate and offensive comments so that they can be controlled and restricted from spreading hatred and cyberbullying.

Data Set Description

The data set contains the training set, which has approximately 159,000 samples, and the test set, which contains nearly 153,000 samples. All the data samples contain 8 fields, which include ‘Id’, ‘Comments’, ‘Malignant’, ‘Highly malignant’, ‘Rude’, ‘Threat’, ‘Abuse’ and ‘Loathe’. Each label can be either 0 or 1, where 0 denotes a NO and 1 denotes a YES. There are various comments which have multiple labels. The first attribute is a unique ID associated with each comment.
The data set includes:

  • Malignant: It is the Label column, which includes values 0 and 1, denoting if the comment is malignant or not.
  • Highly Malignant: It denotes comments that are highly malignant and hurtful.
  • Rude: It denotes comments that are very rude and offensive.
  • Threat: It contains indication of the comments that are giving any threat to someone.
  • Abuse: It is for comments that are abusive in nature.
  • Loathe: It describes the comments which are hateful and loathing in nature.
  • ID: It includes unique Ids associated with each comment text given.
  • Comment text: This column contains the comments extracted from various social media platforms.

This project is more about the exploration, feature engineering and classification that can be done on this data (one possible row layout is sketched below). Since the data set is huge and includes many categories of comments, we can do a good amount of data exploration and derive some interesting features using the comment text column available. You need to build a model that can differentiate between comments and their categories.
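To make the schema concrete, here is one possible row layout as a C struct — field sizes are arbitrary and the names simply follow the column list above:

```c
/* One sample from the data set.  Note the labels are six independent
 * 0/1 flags (multi-label), not one mutually exclusive category. */
struct CommentSample {
    char id[32];              /* unique id per comment */
    char comment[4096];       /* raw comment text      */
    unsigned malignant        : 1;
    unsigned highly_malignant : 1;
    unsigned rude             : 1;
    unsigned threat           : 1;
    unsigned abuse            : 1;
    unsigned loathe           : 1;
};
```

A classifier therefore predicts six binary outputs per comment; “u are an idiot” might plausibly get malignant = rude = abuse = 1 with the other three labels 0.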

Sunday 2021-10-17 20:44:34 by Neyrision

did a lot of the basic stuff so I can feel less lazy fuck me dude god life is pain but I do have the certain boy so maybe it's not that much of a pain teehee jk still sucks pretty hard aaaa


Sunday 2021-10-17 21:06:15 by yeayea130

Fixes hound double-init bugs and enforces sci-hound module parity. (#3522)

  • Fixes hound double-init bugs and enforces sci-hound module parity.

About The Pull Request

  1. Fixes the double-init bug that affected the sci-hound, engineering hound, ERT borg, and service hound.
  2. Enforces module parity between the sci-hound and science drone modules, ensuring they are both capable of the same work - and that the sci-hound isn't missing half of the modules required for xenobiology.
  3. Removes the duplicate dogborg jaws and the nonfunctional duplicate cable coil from the sci-hound module.
  4. Adds the portable destructive analyzer from the sci-drone to the sci-hound, so interacting with vore content is optional to do your job.

Why It's Good For The Game

  1. Fixes a double-init bug which causes errors in the server console.
  2. Ensures that both science modules can do the same things, while removing duplicate and nonfunctional modules from the one that had them.
  3. please god let me have the normal analyzer i'm tired of having to eat all the fucking things I don't want to do it

Changelog

🆑
fix: removes ..() from the end of the science, engineering, ERT, and service hound modules to ensure they do not double-init
fix: removes duplicate and nonfunctional modules from the sci-hound cyborg
add: adds modules in line with the normal science drone
add: adds the portable destructive analyzer to the sci-hound
/🆑

This is my first PR. Please tell me if I've done anything wrong and what to change about it.

  • whoops forgot to smash that mf enter button

  • forgot a syringe for xenobio


Sunday 2021-10-17 21:39:20 by Peter Zijlstra

sched/core: Fix ttwu() race

Paul reported rcutorture occasionally hitting a NULL deref:

sched_ttwu_pending()
  ttwu_do_wakeup()
    check_preempt_curr() := check_preempt_wakeup()
      find_matching_se()
        is_same_group()
          if (se->cfs_rq == pse->cfs_rq) <-- BOOM

Debugging showed that this only appears to happen when we take the new code-path from commit:

2ebb17717550 ("sched/core: Offload wakee task activation if it the wakee is descheduling")

and only when @cpu == smp_processor_id(). Something which should not be possible, because p->on_cpu can only be true for remote tasks. Similarly, without the new code-path from commit:

c6e7bd7afaeb ("sched/core: Optimize ttwu() spinning on p->on_cpu")

this would've unconditionally hit:

smp_cond_load_acquire(&p->on_cpu, !VAL);

and if: 'cpu == smp_processor_id() && p->on_cpu' is possible, this would result in an instant live-lock (with IRQs disabled), something that hasn't been reported.

The NULL deref can be explained, however, if the task_cpu(p) load at the beginning of try_to_wake_up() returns an old value, and this old value happens to be smp_processor_id(). Further assume that the p->on_cpu load accurately returns 1: it really is still running, just not here.

Then, when we enqueue the task locally, we can crash in exactly the observed manner because p->se.cfs_rq != rq->cfs_rq, because p's cfs_rq is from the wrong CPU; therefore we'll iterate into the non-existent parents and NULL deref.

The closest semi-plausible scenario I've managed to contrive is somewhat elaborate (then again, actual reproduction takes many CPU hours of rcutorture, so it can't be anything obvious):

				X->cpu = 1
				rq(1)->curr = X

CPU0				CPU1				CPU2

				// switch away from X
				LOCK rq(1)->lock
				smp_mb__after_spinlock
				dequeue_task(X)
				  X->on_rq = 0
				switch_to(Z)
				  X->on_cpu = 0
				UNLOCK rq(1)->lock

								// migrate X to cpu 0
								LOCK rq(1)->lock
								dequeue_task(X)
								set_task_cpu(X, 0)
								  X->cpu = 0
								UNLOCK rq(1)->lock

								LOCK rq(0)->lock
								enqueue_task(X)
								  X->on_rq = 1
								UNLOCK rq(0)->lock

// switch to X
LOCK rq(0)->lock
smp_mb__after_spinlock
switch_to(X)
  X->on_cpu = 1
UNLOCK rq(0)->lock

// X goes sleep
X->state = TASK_UNINTERRUPTIBLE
smp_mb();			// wake X
				ttwu()
				  LOCK X->pi_lock
				  smp_mb__after_spinlock

				  if (p->state)

				  cpu = X->cpu; // =? 1

				  smp_rmb()

// X calls schedule()
LOCK rq(0)->lock
smp_mb__after_spinlock
dequeue_task(X)
  X->on_rq = 0

				  if (p->on_rq)

				  smp_rmb();

				  if (p->on_cpu && ttwu_queue_wakelist(..)) [*]

				  smp_cond_load_acquire(&p->on_cpu, !VAL)

				  cpu = select_task_rq(X, X->wake_cpu, ...)
				  if (X->cpu != cpu)
switch_to(Y)
  X->on_cpu = 0
UNLOCK rq(0)->lock

However I'm having trouble convincing myself that's actually possible on x86_64 -- after all, every LOCK implies an smp_mb() there, so if ttwu observes ->state != RUNNING, it must also observe ->cpu != 1.

(Most of the previous ttwu() races were found on very large PowerPC)

Nevertheless, this fully explains the observed failure case.

Fix it by ordering the task_cpu(p) load after the p->on_cpu load, which is easy since nothing actually uses @cpu before this.
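A userspace analogue of that ordering, written with C11 atomics — this is only an illustration of the acquire/release pairing the fix relies on, not the kernel patch itself; the field names mirror the discussion above, and the spin loop stands in for the kernel's remote wakeup queueing:

```c
#include <stdatomic.h>
#include <stdio.h>

struct task {
    atomic_int cpu;      /* which runqueue the task belongs to */
    atomic_int on_cpu;   /* 1 while still running there        */
};

/* Old CPU finishing the context switch: publish the ->cpu change
 * BEFORE dropping ->on_cpu (release). */
static void finish_switch(struct task *t, int new_cpu) {
    atomic_store_explicit(&t->cpu, new_cpu, memory_order_relaxed);
    atomic_store_explicit(&t->on_cpu, 0, memory_order_release);
}

/* Waker: load ->on_cpu FIRST (acquire).  Once it reads 0, the
 * matching ->cpu store is guaranteed visible, so the value cannot
 * be a stale CPU paired with on_cpu == 1. */
static int wake_target(struct task *t) {
    while (atomic_load_explicit(&t->on_cpu, memory_order_acquire))
        ;   /* the real code queues the wakeup remotely instead */
    return atomic_load_explicit(&t->cpu, memory_order_relaxed);
}

int main(void) {
    struct task t = { 0, 1 };
    finish_switch(&t, 2);
    printf("wake on cpu %d\n", wake_target(&t));
    return 0;
}
```

Reading ->cpu before ->on_cpu (the pre-fix order) carries no such guarantee, which is exactly how the waker could observe the old CPU.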

Fixes: c6e7bd7afaeb ("sched/core: Optimize ttwu() spinning on p->on_cpu")
Reported-by: Paul E. McKenney [email protected]
Tested-by: Paul E. McKenney [email protected]
Signed-off-by: Peter Zijlstra (Intel) [email protected]
Signed-off-by: Ingo Molnar [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Tashfin Shakeer Rhythm [email protected]


Sunday 2021-10-17 22:19:47 by ms-mirror-bot

[MIRROR] Removes dead code in techwebs, alongside some truly evil macros (#980)

  • Removes dead code in techwebs, alongside some truly evil macros (#61936)

fuck you kevin

  • Removes dead code in techwebs, alongside some truly evil macros

Co-authored-by: LemonInTheDark [email protected]


< 2021-10-17 >