2021-09-14.md

809 lines (606 loc) · 39.7 KB
< 2021-09-14 >

3,252,562 events, 1,659,834 push events, 2,533,087 commit messages, 207,207,507 characters

Tuesday 2021-09-14 00:11:25 by Suhaniallen

Create this is a cheat JK YOU JUST GOT HACKED YOU LOST $1000 MUH HA HA HA HA HA

DO NOT TRUST ARJUN


Tuesday 2021-09-14 00:12:07 by Suhaniallen

Merge pull request #1 from Suhaniallen/I-AM-A-HACKER-MUH-HA-HA-HA

Create this is a cheat JK YOU JUST GOT HACKED YOU LOST $1000 MUH HA H…


Tuesday 2021-09-14 01:50:11 by aspiazza

Working on API. Is being a pain in the ass but making my way. Will need to test optuna through the night once more to see if my fix worked.


Tuesday 2021-09-14 03:41:11 by Cooper Goddard

Sheep reeeeeally wants to fuck you up

Added animations for running and down time. Sheep Logic basically finished other than the need for Damage to the player within range.


Tuesday 2021-09-14 03:50:51 by Skissr01

Update usage.md

An Escort’s Guidelines on How To Dress Seductively

An enticing look begins in your brain and finishes with a decent appearance. The manner in which you feel will influence your whole behavior and the impressions you make. To achieve an amazing look, one that will have every single one of your clients eating out of the palm of your hand, think about what the right clothing for you might be.

Regardless of whether you will probably look provocative, secretive, charming, wonderful, or bashful, ponder what sort of garments features the most awesome aspects of your body. Each plan, shading or material conceals its own alluring mysteries that make customers' brain detonate.

Let's look together at the proven tips recommended by Gold Coast escorts, Adelaide escorts and Canberra escorts on how escorts can dress enticingly and win over their customers!

  1. Allow your beautiful legs to talk

If you have model legs (and it also helps if you exercise) and you want to look stunning, then don't hesitate to wear a short, tight dress that draws attention. The ideal choices would be dazzling bodycon and hot mini bandage dresses, as these designs are incredibly alluring.

  2. Show your cleavage

You can basically never turn out badly with flaunting your superb cleavage! Men like seeing your smoothly full bosoms that support their imagination. A smart thought is to pick a side-boob or V-neck area shirt or dress that will divulge your delightful breasts. Walk ready for business toward him and he will be all yours without further ado!

  3. Make the illusion of being stripped

Transparent and net dresses play with the men's patience and just make them insane. They empower you to flaunt your body to the most extreme while as yet being covered. Net dresses that have panels as a rule have net on the essential spots, making it difficult for spectators to remain patient. And, that is your point, so get one of them!

  4. Upgrade your waistline with flares

There is something extremely uncommon and alluring in the "girl-next-door" outfit. A few customers love charming and innocent girls that make their lives fun. You will have a hard time believing how staggeringly hot it is to them when a young lady wears a flared bandage dress that outlines her bust line. These dresses or skirts particularly look extraordinary when you move as the flares move flawlessly as well, making you look all happy and positive.

  5. Put on tempting underwear

Picking seductive lingerie isn't just significant if your date winds up in bed. Realizing that you have something attractive, luxurious and silky under, both a bra and some pleasant straps, brings on your A-game, making you look much more enchanting.

  6. Shine bright like a diamond

Assuming you need to resemble a relentless diva then, at that point find some metallic or gold dress that will make the customer's jaws drop when you go into the room. They will worship you like a sexy goddess and a delightful lady that you are. Shiny dresses just shout style, making your entire figure ravishing and attractive. Men love ladies with great taste!


Tuesday 2021-09-14 04:20:31 by Vidith Balasa

Ok, fixed most of the dumbass propagation of channel_name and game_id. I feel like I'm spending more time correcting stupid decisions I made earlier than actually building new shit, which is kind of annoying. Also I just dodged champ select and got a 12 hour q timer so that sucks too


Tuesday 2021-09-14 05:45:11 by Bailey

Merge pull request #2 from Lohmeier/bran

Create fuck you.txt


Tuesday 2021-09-14 07:52:05 by George Spelvin

lib/sort: make swap functions more generic

Patch series "lib/sort & lib/list_sort: faster and smaller", v2.

Because CONFIG_RETPOLINE has made indirect calls much more expensive, I thought I'd try to reduce the number made by the library sort functions.

The first three patches apply to lib/sort.c.

Patch #1 is a simple optimization. The built-in swap has special cases for aligned 4- and 8-byte objects. But those are almost never used; most calls to sort() work on larger structures, which fall back to the byte-at-a-time loop. This generalizes them to aligned multiples of 4 and 8 bytes. (If nothing else, it saves an awful lot of energy by not thrashing the store buffers as much.)

Patch #2 grabs a juicy piece of low-hanging fruit. I agree that nice simple solid heapsort is preferable to more complex algorithms (sorry, Andrey), but it's possible to implement heapsort with far fewer comparisons (50% asymptotically, 25-40% reduction for realistic sizes) than the way it's been done up to now. And with some care, the code ends up smaller, as well. This is the "big win" patch.
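
The "fewer comparisons" trick the patch alludes to is the classic bottom-up (Floyd/Wegener) sift-down: descend all the way to a leaf following the larger child (one comparison per level instead of two), then climb back up to where the displaced root element belongs. A minimal Python sketch of the idea, not the kernel's C implementation:

```python
def sift_down(a, root, end):
    # Phase 1: descend to a leaf, always following the larger child.
    # This costs one comparison per level instead of two.
    j = root
    while 2 * j + 1 < end:
        c = 2 * j + 1
        if c + 1 < end and a[c] < a[c + 1]:
            c += 1
        j = c
    # Phase 2: climb back up to the highest position whose value is
    # already >= the displaced root element.
    while a[root] > a[j]:
        j = (j - 1) // 2
    # Phase 3: each value on the path moves up one level; the old root
    # value lands at position j.
    x = a[j]
    a[j] = a[root]
    while j > root:
        j = (j - 1) // 2
        a[j], x = x, a[j]

def heapsort(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):   # heapify
        sift_down(a, i, n)
    for end in range(n - 1, 0, -1):       # repeatedly extract the max
        a[0], a[end] = a[end], a[0]
        sift_down(a, 0, end)
    return a
```

Most elements sift nearly all the way to the leaf level anyway, so the "descend first, fix up later" order does almost no extra work while halving the per-level comparison count.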

Patch #3 adds the same sort of indirect call bypass that has been added to the net code of late. The great majority of the callers use the builtin swap functions, so replace the indirect call to sort_func with a (highly predictable) series of if() statements. Rather surprisingly, this decreased code size, as the swap functions were inlined and their prologue & epilogue code eliminated.

lib/list_sort.c is a bit trickier, as merge sort is already close to optimal, and we don't want to introduce triumphs of theory over practicality like the Ford-Johnson merge-insertion sort.

Patch #4, without changing the algorithm, chops 32% off the code size and removes the part[MAX_LIST_LENGTH+1] pointer array (and the corresponding upper limit on efficiently sortable input size).

Patch #5 improves the algorithm. The previous code is already optimal for power-of-two (or slightly smaller) size inputs, but when the input size is just over a power of 2, there's a very unbalanced final merge.

There are, in the literature, several algorithms which solve this, but they all depend on the "breadth-first" merge order which was replaced by commit 835cc0c8477f with a more cache-friendly "depth-first" order. Some hard thinking came up with a depth-first algorithm which defers merges as little as possible while avoiding bad merges. This saves 0.2*n compares, averaged over all sizes.

The code size increase is minimal (64 bytes on x86-64, reducing the net savings to 26%), but the comments expanded significantly to document the clever algorithm.

TESTING NOTES: I have some ugly user-space benchmarking code which I used for testing before moving this code into the kernel. Shout if you want a copy.

I'm running this code right now, with CONFIG_TEST_SORT and CONFIG_TEST_LIST_SORT, but I confess I haven't rebooted since the last round of minor edits to quell checkpatch. I figure there will be at least one round of comments and final testing.

This patch (of 5):

Rather than having special-case swap functions for 4- and 8-byte objects, special-case aligned multiples of 4 or 8 bytes. This speeds up most users of sort() by avoiding fallback to the byte copy loop.
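
The generalization can be illustrated outside the kernel. A hypothetical Python sketch of word-at-a-time swapping over a byte buffer (names are illustrative, not from the patch), assuming the object size is a multiple of the word size:

```python
def swap_words(buf, a, b, size, word=8):
    """Swap two size-byte objects inside a bytearray, word at a time.
    The point of the patch: accept any size that is a *multiple* of the
    word size, not only size == 4 or size == 8."""
    assert size % word == 0
    mv = memoryview(buf)
    for off in range(0, size, word):
        lo, hi = a + off, b + off
        tmp = bytes(mv[lo:lo + word])
        mv[lo:lo + word] = bytes(mv[hi:hi + word])
        mv[hi:hi + word] = tmp
```

For a sort() over, say, 40-byte structs, this moves five 8-byte words per pair instead of forty individual bytes.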

Despite what ca96ab859ab4 ("lib/sort: Add 64 bit swap function") claims, very few users of sort() sort pointers (or pointer-sized objects); most sort structures containing at least two words. (E.g. drivers/acpi/fan.c:acpi_fan_get_fps() sorts an array of 40-byte struct acpi_fan_fps.)

The functions also got renamed to reflect the fact that they support multiple words. In the great tradition of bikeshedding, the names were by far the most contentious issue during review of this patch series.

x86-64 code size 872 -> 886 bytes (+14)

With feedback from Andy Shevchenko, Rasmus Villemoes and Geert Uytterhoeven.

Link: http://lkml.kernel.org/r/f24f932df3a7fa1973c1084154f1cea596bcf341.1552704200.git.lkml@sdf.org Signed-off-by: George Spelvin [email protected] Acked-by: Andrey Abramov [email protected] Acked-by: Rasmus Villemoes [email protected] Reviewed-by: Andy Shevchenko [email protected] Cc: Rasmus Villemoes [email protected] Cc: Geert Uytterhoeven [email protected] Cc: Daniel Wagner [email protected] Cc: Don Mullis [email protected] Cc: Dave Chinner [email protected] Signed-off-by: Andrew Morton [email protected] Signed-off-by: Linus Torvalds [email protected]


Tuesday 2021-09-14 07:52:05 by George Spelvin

lib/list_sort: optimize number of calls to comparison function

CONFIG_RETPOLINE has severely degraded indirect function call performance, so it's worth putting some effort into reducing the number of times cmp() is called.

This patch avoids badly unbalanced merges on unlucky input sizes. It slightly increases the code size, but saves an average of 0.2*n calls to cmp().

x86-64 code size 739 -> 803 bytes (+64)

Unfortunately, there's not a lot of low-hanging fruit in a merge sort; it already performs only nlog2(n) - Kn + O(1) compares. The leading coefficient is already at the theoretical limit (log2(n!) corresponds to K=1.4427), so we're fighting over the linear term, and the best mergesort can do is K=1.2645, achieved when n is a power of 2.
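
A brief justification of the constant quoted above (a standard application of Stirling's approximation, not part of the commit itself): the information-theoretic lower bound for comparison sorting is

```latex
\log_2 n! = n\log_2 n - n\log_2 e + O(\log n),
\qquad \log_2 e = \frac{1}{\ln 2} \approx 1.4427,
```

so with compares written as nlog2(n) - Kn + O(1), no comparison sort can have K larger than log2(e) ≈ 1.4427, which is exactly the limit referred to here.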

The differences between mergesort variants appear when n is not a power of 2; K is a function of the fractional part of log2(n). Top-down mergesort does best of all, achieving a minimum K=1.2408, and an average (over all sizes) K=1.248. However, that requires knowing the number of entries to be sorted ahead of time, and making a full pass over the input to count it conflicts with a second performance goal, which is cache blocking.

Obviously, we have to read the entire list into L1 cache at some point, and performance is best if it fits. But if it doesn't fit, each full pass over the input causes a cache miss per element, which is undesirable.

While textbooks explain bottom-up mergesort as a succession of merging passes, practical implementations do merging in depth-first order: as soon as two lists of the same size are available, they are merged. This allows as many merge passes as possible to fit into L1; only the final few merges force cache misses.

This cache-friendly depth-first merge order depends on us merging the beginning of the input as much as possible before we've even seen the end of the input (and thus know its size).
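
The depth-first order described above is essentially binary-counter carrying: keep at most one pending sorted run of each power-of-two size, and merge two equal-size runs as soon as both exist. A Python sketch of this eager variant (the patch's refinement, which defers each merge until 2^k further elements have been seen, is deliberately omitted here):

```python
def merge(a, b):
    """Standard stable two-way merge."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])
    out.extend(b[j:])
    return out

def bottom_up_mergesort(items):
    # pending[k] is either None or a sorted run of exactly 2**k elements.
    pending = []
    for x in items:
        run, k = [x], 0
        # Binary-counter carry: merge equal-size runs as soon as both
        # exist, while both are likely still in cache.
        while k < len(pending) and pending[k] is not None:
            run = merge(pending[k], run)
            pending[k] = None
            k += 1
        if k == len(pending):
            pending.append(None)
        pending[k] = run
    # End of input: merge the leftover runs, smallest first.
    result = []
    for run in pending:
        if run is not None:
            result = merge(result, run)
    return result
```

With n=1028 the leftover runs are of sizes 4 and 1024, and the final loop performs exactly the unbalanced 1024:4 merge the commit message complains about.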

The simple eager merge pattern causes bad performance when n is just over a power of 2. If n=1028, the final merge is between 1024- and 4-element lists, which is wasteful of comparisons. (This is actually worse on average than n=1025, because a 1024:1 merge will, on average, end after 512 compares, while 1024:4 will walk 4/5 of the list.)

Because of this, bottom-up mergesort achieves K < 0.5 for such sizes, and has an average (over all sizes) K of around 1. (My experiments show K=1.01, while theory predicts K=0.965.)

There are "worst-case optimal" variants of bottom-up mergesort which avoid this bad performance, but the algorithms given in the literature, such as queue-mergesort and boustrodephonic mergesort, depend on the breadth-first multi-pass structure that we are trying to avoid.

This implementation is as eager as possible while ensuring that all merge passes are at worst 1:2 unbalanced. This achieves the same average K=1.207 as queue-mergesort, which is 0.2n better than bottom-up, and only 0.04n behind top-down mergesort.

Specifically, it defers merging two lists of size 2^k until it is known that there are 2^k additional inputs following. This ensures that the final uneven merges triggered by reaching the end of the input will be at worst 2:1. This will avoid cache misses as long as 3*2^k elements fit into the cache.

(I confess to being more than a little bit proud of how clean this code turned out. It took a lot of thinking, but the resultant inner loop is very simple and efficient.)

Refs:

Bottom-up Mergesort: A Detailed Analysis
Wolfgang Panny, Helmut Prodinger
Algorithmica 14(4):340--354, October 1995
https://doi.org/10.1007/BF01294131
https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.6.5260

The cost distribution of queue-mergesort, optimal mergesorts, and power-of-two rules
Wei-Mei Chen, Hsien-Kuei Hwang, Gen-Huey Chen
Journal of Algorithms 30(2):423--448, February 1999
https://doi.org/10.1006/jagm.1998.0986
https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.4.5380

Queue-Mergesort
Mordecai J. Golin, Robert Sedgewick
Information Processing Letters 48(5):253--259, 10 December 1993
https://doi.org/10.1016/0020-0190(93)90088-q
https://sci-hub.tw/10.1016/0020-0190(93)90088-Q

Feedback from Rasmus Villemoes [email protected].

Link: http://lkml.kernel.org/r/fd560853cc4dca0d0f02184ffa888b4c1be89abc.1552704200.git.lkml@sdf.org Signed-off-by: George Spelvin [email protected] Acked-by: Andrey Abramov [email protected] Acked-by: Rasmus Villemoes [email protected] Reviewed-by: Andy Shevchenko [email protected] Cc: Daniel Wagner [email protected] Cc: Dave Chinner [email protected] Cc: Don Mullis [email protected] Cc: Geert Uytterhoeven [email protected] Signed-off-by: Andrew Morton [email protected] Signed-off-by: Linus Torvalds [email protected]


Tuesday 2021-09-14 07:57:49 by zayn_nissan

Added Task

You can learn a lot about algorithms by implementing them. Implementing an algorithm clarifies the problem being solved and can make complexity analysis an easier task.

You're a civil engineer and you're trying to figure out the best way to arrange for internet access in Bangladesh. There are N (3 ≤ N ≤ 250,000) towns in Bangladesh connected by M (N ≤ M ≤ 250,000) various roads, and you can walk between any two towns by traversing some sequence of roads. However, you've got a limited budget and have determined that the cheapest way to arrange for internet access is to build some fiber-optic cables along existing roadways. You have a list of the costs of laying fiber-optic cable along any road and want to figure out how much money you'll need to successfully complete the project, meaning that, at the end, every town will be connected along some sequence of fiber-optic cables.

Luckily, you're also a computer scientist, and you remember hearing about Kruskal's algorithm and Prim's algorithm in one of your old classes. These algorithms are exactly the solution to your problem. If this scenario is still not totally clear, look at the sample input description below.

Our input data describing the graph will be arranged as a list of edges (roads and their fiber-optic cost), and for our program we'll convert that to an adjacency list: for every node in the graph (town in the country), we'll have a list of the nodes (towns) it's connected to and the weight (cost of building fiber-optic cable along the road).
adj[0]→(1, 1.0)(3, 3.0)
adj[1]→(0, 1.0)(2, 6.0)(3, 5.0)(4,1.0)
.
Input Format Line 1: Two space-separated integers: N, the number of nodes in the graph, and M, the number of edges. Lines 2...M: Line i contains three space-separated numbers describing an edge: si and ti , the IDs of the two nodes involved, and wi , the weight of the edge.

CSE 373 Programming Assignment Summer 2021

Input (input.txt) :
6 9
0 1 1.0
1 3 5.0
3 0 3.0
3 4 1.0
1 4 1.0
1 2 6.0
5 2 2.0
2 4 4.0
5 4 4.0

Input Explanation: We can visualize this graph as in Figure 1; let A, B, C, ... represent the nodes with IDs 0, 1, 2, ... respectively. Looking at our input file, we can see that the second line 0 1 1.0 describes an edge between A and B of weight 1.0 in our diagram, the third line 1 3 5.0 describes the edge between B and D of weight 5.0, and so on. On the right, we can see a minimum spanning tree for our graph. Every vertex lies in one totally connected component, and the edges here sum to 1.0+1.0+1.0+4.0+2.0 = 9.0, which will be our program's output.

Output Format
Line 1: A single floating-point number printed to at least 6 decimal digits of precision, representing the total weight of a minimum spanning tree for the provided graph. Lines 2…N: Line i contains three space-separated numbers describing an edge of the minimum spanning tree: si and ti, the IDs of the two nodes involved, and wi, the weight of that edge.


Output:
9.000000
0 1 1.0
3 4 1.0
1 4 1.0
5 2 2.0
5 4 4.0
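
As a sanity check of the sample, here is a minimal Python sketch of Kruskal's algorithm with a union-find (path halving, no rank), run on the sample input above. Note that the two weight-4.0 edges tie, so an equally valid MST may contain 2 4 4.0 instead of 5 4 4.0; the total weight is 9.0 either way:

```python
def kruskal(n, edges):
    """Minimum spanning tree via Kruskal's algorithm + union-find."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    total, chosen = 0.0, []
    for s, t, w in sorted(edges, key=lambda e: e[2]):
        rs, rt = find(s), find(t)
        if rs != rt:            # edge joins two components: keep it
            parent[rs] = rt
            total += w
            chosen.append((s, t, w))
    return total, chosen

# Sample input from the assignment (6 nodes, 9 edges).
edges = [(0, 1, 1.0), (1, 3, 5.0), (3, 0, 3.0), (3, 4, 1.0),
         (1, 4, 1.0), (1, 2, 6.0), (5, 2, 2.0), (2, 4, 4.0), (5, 4, 4.0)]
total, tree = kruskal(6, edges)
print(f"{total:.6f}")           # 9.000000
for s, t, w in tree:
    print(s, t, w)
```

An MST of N nodes always has exactly N-1 edges, so the loop keeps five edges here.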

Ethical Issues

Since all of you will be doing the same assignment, experience tells us that there is a high chance of copying. Besides, this is a very common programming problem, and you can easily find the solution online.
Let us warn you that any case of plagiarism (copying) will be handled severely with nearly zero tolerance and may even result in suspension from the course irrespective of whether you were the server (source of code) or the client (who copied the code).

Submission Guidelines
• You can use any programming language for this assignment, but you must write all the functions yourself.
• You must submit all the code/files used in this assignment. There must be a file named "readme.txt" explaining how to run the code and an "analysis.txt" describing the run time of your algorithm.
• During the final viva, you will need to share the code and explain it. Please prepare accordingly.


Tuesday 2021-09-14 08:32:02 by Lisena

ok github you have my permission to shut the fuck up now you piece of shit


Tuesday 2021-09-14 09:50:46 by kiara101

9 wardrobe essentials every man should own

Wardrobe essentials are as essential as anything in the world. We are spoilt for choices, be it any of them. The biggest choice in life might be the girl you want to date; the car you want to drive or probably the world tour you've been longing for. If you would take our suggestion, we assume the biggest choices and the primary one will be your wardrobe choices.

So, get amazing outfit ideas at https://smashmart.in/


Tuesday 2021-09-14 09:55:42 by Trent W. Buck

Allow SSH into the image, trusting my keys

Wow this turned out to be messy and awful.

Lots of arguing about the least-ugly way to get URL contents into authorized_keys. It's not my favourite, but eh.

wget -O- https://github.com/{alice,bob}.keys >>root/.ssh/authorized_keys
fakeroot sh -xec 'chown -h 0:0 authorized_keys;
                  chmod 0700 authorized_keys;
                  tar -cf tmp.tar --transform=s#^#root/.ssh/# authorized_keys'

I wanted -net user,... to use JSON format, but

17:42 <stefanha> twb: No, I don't think any of the network options support JSON.
17:42 <th_huth> twb: IIRC -net does not support json

I wanted hostfwd to use a unix socket, but

17:43 <twb> Can I hostfwd to a AF_UNIX socket file instead of a AF_SOCK port?
17:47 <stefanha> twb: hostfwd forwards TCP/UDP from host to guest. guestfwd forwards TCP/UDP from the guest to a host QEMU character device or external command (e.g. netcat).
17:48 <stefanha> twb: In the guestfwd case you can probably use AF_UNIX using the guestfwd chardev syntax.
17:48 <stefanha> twb: The hostfwd case is limited to host TCP/UDP only.
17:50 <twb> Hrm.
17:50 <twb> So I could actually do something like telling qemu to just run ssh as the guestfwd command
17:51 <twb> My use case is I'm building OS images, and at the end I often want to ssh into it to poke around.  If I hard-code a hostfwd=::2022-:22  then I can't run >1 at a time
17:59 <stefanha> twb: You could use the pid of QEMU's parent process (the program that spawns QEMU) as the port number plus an offset to make sure it's a high numbered port.
17:59 <twb> Yeah I think I won't bother trying to be clever
18:00 <twb> But it sure would've been nice if I could've just said "forward 22 to ./ssh.sock" and then "ssh ./ssh.sock"
18:01 <stefanha> twb: I'm not sure ssh(1) supports connecting to a UNIX domain socket.
18:02 <asarch> One stupid question: if the AMD processor support AMD-Vi, is it shown in "Virtualization:" part of the lscpu output or is it shown as the svm flag?
18:03 <asarch> I mean, I only see AMD-V in the virtualization part and svm_lock in the flags list
18:03 <twb> stefanha: yeah that's the other reason I'm not bothering :-)
18:05 <stefanha> twb: Maybe -netdev tap?
18:08 <twb> Doesn't that require CAP_NET_ADMIN or something?  I'm an unprivileged user.
18:14 <stefanha> twb: There are ways of using it as an unprivileged user if the host admin has set that up for you (qemu-bridge-helper or libvirt with a security policy that lets you do that). Maybe also via network namespaces.
18:14 <stefanha> twb: Maybe the bash $$ (shell pid) idea is the simplest in the end.

I wanted to use /etc/dropbear/authorized_keys, but:

16:53 <twb> You know how on OpenWRT, dropbear looks in /etc instead of /root for authorized_keys
16:53 <twb> Is that a compile-time option, or what
17:18 <russell--> twb: it appears to be hardcoded in package/network/services/dropbear/patches/100-pubkey_path.patch
17:19 <twb> Thanks
17:20 <twb> (For context, I wanted to do the same in Debian, and wasn't sure if it was an OpenWRT-ism, or an upstream feature that was undocumented)
17:21 <SignumFera> twb: As far as I know openssh also have the config option to look at the authorized_keys in /etc/openssh/
17:21 <twb> SignumFera: they do.  I am using dropbear though
17:21 <SignumFera> Aaah. the penny drops for me
17:21 <twb> Well ACTUALLY plan A is to use tinysshd
17:21 <twb> then dropbear, then openssh if all else fails
17:22 <SignumFera> Out of curiocity, why not use openssh? Too heavy?
17:22 <twb> Yes, and it supports things I then have to disable, like passwords
17:23 <SignumFera> Makes sense
17:23 <twb> Oh and openssh also needs a hack to only make host keys when it should

PLEASE can I just move on to the next one-liner?

This literally started out as this:

ps=(openssh-server ...)
>$t/root/.ssh/authorized_keys curl http://www.cyber.com.au/~{twb,mattcen,russm,mike,ron,djk}/.ssh/authorized_keys
>>$t/etc/ssh/sshd_config printf %s\\n 'PasswordAuthentication no' 'AllowUsers root'
>$t/etc/systemd/system/keygen.service   printf %s\\n [Service] Type=oneshot ExecStart='/usr/bin/ssh-keygen -A'
>>$t/lib/systemd/system/ssh.service     printf %s\\n [Unit] Wants=keygen.service After=keygen.service
>$t/etc/tmpfiles.d/lastlog.conf         echo f /var/log/lastlog 664 root utmp  # Avoid pam_lastlog.so warning on SSH
chroot $t dpkg-statoverride --update --add root root 700 /usr/bin/ssh-keygen
exclusions=(... '^etc$/^ssh$/^ssh_host_.*_key(.pub)?$' ...)

OK a BIT more than one line. Maybe I don't feel so bad about this current shit-show.


Tuesday 2021-09-14 11:10:35 by Douglas Anderson

serial: core: Allow processing sysrq at port unlock time

[ Upstream commit d6e1935819db0c91ce4a5af82466f3ab50d17346 ]

Right now serial drivers process sysrq keys deep in their character receiving code. This means that they've already grabbed their port->lock spinlock. This can end up getting in the way if we've got to do serial stuff (especially kgdb) in response to the sysrq.

Serial drivers have various hacks in them to handle this. Looking at '8250_port.c' you can see that the console_write() skips locking if we're in the sysrq handler. Looking at 'msm_serial.c' you can see that the port lock is dropped around uart_handle_sysrq_char().

It turns out that these hacks aren't exactly perfect. If you have lockdep turned on and use something like the 8250_port hack you'll get a splat that looks like:

WARNING: possible circular locking dependency detected
[...] is trying to acquire lock:
... (console_owner){-.-.}, at: console_unlock+0x2e0/0x5e4

but task is already holding lock:
... (&port_lock_key){-.-.}, at: serial8250_handle_irq+0x30/0xe4

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (&port_lock_key){-.-.}:
     _raw_spin_lock_irqsave+0x58/0x70
     serial8250_console_write+0xa8/0x250
     univ8250_console_write+0x40/0x4c
     console_unlock+0x528/0x5e4
     register_console+0x2c4/0x3b0
     uart_add_one_port+0x350/0x478
     serial8250_register_8250_port+0x350/0x3a8
     dw8250_probe+0x67c/0x754
     platform_drv_probe+0x58/0xa4
     really_probe+0x150/0x294
     driver_probe_device+0xac/0xe8
     __driver_attach+0x98/0xd0
     bus_for_each_dev+0x84/0xc8
     driver_attach+0x2c/0x34
     bus_add_driver+0xf0/0x1ec
     driver_register+0xb4/0x100
     __platform_driver_register+0x60/0x6c
     dw8250_platform_driver_init+0x20/0x28
     ...

-> #0 (console_owner){-.-.}:
     lock_acquire+0x1e8/0x214
     console_unlock+0x35c/0x5e4
     vprintk_emit+0x230/0x274
     vprintk_default+0x7c/0x84
     vprintk_func+0x190/0x1bc
     printk+0x80/0xa0
     __handle_sysrq+0x104/0x21c
     handle_sysrq+0x30/0x3c
     serial8250_read_char+0x15c/0x18c
     serial8250_rx_chars+0x34/0x74
     serial8250_handle_irq+0x9c/0xe4
     dw8250_handle_irq+0x98/0xcc
     serial8250_interrupt+0x50/0xe8
     ...

other info that might help us debug this:

Possible unsafe locking scenario:

     CPU0                    CPU1
     ----                    ----
lock(&port_lock_key);
                             lock(console_owner);
                             lock(&port_lock_key);
lock(console_owner);

*** DEADLOCK ***

The hack used in 'msm_serial.c' doesn't cause the above splats but it seems a bit ugly to unlock / lock our spinlock deep in our irq handler.

It seems like we could defer processing the sysrq until the end of the interrupt handler right after we've unlocked the port. With this scheme if a whole batch of sysrq characters comes in one irq then we won't handle them all, but that seems like it should be a fine compromise.
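
The deferral scheme can be sketched abstractly: while holding the lock, only *record* that a sysrq character arrived, and invoke the handler after the lock is dropped. A hypothetical Python sketch of the pattern (the names and "magic" character are illustrative, not the kernel's API):

```python
import threading

port_lock = threading.Lock()
rx_buffer = []                    # characters delivered to the tty layer

SYSRQ_MAGIC = '\x12'              # stand-in for the sysrq break sequence

def handle_sysrq(ch):
    # In the kernel this may printk, enter kgdb, etc. -- all things
    # that must not run while port_lock is held.
    print(f"sysrq: {ch!r}")

def irq_handler(incoming):
    deferred = None
    with port_lock:               # the driver's port->lock critical section
        for ch in incoming:
            if ch == SYSRQ_MAGIC:
                deferred = ch     # only *record* the sysrq under the lock
            else:
                rx_buffer.append(ch)
    if deferred is not None:      # lock released: now it is safe to act
        handle_sysrq(deferred)
```

Because only one pending sysrq is remembered, a batch of sysrq characters arriving in one interrupt is not fully handled, which is the compromise the commit message accepts.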

Signed-off-by: Douglas Anderson [email protected] Signed-off-by: Greg Kroah-Hartman [email protected] Signed-off-by: Sasha Levin [email protected]


Tuesday 2021-09-14 13:18:16 by SinguloBot

[MIRROR] 1984: Suppressed Bans (#296)

  • 1984: Suppressed Bans (#5251)

  • Suppressor Permission Critical fix to ban notes

  • Confirmation alert

  • Allow Temp Bans Relevant terminology changes

  • I totally did not not do this at 3am

  • FUCK I FORGOT TO UPDATE THE UI TGUI SUCKS BUT HOLY SHIT THIS STUFF IS /AWFUL/

  • irc>tgs

  • 1984: Suppressed Bans

Co-authored-by: Gallyus [email protected]


Tuesday 2021-09-14 15:29:02 by Nikhil Benesch

*: reduce all "deny" and "forbid" lints to "warn"

Lints at the deny/forbid level can be very frustrating when developing, because you're forced to comply with the lint even when you're still prototyping, just to get your code to compile.

This commit downgrades all these lints to "warn", to optimize for the development experience. In CI, these lints are still enforced at the "deny" level by our Clippy configuration, which upgrades all warnings to hard errors.

The one glitch here is that forbid used to prevent submodules from re-enabling the forbidden lint. I propose that we solve this socially, by making it taboo to add #![allow(missing_docs)] unless you've gotten buy-in from the overall owner of that area of the codebase.


Tuesday 2021-09-14 18:05:14 by Simon-mwaura

Create ABOUT ME

About me: My name is Simon. I am a professional tutor with a diverse skill set built on six years of professional academic writing experience. I have an unending passion for freelance writing which has grown over the years. I hold an MSc in Finance and a BCom in Accounting and Finance, and I am also a Certified Public Accountant. This area of study gave me an academic experience that equipped me with analytical, problem-solving and critical-thinking skills. I have used my knowledge and expertise in this field to help students with their assignments in the related disciplines. However, as a clients' favorite writer, I have always received requests to assist in other disciplines, which I have been able to execute with success. As such, I have advanced to become an all-round accomplished writer and tutor. As a top-rated tutor/writer, I am also fully conversant with different writing styles including MLA, APA, Harvard, Chicago, OSCOLA, Oxford, AMA and IEEE.

So far, I have helped over a thousand clients with various assignments including essays, personal statements, admission essays, resumes, research projects, theses and dissertations among others. I have successfully completed more than a thousand orders and achieved a clients’ satisfaction rate of 99.9% as evidenced by positive reviews on my profile. I attribute this performance partly to my analytical skills and keenness to work. My insistence on quality writing has also earned me a reputation among my clients leading to referrals; this has positively influenced my growth as a writer. I expect my clients to realize value for their resources by presenting excellent and original papers.

On a personal level, I am friendly and ready to help with any type of work. I am also honest and diligent, hence the positive reactions from clients. My honesty and diligence have been important in maintaining long-term relations with most of my clients. I maintain smooth communication with my clients whenever there is a concern and see to it that the issue is resolved amicably. I have never been demoted, a testament to my passion and commitment. I always avoid plagiarism and ensure a client's work is delivered promptly and with the desired quality. To the client, I am the most preferred tutor/writer. Click on my profile and send me an invite to your order; I will respond shortly.

Education: Master of Science in Finance, Jomo Kenyatta University of Technology; Bachelor of Commerce in Accounting and Finance, Dedan Kimathi University of Technology, Kenya.
Languages: English, Kiswahili.


Tuesday 2021-09-14 19:51:05 by speedie

Uploaded first batch of "official channels"

I uploaded the first batch of official channels. Many more to come! Keep in mind that I can't upload everything here due to file size limits (fuck you github), so I'll mirror everything and more onto OneDrive.


Tuesday 2021-09-14 20:11:26 by Alex Crichton

Add *_unchecked variants of Func APIs for the C API

This commit is what is hopefully going to be my last installment within the saga of optimizing function calls in/out of WebAssembly modules in the C API. This is yet another alternative approach to #3345 (sorry) but also contains everything necessary to make the C API fast. As in #3345 the general idea is just moving checks out of the call path in the same style of TypedFunc.

This new strategy takes inspiration from previous attempts and effectively "just" exposes how we previously passed *mut u128 through trampolines for arguments/results. This storage format is formalized through a new ValRaw union that is exposed from the wasmtime crate. By doing this it became relatively easy to expose two new APIs:

  • Func::new_unchecked
  • Func::call_unchecked

These are the same as their checked equivalents except that they're unsafe and they work with *mut ValRaw rather than safe slices of Val. Working with these eschews type checks and such and requires callers/embedders to do the right thing.

These two new functions are then exposed via the C API with new functions, enabling C to have a fast-path of calling/defining functions. This fast path is akin to Func::wrap in Rust, although that API can't be built in C due to C not having generics in the same way that Rust has.

For some benchmarks, the benchmarks here are:

  • nop - Call a wasm function from the host that does nothing and returns nothing.
  • i64 - Call a wasm function from the host, the wasm function calls a host function, and the host function returns an i64 all the way out to the original caller.
  • many - Call a wasm function from the host, the wasm calls host function with 5 i32 parameters, and then an i64 result is returned back to the original host
  • i64 host - just the overhead of the wasm calling the host, so the wasm calls the host function in a loop.
  • many host - same as i64 host, but calling the many host function.

All numbers in this table are in nanoseconds, and this is just one measurement as well so there's bound to be some variation in the precise numbers here.

| Name | Rust | C (before) | C (after) |
|------|------|------------|-----------|
| nop | 19 | 112 | 25 |
| i64 | 22 | 207 | 32 |
| many | 27 | 189 | 34 |
| i64 host | 2 | 38 | 5 |
| many host | 7 | 75 | 8 |

The main conclusion here is that the C API is significantly faster than before when using the *_unchecked variants of APIs. The Rust implementation is still the ceiling (or floor, I guess?) for performance. The main reason that C is slower than Rust is that a little bit more has to travel through memory, whereas on the Rust side of things we can monomorphize and inline a bit more to get rid of that. Overall though, the costs are way, way down from where they were originally, and I don't plan on doing a whole lot more myself at this time. There are various things we could theoretically do that I've considered, but implementation-wise I think they'll be much more weighty.


Tuesday 2021-09-14 22:04:29 by gagan sidhu

update openvpn+openssl+unbound n shit (47451 update)

The school gym is gutted by the fire incited by founders of the Washington Redskins Startup, with their graffiti "I'M OUT PEACE" and "SO LONG SUKKAS" scrawled on the walls that reflected their confidence at the time. The fourth graders are inside during their PE period. The girls jump rope, the boys try to make baskets through a badly damaged hoop. Other boys toss basketballs at each other, while Stan, Kyle, Cartman, and Kenny sit on the burnt bleachers.

Kyle: I don't know what we're going to do? It's been like four hours and people still won't talk to us.
Kenny: (Right. What the fuck is going on?)
Cartman: You know what we gotta do, guys? [gets off the bleachers] We've gotta throw a big fuckin' party.
Kenny: (A party?!)
Cartman: Yeah! How do you make everyone like you? You have a big party and invite everyone and then everyone thinks you're cool!
Kyle: Dude, that would have to be like, the best party ever.
Cartman: Well I'm down. Between the four of us we can throw the sweetest party ever, and these assholes won't even remember us being dicks to them.
Kyle: [joins him on the floor] Hey, that might work. But it can't be a party for us.
Cartman: Right, it's gotta be an awesome party for...
Stan: [joins them on the floor] For someone that we love who needs us and that we refuse to bail on!
Cartman: What?
Kyle: No no, he's right! We've gotta make it for someone in need so that people have to go.
Cartman: We lure people in with a cause and then hit 'em over the head with the best party ever. We're gonna have pizza and cake and a sweet band!

if you ever ditch your friends the way these guys did for that snazzy startup JAB in silicon valley, just follow the plan above to make amends.

  • update openssl to v3, fucking useless thing adds ~400K to the build and i can't see any benefit. waiting for ol' assfuck to get clever and reduce the size of this POS.
  • update openvpn, dnsmasq and unbound.
  • update tor and asterisk for the bigger builds

< 2021-09-14 >