1,710,818 events recorded by gharchive.org of which 1,710,818 were push events containing 2,690,706 commit messages that amount to 196,505,753 characters filtered with words.py@e23d022007... to these 41 messages:
Exploration PP - Reworks Outpost Nuke Announcement (#450)
-
Fuck you, die
-
Update nuke_ruin.dm
Bagel Update 13.7 (#8690)
-
fuck shit ass shit
-
Add files via upload
Updates Pubby (Fire Alarm Edition) (#43)
I stress-tested pubby on production today, and everything seemed chill. Few things though:
A) They still have a BZ Can in Xenobiology. However, this should stay until someone bothers to put in a cold chamber to Ordnance.
B) Very sparse fire alarms, and the areas are way too massive in the central primary hallways in relation to the firelocks it has. I'll rectify that now.
C) Monastery Exterior Decalling fucking sucks. I'm fixing that by using plating sand instead of whatever other sand they were using.
-
Defines Cargo Lobby Area
-
Moves stuff out from being next to firelocks
-
Split Up Chapel Areas
-
Upper Cargo Warehouse is now Drone Bay
-
Area Misc Fixes (ass line)
-
Maybe some other stuff
Update Time: Machinery to People (#2096)
- Update Time: Machinery to People
Added recorders and server racks all over the city.
Slightly changed a "Cheap Motel" near Police Dept.
Slightly changed Police Dept.
Slightly updated Chemical Factory and Weather Station.
- Update time: small fixes
Changed servers on the Power Plant.
- Update Time: that god damn room in PD
I hope we're done with it.
- Update Time: small fix
Removed a potential feature with "shutter trap" in PD.
- Update Time: fixes and updating Water Treatment Station
You made me do this, Original.
- Update Time: one day the south dir comes, we'll place our stuff and go
Sometimes you get too picky
Co-authored-by: Edward Nashton [email protected]
[MIRROR] Fixes error sprites on the medical wintercoat's allowed list. [MDB IGNORE] (#13566)
- Goodbye stack/medical (#66898)
Okay, why remove them instead of giving them a sprite?
Simply put, those items are all small and there is no reason you need to quick-draw a suture/ointment, and if you do, the medical belt can carry 7. Allowed/exoslot items should be either medium/big/bulky items (Syringe gun), to make them worth the inventory space, or items that you can quick-draw multiple times (Health Analyzer), to make your life easier. Medical stacks are neither and would just get in the way if you try to quickly put them into a bag/pocket/belt; instead they go into your exoslot, where you would normally want to carry more valuable things like the syringe gun.
This doesn't feel big enough for a fix, and spending 5 seconds making a list alphabetical doesn't feel worth calling a code improvement, so I will label this as QoL, and if someone says it is a balance change I will follow you in game and keep placing shitty small items in your inventory via reverse pickpocketing.
- Fixes error sprites on the medical wintercoat's allowed list.
Co-authored-by: GuillaumePrata [email protected]
Moves the FUCKING LIGHT FIXTURES on tiles with surgery beds (#67644)
Moves around some wall objects in surgery rooms on both Meta and Box, primarily so that there aren't light fixtures on the same tiles as surgery beds. I moved a few unrelated things for QOL.
EVERY MOTHER FUCKING TIME I DO SURGERY I ALWAYS SMASH THE FUCKING LIGHT TUBE BY ACCIDENT AND IT PISSES ME THE FUCK OFF. WHY WOULD YOU PUT A THING THERE THAT JUTS OUT OVER THE FUCKING BED AND GETS IN THE WAY OF CLICKING ON THE SPACEMAN SPRITE FUCK GOD DAMN IT.
Probers gain points per kill, Myrmurs break doors.
Fuck you <3
Refactors how legs are displayed so they no longer appear above one-another when looking EAST or WEST (#66607) (#704)
So, for over 5 years, left legs have been displaying over right legs. Never noticed it? Don't blame you. Here's a nice picture provided by #20603 (Bodypart sprites render with incorrect layering), that clearly displays the issue that was happening:
It still happens to this day. Notice how the two directions don't look the same? That's because the left leg is always displayed above the right one.
Obviously, that's no good, and I was like "oh, that's a rendering issue, so there's nothing I can do about it, it's an issue with BYOND".
Until it struck me.
"What if we used a mask that would cut out the parts of the right leg, from the left leg, so that it doesn't actually look as if it's above it?"
Here I am, after about 25 hours of work, 15 of which were very painful debugging due to BYOND's icon documentation sucking ass.
So, how does it work?
Basically, we create a mask of a left leg (that'll be explained later down the line), more specifically, a cutout of JUST the WEST dir of the left leg, with every other dir being just white squares. We then cache that mask in a static list on the right leg, so we don't generate it every single time, as that can be expensive. All that happens in update_body_parts(), where I've made it so legs are handled separately, to avoid having to generate limb icons twice in a row, due to it being expensive. In that, when we generate_limb_icon() a right leg, we apply the proper left leg mask if necessary.
Now, why masking the right leg, if the issue was the left leg? Because, see, when you actually amputated someone, and gave them a leg again, it would end up being that new leg that would be displayed below the other leg. So I fixed that, by making it so that bodyparts would be sorted correctly, before the end of update_body_parts(). Which means that right legs ended up displaying above left legs, which meant that I had to change everything I had written to work on right legs rather than left legs.
I spent so much time looking up BYOND documentation for MapColors() and filters and all icon and image vars and procs, I decided to make a helper proc called generate_icon_alpha_mask(), because honestly it would've saved me at least half a day of pure code debugging if I had it before working on this refactor.
I tried to put as much documentation down as I could, because this shit messes with your brain if you spend too long looking at it. icon and image are two truly awful classes to work with, and I don't look forward to messing with them more in the future.
Anyway. It's nice, because it requires no other effort from anyone, no matter what the shape of the leg is actually like. It's all handled dynamically, and only once per type of leg, meaning that it's not actually too expensive either, which is very nice. Especially since it's very downstreams-friendly from being done this way.
It fixes #20603 (Bodypart sprites render with incorrect layering), an issue that has been around for over half a decade, as well as probably many more issues that I just didn't bother sifting through.
Plus, it just looks so much better.
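For illustration only, here is a rough Python/PIL sketch of the masking idea described above. The actual change is BYOND DM operating on /icon objects, and generate_icon_alpha_mask() works on icon states and dirs rather than RGBA images, so every name below is a stand-in rather than the real code.
import PIL.Image as Image
import PIL.ImageChops as ImageChops

def cut_out_overlap(right_leg: Image.Image, left_leg: Image.Image) -> Image.Image:
    """Make the right leg transparent wherever the left leg is opaque,
    so the two sprites can never visually stack on top of each other."""
    # both inputs are assumed to be RGBA frames of the same size (e.g. the WEST dir)
    left_alpha = left_leg.getchannel("A")
    keep = ImageChops.invert(left_alpha)                           # 255 where the left leg is empty
    new_alpha = ImageChops.multiply(right_leg.getchannel("A"), keep)
    out = right_leg.copy()
    out.putalpha(new_alpha)
    return out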
Co-authored-by: GoldenAlpharex [email protected]
Parallax but better: Smooth movement cleanup (#66567) (#724)
- Alright, so I'm optimizing parallax code so I can justify making it do a bit more work
To that end, lets make the checks it does each process event based. There's two. One is for a difference in view, which is an easy fix since I added a view setter like a year back now.
The second is something planets do when you change your z level. This gets more complicated, because we're "owned" by a client. So the only real pattern we can use to hook into the client's mob's movement is something like connect_loc_behalf.
So, I've made connect_mob_behalf. Fuck you.
This saves a proc call and some redundant logic
- Fixes random parallax stuttering
Ok so this is kinda a weird one but hear me out.
Parallax has this concept of "direction" that some areas use, mostly the shuttle transit ones. Set when you move into a new area. So of course it has a setter. If you pass it a direction that it doesn't already have, it'll start up the movement animation, and disable normal parallax for a bit to give it some time to get going.
This var is typically set to 0.
The problem is we were setting /area/space's direction to null in shuttle movement code, because of a forgotten proc arg.
Null is of course different from 0, so this would trigger a halt in parallax processing.
This causes a lot of strange stutters in parallax, mostly when you're moving between nearspace and space. It looks really bad, and I'm a bit surprised no one noticed.
I've fixed it, and added a default arg to the setter to prevent this class of issue in future. Things look a good bit nicer this way
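A tiny Python sketch of the failure mode and the guard described above (the real code is DM; the names here are illustrative only):
class ParallaxOwner:
    def __init__(self):
        self.scroll_dir = 0                 # 0 means "not scrolling"; never None

    def set_scroll_dir(self, new_dir=0):    # default arg: a forgotten argument can no longer smuggle in None
        if new_dir == self.scroll_dir:
            return
        self.scroll_dir = new_dir
        if new_dir:
            self.start_scroll_animation()   # e.g. shuttle transit: begin the movement animation
        else:
            self.stop_scroll_animation()    # back to normal parallax processing

    def start_scroll_animation(self):
        pass

    def stop_scroll_animation(self):
        pass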
- Adds animation back to parallax
Ok so like, I know this was removed and "none could tell" and whatever, and in fairness this animation method is a bit crummy.
What we really want to do is eliminate "halts" and "jumps" in the parallax movement. So it should be smooth.
As it is on live now, this just isn't what happens; you get jumping between offsets. Looks, frankly, horrible. Especially on the station.
Just what I've done won't be enough however, because what we need to do is match our parallax scroll speed with our current glide speed. I need to figure out how to do this well, and I have a feeling it will involve some system of managing glide sources.
Anyway for now the animation looks really nice for ghosts with default (high) settings, since they share the same delay.
I've done some refactoring to how old animation code worked pre (4b04f9012d1763df625e9e4ae75e4cf4bd1f3771). Two major changes tho.
First, instead of doing all the animate checks each time we loop over a layer, we only do the layer-dependent ones. This saves a good bit of time.
Second, we animate movement on absolute layers too. They're staying in the same position, but they still move on the screen, so we do the same gentle leaning. This has a very nice visual effect.
Oh and I cleaned up some of the code slightly.
Co-authored-by: LemonInTheDark [email protected]
HOLY SHIT SHUT UP (#742)
-
HOLY SHIT SHUT UP
-
Apply suggestions from code review
-
seeba sauce
order screen is working thank GOD
still need to set up the state flow and all that other shit but jesus christ this was a pain
random: use linear min-entropy accumulation crediting
commit c570449094844527577c5c914140222cb1893e3f upstream.
30e37ec516ae ("random: account for entropy loss due to overwrites") assumed that adding new entropy to the LFSR pool probabilistically cancelled out old entropy there, so entropy was credited asymptotically, approximating Shannon entropy of independent sources (rather than a stronger min-entropy notion) using 1/8th fractional bits and replacing a constant 2-2/√𝑒 term (~0.786938) with 3/4 (0.75) to slightly underestimate it. This wasn't superb, but it was perhaps better than nothing, so that's what was done. Which entropy specifically was being cancelled out and how much precisely each time is hard to tell, though as I showed with the attack code in my previous commit, a motivated adversary with sufficient information can actually cancel out everything.
Since we're no longer using an LFSR for entropy accumulation, this probabilistic cancellation is no longer relevant. Rather, we're now using a computational hash function as the accumulator and we've switched to working in the random oracle model, from which we can now revisit the question of min-entropy accumulation, which is done in detail in https://eprint.iacr.org/2019/198.
Consider a long input bit string that is built by concatenating various smaller independent input bit strings. Each one of these inputs has a designated min-entropy, which is what we're passing to credit_entropy_bits(h). When we pass the concatenation of these to a random oracle, it means that an adversary trying to receive back the same reply as us would need to become certain about each part of the concatenated bit string we passed in, which means becoming certain about all of those h values. That means we can estimate the accumulation by simply adding up the h values in calls to credit_entropy_bits(h); there's no probabilistic cancellation at play like there was said to be for the LFSR. Incidentally, this is also what other entropy accumulators based on computational hash functions do as well.
So this commit replaces credit_entropy_bits(h) with essentially total = min(POOL_BITS, total + h), done with a cmpxchg loop as before.
What if we're wrong and the above is nonsense? It's not, but let's assume we don't want the actual behavior of the code to change much. Currently that behavior is not extracting from the input pool until it has 128 bits of entropy in it. With the old algorithm, we'd hit that magic 128 number after roughly 256 calls to credit_entropy_bits(1). So, we can retain more or less the old behavior by waiting to extract from the input pool until it hits 256 bits of entropy using the new code. For people concerned about this change, it means that there's not that much practical behavioral change. And for folks actually trying to model the behavior rigorously, it means that we have an even higher margin against attacks.
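A toy Python model of the crediting rule described above (the kernel does this with a cmpxchg loop on an atomic counter; POOL_BITS and the 256-bit extraction threshold come from the commit text, everything else is illustrative):
POOL_BITS = 256  # the input pool saturates at 256 bits of credited min-entropy

class InputPool:
    def __init__(self):
        self.entropy_bits = 0

    def credit_entropy_bits(self, h):
        # linear min-entropy accumulation: credits simply add up, saturating at POOL_BITS
        self.entropy_bits = min(POOL_BITS, self.entropy_bits + h)

    def ready_for_first_extraction(self):
        # roughly matches the old behaviour of ~256 calls to credit_entropy_bits(1)
        return self.entropy_bits >= POOL_BITS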
Cc: Theodore Ts'o [email protected] Cc: Dominik Brodowski [email protected] Cc: Greg Kroah-Hartman [email protected] Reviewed-by: Eric Biggers [email protected] Reviewed-by: Jean-Philippe Aumasson [email protected] Signed-off-by: Jason A. Donenfeld [email protected] Signed-off-by: Greg Kroah-Hartman [email protected]
Update git submodules
- Update kolla-ansible from branch 'master'
to 49422da8d90aa74c6a3aaf3ada66c6519aee4d5a
-
Merge "[CI] Nullify attempts"
-
[CI] Nullify attempts
Per Clark Boylan's feedback [1], retries cause a retry not only for pre playbook failures but also for cases where Ansible detects network connectivity issues and they are caused by disks getting filled to their fullest. This is an issue we experience that sometimes results in a POST_FAILURE but certain FAILUREs are retried which wastes CI resources. The problematic jobs are ceph jobs. They are to be looked into. Backport to all branches. We can adjust retries for the core jobs that do not exhibit the nasty behaviour but first we can try running without retries to measure the troublesomeness.
[1] https://review.opendev.org/c/openstack/kolla-ansible/+/843536
Change-Id: I32fc296083b4881e8f457f4235a32f94ed819d9f
-
random: credit cpu and bootloader seeds by default
This commit changes the default Kconfig values of RANDOM_TRUST_CPU and RANDOM_TRUST_BOOTLOADER to be Y by default. It does not change any existing configs or change any kernel behavior. The reason for this is several fold.
As background, I recently had an email thread with the kernel maintainers of Fedora/RHEL, Debian, Ubuntu, Gentoo, Arch, NixOS, Alpine, SUSE, and Void as recipients. I noted that some distros trust RDRAND, some trust EFI, and some trust both, and I asked why or why not. There wasn't really much of a "debate" but rather an interesting discussion of what the historical reasons have been for this, and it came up that some distros just missed the introduction of the bootloader Kconfig knob, while another didn't want to enable it until there was a boot time switch to turn it off for more concerned users (which has since been added). The result of the rather uneventful discussion is that every major Linux distro enables these two options by default.
While I didn't have really too strong of an opinion going into this thread -- and I mostly wanted to learn what the distros' thinking was one way or another -- ultimately I think their choice was a decent enough one for a default option (which can be disabled at boot time). I'll try to summarize the pros and cons:
Pros:
-
The RNG machinery gets initialized super quickly, and there's no messing around with subsequent blocking behavior.
-
The bootloader mechanism is used by kexec in order for the prior kernel to initialize the RNG of the next kernel, which increases the entropy available to early boot daemons of the next kernel.
-
Previous objections related to backdoors centered around Dual_EC_DRBG-like kleptographic systems, in which observing some amount of the output stream enables an adversary holding the right key to determine the entire output stream.
This used to be a partially justified concern, because RDRAND output was mixed into the output stream in varying ways, some of which may have lacked pre-image resistance (e.g. XOR or an LFSR).
But this is no longer the case. Now, all usage of RDRAND and bootloader seeds go through a cryptographic hash function. This means that the CPU would have to compute a hash pre-image, which is not considered to be feasible (otherwise the hash function would be terribly broken).
-
More generally, if the CPU is backdoored, the RNG is probably not the realistic vector of choice for an attacker.
-
These CPU or bootloader seeds are far from being the only source of entropy. Rather, there is generally a pretty huge amount of entropy, not all of which is credited, especially on CPUs that support instructions like RDRAND. In other words, assuming RDRAND outputs all zeros, an attacker would still have to accurately model every single other entropy source also in use.
-
The RNG now reseeds itself quite rapidly during boot, starting at 2 seconds, then 4, then 8, then 16, and so forth, so that other sources of entropy get used without much delay.
-
Paranoid users can set random.trust_{cpu,bootloader}=no in the kernel command line, and paranoid system builders can set the Kconfig options to N, so there's no reduction or restriction of optionality.
-
It's a practical default.
-
All the distros have it set this way. Microsoft and Apple trust it too. Bandwagon.
Cons:
-
RDRAND could still be backdoored with something like a fixed key or limited space serial number seed or another indexable scheme like that. (However, it's hard to imagine threat models where the CPU is backdoored like this, yet people are still okay making any computations with it or connecting it to networks, etc.)
-
RDRAND could be defective, rather than backdoored, and produce garbage that is in one way or another insufficient for crypto.
-
Suggesting a reduction in paranoia, as this commit effectively does, may cause some to question my personal integrity as a "security person".
-
Bootloader seeds and RDRAND are generally very difficult if not altogether impossible to audit.
Keep in mind that this doesn't actually change any behavior. This is just a change in the default Kconfig value. The distros already are shipping kernels that set things this way.
Ard made an additional argument in [1]:
We're at the mercy of firmware and micro-architecture anyway, given
that we are also relying on it to ensure that every instruction in
the kernel's executable image has been faithfully copied to memory,
and that the CPU implements those instructions as documented. So I
don't think firmware or ISA bugs related to RNGs deserve special
treatment - if they are broken, we should quirk around them like we
usually do. So enabling these by default is a step in the right
direction IMHO.
In [2], Phil pointed out that having this disabled masked a bug that CI otherwise would have caught:
A clean 5.15.45 boots cleanly, whereas a downstream kernel shows the
static key warning (but it does go on to boot). The significant
difference is that our defconfigs set CONFIG_RANDOM_TRUST_BOOTLOADER=y.
Defining that on top of multi_v7_defconfig demonstrates the issue on
a clean 5.15.45. Conversely, not setting that option in a
downstream kernel build avoids the warning
[1] https://lore.kernel.org/lkml/CAMj1kXGi+ieviFjXv9zQBSaGyyzeGW_VpMpTLJK8PJb2QHEQ-w@mail.gmail.com/ [2] https://lore.kernel.org/lkml/[email protected]/
Cc: Theodore Ts'o [email protected] Reviewed-by: Ard Biesheuvel [email protected] Signed-off-by: Jason A. Donenfeld [email protected]
gdb/testsuite: remove definition of true/false from gdb_compiler_info
Since pretty much forever the get_compiler_info function has included these lines:
# Most compilers will evaluate comparisons and other boolean
# operations to 0 or 1.
uplevel \#0 { set true 1 }
uplevel \#0 { set false 0 }
These define global variables true (to 1) and false (to 0).
It seems odd to me that these globals are defined in get_compiler_info, I guess maybe the original thinking was that if a compiler had different true/false values then we would detect it there and define true/false differently.
I don't think we should be bundling this logic into get_compiler_info, it seems weird to me that in order to use $true/$false a user needs to first call get_compiler_info.
It would be better I think if each test script that wants these variables just defined them itself, if in the future we did need different true/false values based on compiler version then we'd just do:
if { [test_compiler_info "some_pattern"] } {
    # Defined true/false one way...
} else {
    # Defined true/false another way...
}
But given the current true/false definitions have been in place since at least 1999, I suspect this will not be needed any time soon.
Given that the definitions of true/false are so simple, right now my suggestion is just to define them in each test script that wants them (there's not that many). If we ever did need more complex logic then we can always add a function in gdb.exp that sets up these globals, but that seems overkill for now.
There should be no change in what is tested after this commit.
"10:40pm. Got up 15m. Yeah, I know that I am slacking a bit too much. Got some good replies and I've been going over them.
///
It's theoretically possible to do these tasks, but it would require quite a few changes to the training scheme and would probably be quite a lot of work.
Luckily for style transfer, though, the unconditional model already suffices!
The input to the model is an image of pure Gaussian noise: im = torch.randn(1, 3, 256, 256)
However, you can also just input an image and scale it to look like Gaussian noise:
im = to_tensor(resize(PIL.Image.open(filename), 256)).unsqueeze(0)
im -= im.mean()
im /= im.std()
To add a content-style tradeoff you can add extra noise to the image:
style_weight = 5
im = (im + torch.randn_like(im) * style_weight) / (1 + style_weight)
You can see an example where I've applied the above to a video here.
///
I didn't expect to get an answer in the repo itself.
https://twitter.com/wavefunk_/status/1513856457455898629
Here is the link.
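Stitching the quoted snippets together, a minimal sketch might look like the following; sample() stands in for whatever sampling entry point the unconditional diffusion model actually exposes, so only the preprocessing lines come from the reply above.
import PIL.Image
import torch
from torchvision.transforms.functional import resize, to_tensor

# load a content image and rescale it so it statistically resembles unit Gaussian noise
im = to_tensor(resize(PIL.Image.open("content.jpg"), 256)).unsqueeze(0)
im -= im.mean()
im /= im.std()

# blend with real noise: higher style_weight means less content preserved, more stylization
style_weight = 5
im = (im + torch.randn_like(im) * style_weight) / (1 + style_weight)

# out = sample(model, im)   # hypothetical: feed the pseudo-noise image to the model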
11am. Also regarding the issue of conditioning.
If you read my article above, I try to touch on the topic of text based generation and the changes that are required to the model for the same. Though there is no code in the article itself, I would recommend you to check the official implementations for GLIDE or the GLID-3 to get more understanding of how the conditioning works.
Just like position, it seems they project the text vector onto the hidden layers.
https://github.com/openai/glide-text2im https://github.com/Jack000/glid-3
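Here is a rough Python sketch of what "project the text vector onto the hidden layers" could look like. This is not GLIDE's actual code, just the general pattern of injecting a pooled text embedding the way timestep embeddings are usually added inside a UNet block.
import torch
from torch import nn

class ConditionedBlock(nn.Module):
    def __init__(self, channels: int, text_dim: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.text_proj = nn.Linear(text_dim, channels)   # project the text vector to the channel dim

    def forward(self, h: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        # h: (batch, channels, H, W); text_emb: (batch, text_dim) pooled text vector
        cond = self.text_proj(text_emb)[:, :, None, None]  # broadcast over the spatial dims
        return self.conv(h) + cond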
11:15am. Had to take a short break. Let me clone the glide repo. It does not seem to be that big.
I really should not be spending my energy on this, but my curiosity is killing me.
11:50am. I did a basic skim. In order to understand it further I should download the models and walk myself through the code with the debugger. I am not really interested in that. I do not have enough incentive to do that just yet. This model is quite complicated. It is not straightforward like the HuggingFace article.
OpenAI puts a lot of effort into finding new things, but not enough effort into simplifying them. If I wait a year or two, I bet I'd be able to find something much simpler that works just as well.
Nevermind this for now. My curiosity is sated.
12:05pm. Right now I am reading Legendary Mechanic. Let me have breakfast here. After that I'll really focus on art."
MAINT: add SCIPY_USE_PROPACK env variable (#16361)
- this is effectively a forward port and modernization of the release branch PROPACK shims that were added in gh-15432; in short, PROPACK + Windows + some linalg backends was causing all sorts of trouble, and this has never been resolved
- I've switched to SCIPY_USE_PROPACK instead of USE_PROPACK for the opt-in, since this was requested, though the change between release branches may cause a little confusion (another release note adjustment to add maybe)
- I think the issues are painful to reproduce; for my part, I did the following just to check the proper skipping/accounting of tests:
SCIPY_USE_PROPACK=1 python dev.py -j 20 -t scipy/sparse/linalg
932 passed, 172 skipped, 8 xfailed in 115.57s (0:01:55)
python dev.py -j 20 -t scipy/sparse/linalg
787 passed, 317 skipped, 8 xfailed in 114.80s (0:01:54)
- why am I doing this now? well, to be frank the process of manually backporting this for each release is error-prone, and may cause additional confusion/debate, which I'd like to avoid. Besides, if it is broken in main we may as well have the shims there as well. I would point out that you may want to add SCIPY_USE_PROPACK to 1 or 2 jobs in CI? The other reason is that if usage of PROPACK spreads, I don't want to be manually applying more skips/shims on each release (which I already had to do here with two new tests it seems)
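For reference, the opt-in looks roughly like this from the user side (assuming, as the shims do, that the environment variable is checked when scipy.sparse.linalg is imported):
import os
os.environ["SCIPY_USE_PROPACK"] = "1"   # must be set before scipy.sparse.linalg is imported

import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

A = sparse_random(100, 80, density=0.05, random_state=0)
u, s, vt = svds(A, k=5, solver="propack")   # PROPACK-backed truncated SVD
print(np.sort(s))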
gdb: native target invalid architecture detection
If GDB is asked to start a new inferior, or attach to an existing process, using a binary file for an architecture that does not match the current native target, then, currently, GDB will assert. Here's an example session using current HEAD of master with GDB built for an x86-64 GNU/Linux native target, the binary being used is a RISC-V ELF:
$ ./gdb/gdb -q --data-directory ./gdb/data-directory/
(gdb) file /tmp/hello.rv32imc.x
Reading symbols from /tmp/hello.rv32imc.x...
(gdb) start
Temporary breakpoint 1 at 0x101b2: file hello.rv32.c, line 23.
Starting program: /tmp/hello.rv32imc.x
../../src/gdb/gdbarch.h:166: internal-error: gdbarch_tdep: Assertion `dynamic_cast<TDepType *> (tdep) != nullptr' failed.
A problem internal to GDB has been detected, further debugging may prove unreliable.
The same error is encountered if, instead of starting a new inferior, the user tries to attach to an x86-64 process with a RISC-V binary set as the current executable.
These errors are not specific to the x86-64/RISC-V pairing I'm using here, any attempt to use a binary for one architecture with a native target of a different architecture will result in a similar error.
Clearly, attempting to use this cross-architecture combination is a user error, but I think GDB should do better than an assert; ideally a nice error should be printed.
The problem we run into is that, when the user starts a new inferior, or attaches to an inferior, the inferior stops. At this point GDB attempts to handle the stop, and this involves reading registers from the inferior.
These register reads end up being done through the native target, so in the example above, we end up in the amd64_supply_fxsave function. However, these functions need a gdbarch. The gdbarch is fetched from the register set, which was constructed using the gdbarch from the binary currently in use. And so we end up in amd64_supply_fxsave using a RISC-V gdbarch.
When we call:
i386_gdbarch_tdep *tdep = gdbarch_tdep<i386_gdbarch_tdep> (gdbarch);
this will assert as the gdbarch_tdep data within the RISC-V gdbarch is of the type riscv_gdbarch_tdep not i386_gdbarch_tdep.
The solution I propose in this commit is to add a new target_ops method supports_architecture_p. This method will return true if a target can safely be used with a specific architecture, otherwise, the method returns false.
I imagine that a result of true from this method doesn't guarantee that GDB can start an inferior of a given architecture, it just means that GDB will not crash if such an attempt is made. A result of false is a hard stop; attempting to use this target with this architecture is not supported, and may cause GDB to crash.
This distinction is important I think for things like remote targets, and possibly simulator targets. We might imagine that GDB can ask a remote (or simulator) to start with a particular executable, and the target might still refuse for some reason. But my thinking is that these refusals should be well handled (i.e. GDB should give a user friendly error), rather than crashing, as is the case with the native targets.
For example, if I start gdbserver on an x86-64 machine like this:
gdbserver --multi :54321
Then use GDB to try and load a RISC-V binary, like this:
$ ./gdb/gdb -q --data-directory ./gdb/data-directory/
(gdb) file /tmp/hello.rv32imc.x
Reading symbols from /tmp/hello.rv32imc.x...
(gdb) target extended-remote :54321
Remote debugging using :54321
(gdb) run
Starting program: /tmp/hello.rv32imc.x
Running the default executable on the remote target failed; try "set remote exec-file"?
(gdb)
Though the error is not very helpful in diagnosing the problem, we can see that GDB has not crashed, but has given the user an error.
And so, the supports_architecture_p method is created to return true by default, then I override this in inf_child_target, where I compare the architecture in question with the default_bfd_arch.
Finally, I've added calls to supports_architecture_p for the run (which covers run, start, starti) and attach commands.
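The shape of the hook, reduced to a language-neutral Python sketch (the real thing is a C++ virtual on target_ops; the names follow the commit text but the code below is only an illustration):
class Target:
    def supports_architecture_p(self, arch: str) -> bool:
        # by default, claim support: a true result only promises "won't crash";
        # the target may still refuse to run the inferior for other reasons
        return True

class InfChildTarget(Target):
    def __init__(self, default_bfd_arch: str):
        self.default_bfd_arch = default_bfd_arch

    def supports_architecture_p(self, arch: str) -> bool:
        # native targets only ever speak the architecture they were built for
        return arch == self.default_bfd_arch

def run_command(target: Target, exec_arch: str) -> None:
    if not target.supports_architecture_p(exec_arch):
        raise RuntimeError("selected architecture is not supported by this target")
    # ... proceed with run/start/starti/attach ...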
This leaves just one question, what about native targets that support multiple architectures?
These targets can be split into two groups. First, we have targets like x86-64, which also supports i386 binaries. This case is easy to handle, as far as BFD is concerned there is only one architecture, bfd_arch_i386, and we then use machine types to split this architecture into x86-64 and i386 (and others). As the new supports_architecture_p function only checks the bfd architecture, then there is nothing additional needed for this case.
The second group of multi-architecture targets requires more work. The only targets that I'm aware of that fall into this group are the rs6000-aix-nat.c, ppc-*-nat.c targets, and the aarch64-linux-nat.c target.
The first group (rs6000/ppc) supports bfd_arch_rs6000 and bfd_arch_powerpc, while the second (aarch64) supports bfd_arch_arm and bfd_arch_aarch64.
To deal with these targets I have overridden the supports_architecture_p function in each of the separate target files, these overrides check both of the supported architectures.
One final note, in the rs6000/ppc case, the FreeBSD target supports both architectures, and so we override supports_architecture_p. In contrast, the aarch64_fbsd_nat_target target does not (yet) support bfd_arch_arm, and so there is no supports_architecture_p here. This can always be added later if/when support is added.
You will notice a lack of tests for this change. I'm not sure of a good way that I can build a binary for a different architecture as part of a test, but if anyone has any ideas then I'll be happy to add a test here. The gdb.base/multi-arch.exp test exists, which for AArch64 will test compiling and running something as both AArch64 and ARM, but this doesn't cover the error case, just that the overridden supports_architecture_p works in that case.
Fixes Massive Radio Overtime, Implements a Spatial Grid System for Faster Searching Over Areas (#61422)
a month or two ago I realized that on master the reason why get_hearers_in_view() overtimes so much (ie one of our highest overtiming procs at highpop) is because when you transmit a radio signal over the common channel, it can take ~20 MILLISECONDS, which isn't good when 1. player verbs and commands usually execute after SendMaps processes for that tick, meaning they can execute AFTER the tick was supposed to start if master is overloaded and there's a lot of maptick 2. each of our server ticks is only 50 ms, so I started on optimizing this.
the main optimization was SSspatial_grid which allows searching through 15x15 spatial_grid_cell datums (one set for each z level) far faster than iterating over movables in view() to look for what you want. now all hearing sensitive movables in the 5x5 areas associated with each spatial_grid_cell datum are stored in the datum (so are client mobs). when you search for one of the stored "types" (hearable or client mob) in a radius around a center, it just needs to
iterate over the cell datums in range
add the content type you want from the datums to a list
subtract contents that arent in range, then contents not in line of sight
return the list
from benchmarks, this makes short range searches like what is used with radio code (it goes over every radio connected to a radio channel that can hear the signal then calls get_hearers_in_view() to search in the radios canhear_range which is at most 3) about 3-10 times faster depending on workload. the line of sight algorithm scales well with range but not very well if it has to check LOS to > 100 objects, which seems incredibly rare for this workload, the largest range any radio in the game searches through is only 3 tiles
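As a rough Python sketch of the lookup described in the steps above (the real implementation is DM datums under SSspatial_grid; the cell size and class names here are illustrative only):
from collections import defaultdict

CELL_SIZE = 15  # tiles per grid cell, purely illustrative

def cell_key(x, y, z):
    return (x // CELL_SIZE, y // CELL_SIZE, z)

class SpatialGrid:
    def __init__(self):
        self.cells = defaultdict(set)   # cell key -> hearing-sensitive movables in that cell

    def add(self, movable):
        self.cells[cell_key(movable.x, movable.y, movable.z)].add(movable)

    def remove(self, movable):
        self.cells[cell_key(movable.x, movable.y, movable.z)].discard(movable)

    def search(self, cx, cy, cz, radius):
        found = set()
        # 1. iterate over the cell datums in range
        for kx in range((cx - radius) // CELL_SIZE, (cx + radius) // CELL_SIZE + 1):
            for ky in range((cy - radius) // CELL_SIZE, (cy + radius) // CELL_SIZE + 1):
                # 2. add the stored contents of each cell to the candidate set
                found |= self.cells.get((kx, ky, cz), set())
        # 3. subtract contents that aren't actually in range (line-of-sight pruning omitted here)
        return [m for m in found if abs(m.x - cx) <= radius and abs(m.y - cy) <= radius]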
the second optimization is to enforce complex setter vars for radios that remove them from the global radio list if they couldn't actually receive any radio transmissions from a given frequency in the first place.
the third optimization I did was massively reduce the number of hearables on the station by making hologram projectors not hear if they don't have an active call/anything that would make them need hearing. so one of the most common non player hearables that require view iteration to find is crossed out.
also implements a variation of an idea oranges had on how to speed up get_hearers_in_view() now that I've realized that view() can't be replicated by a raycasting algorithm. it distributes pregenerated abstract /mob/oranges_ear instances to all hearables in range such that there's at max one per turf and then iterates through only those mobs to take advantage of type-specific view() optimizations and just adds up the references in each one to create the list of hearing atoms, then puts the oranges_ear mobs back into nullspace. this is about 2x as fast as the get_hearers_in_view() on master
holy FUCK it's fast. like really fucking fast. the only costly part of the radio transmission pipeline I don't touch is mob/living/Hear() which takes ~100 microseconds on live, but searching through every radio in the world with get_hearers_in_radio_ranges() -> get_hearers_in_view() is much faster, as well as the filtering radios step
the spatial grid searching proc is about 36 microseconds/call at 10 range and 16 microseconds at 3 range in the captains office (relatively many hearables in view), the new get_hearers_in_view() was 4.16 times faster than get_hearers_in_view_old() at 10 range and 4.59 times faster at 3 range
SSspatial_grid could be used for a lot more things other than just radio and say code, I just didn't implement it. for example since the cells are datums you could get all cells in a radius, register for new objects entering them, then activate when a player enters your radius. this is something that would otherwise require either very expensive view() calls or iterating over every player in the global list and calling get_dist() on them, which isn't that expensive but is still worse than it needs to be
on normal get_hearers_in_view cost the new version that uses /mob/oranges_ear instances is about 2x faster than the old version, especially since the number of hearing sensitive movables has been brought down dramatically.
with get_hearers_in_view_oranges_ear() being the benchmark proc that implements this system and get_hearers_in_view() being a slightly optimized version of the version we have on master, get_hearers_in_view_as() being a more optimized version of the one we have on master, and get_hearers_in_LOS() being the raycasting version currently only used for radios because it cant replicate view()'s behavior perfectly.
(cherry picked from commit d005d76f0bd201060b6ee515678a4b6950d9f0eb)
Update Comments and Adjusts Incorrect Variables for Map Defines and Map Config (#66540)
Hey there,
These comments were really showing their age, and they gave the false impression that nothing had changed (there was a fucking City of Cogs mention in this comment!). I rewrote a bit of that, and included a blurb about using the in-game verb for Z-Levels so people don't get the wrong impressions of this quick-reference comment (they always do).
I also snooped around map_config.dm and I found some irregularities and rewrote the comments there to be a bit more readable (in my opinion). Do tell me if I'm a cringe bastard for writing what I did.
Also, we were using the Box whiteship/emergency shuttle if we were missing the MetaStation JSON. Whoops, let's make sure that's fixed.
People won't have to wander in #coding-general/#mapping-general asking "WHAT Z-LEVEL IS X ON???". It's now here for quick reference, as well as a long-winded section on why you shouldn't trust said quick reference.
haha what if we fundamentally didn't understand inheritance wouldn't that be fucking hilarious
Improve UI on mobile (#19546)
Start making the mobile experience not painful and be actually usable. This contains a few smaller changes to enhance this experience.
- Submit buttons on the review forms aren't columns anymore and are now allowed to be displayed on one row.
- The label/milestone & New Issue buttons were each given their own row even though there's enough space to put them on the same row. This commit fixes that.
- The Issues + Pulls tab on repos has a third item besides the label/milestone & New Issue buttons: the search bar. On desktop there's enough space to do this on one row; on mobile there isn't, and currently each item got its own row. This commit fixes that by only giving the search bar a new row and keeping the other two buttons on the same row.
- The notification table will now show a scrollbar instead of overflowing.
- The repo buttons (Watch, Star, Fork) on mobile were showing quite big and the SVG wasn't even displayed on the same line; if the counts of those numbers were too high it would even overflow. This commit removes the SVG, as there isn't any place to show it on the same row, and allows the buttons to wrap onto a new row if their counts are high.
- The admin page can show you a lot of interesting information; on mobile the System Status + Configuration weren't properly displayed as the margins were too high. This commit fixes that by reducing the margin to a number that makes sense on mobile.
- Fixes to not overflow the tables but instead force them to be scrollable.
- When viewing an issue or pull request, the comments aren't full-width but instead 80% and aligned to the right; on mobile this is an annoyance as there isn't much width to begin with. This commit fixes that by forcing full-width and removing the avatars on the left side, instead including them inline in the comment header.
I've been saying for the past week that we basically already shipped soft focus, but not technically. I'd like to clarify what I meant by that, because it's confusing. Let's start from the beginning-ish.
I started a document on April 26th describing an underlying cause for an entire family of problems that I was seeing in the devtools UI. Around that same time, we were also seeing a lot of crazy behavior with regions loading and unloading (I'm gonna come back to this). The big takeaway from that document was that we needed methods for focusing at slices more specific than regions. We did a lot of work to support this.
- We added range parameters to backend endpoints, so that front-end requests could send the range they were interested in, rather than always addressing the entirety of all loaded regions
- We reworked how we run, monitor, and store the results of analyses generally
- We built two completely new backend endpoints for converting from times to TimeStampedPoints
- We built a thunk which observed the focus mode, and decided when it should attempt to reload some resources (starting with console messages, and then analyses) based on the newly chosen window and previous fetches
I'd like to pause at that last one. In many ways, that was the ultimate goal at the time I wrote that document. It solves all "edge-piece" and "overflow" problems, and while we can go even farther in the direction of efficiency, I'm quite happy with the soundness of the foundation we've laid, and soundness was the motivator for this project.
We shipped the first version of everything up above one week ago, and sent it out with the newsletter. However, remember how I said that back when I was first scoping this project we were having a lot of trouble with loading and unloading regions? Well, one of the things that I realized early on was that if we correctly handled edge-pieces, it no longer mattered how big the edge pieces were. And if it does not matter how big the edge pieces are, then there's no reason to forcibly unload parts of the recording in order to exclude them. Back when unloading and loading were causing massive headaches both for us and for users, this seemed like a huge win, so I made it the final goal of Soft Focus - be able to move your focus window around, without ever having to unload a region.
My first shot at it was kind of wonky (you'll see just how wonky in a second), and I didn't love it. And while we were busy getting everything ready so we could turn this flag on, the rest of the team shipped Turbo Replay, and all of a sudden, it did not matter that much if we forcibly loaded or unloaded regions. It's still a nice performance win I think, for users who are moving their focus window around a lot, and more importantly, we corrected a ton of soundness problems on the way here.
OK, so back to this change.
The final solution is probably best explained by this comment I wrote earlier today to explain a type (if you've been deep in the weeds on this stuff already a lot of this will sound familiar):
You might be wondering why there are multiple begins and ends for a single FocusRegion. Well, this is an artifact of the difference between world-time, which is essentially continuous (and thus, linearly interpolable), and execution time, which is both discrete (in the sense that even though there may be many points "between" points 100 and 200, there may not be any points that are valid for operations like Pausing) and irregularly spaced (the recording might have many valid points between the times 100-150ms, but no valid points between 150-500ms).
It is very convenient to accept user input in the form of times. The user can click somewhere on the timeline (which is really just a visual interpretation of the continuous, evenly-spaced number line underlying it), and we can try and use that to filter things down to that time.
And that is where the trouble starts.
In order to accurately filter resources that are in memory, or to specify a range to a backend endpoint which accepts ranges, we need to bound our range by points, not just times. But the user did not give us a point, they only gave us a time. So what should we do? Well, we try to convert from our time to a point. However, because points are discrete and unevenly spaced, this is not an exact translation, it introduces a small amount of error (usually on the order of <100ms, but occasionally more). Also, to be as precise in the conversion as possible we need to hit a backend endpoint (getPointsBoundingTime), which is not horribly expensive, but is significantly more expensive than a simple lookup, for instance.
All of this causes problems, especially when the user is dragging around a focus window. This is because:
- The little errors start to add up. If you "snap" the continuous values of the timeline to the discrete values of the pointline, you introduce a little bit of error. Then maybe the user drags the line another pixel, and you introduce another little error, and so on and so forth and, well, screens have many pixels, so this can get out of hand quickly.
- A difference of less than 100ms might not be super noticeable on the timeline, but as the errors get larger (or if the recording is quite short, for instance) it will start to be clear to the user that we are not actually putting them at quite the time that they asked for. And, well, we have very good reasons for doing that, but they are very hard to explain (for instance, we might have to subject them to reading something like this comment).
- If we try and do an exact lookup every time the focus window changes while selecting it, we will end up sending a ton of protocol messages.
The solution I have come up with so far is that we store two windows. One window is used for display. It is what we show on timelines, and it is also what we use as the inputs to our time -> point transformation after the user has chosen their focus window.
The other window is a proper TimeStampedPointRange, which we use to do things like filter resources by point and interact with backend endpoints which accept ranges.
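In Python-flavoured pseudocode (the actual code is TypeScript in the devtools frontend; the field names here are illustrative, not the real API), the two stored windows look something like:
from dataclasses import dataclass

@dataclass
class TimeStampedPoint:
    time: float   # milliseconds
    point: str    # opaque execution point id

@dataclass
class FocusRegion:
    # what the timeline draws, and the input to any later time -> point conversion
    displayed_begin_time: float
    displayed_end_time: float
    # what we use to filter in-memory resources and to parameterize backend range requests
    begin: TimeStampedPoint
    end: TimeStampedPoint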
With the above solution, we can ship Soft Focus (in the sense that there does not need to be any forced unloading) without degrading the experience of picking a focus window or having to expose the messiness of the above to the user. Next will be actually utilizing Session.getPointsBoundingTime to be more specific in our mapping, but even now, we are pretty darn specific most of the time, so I wanted to get this in rather than waiting for that optimization piece to be done.
As for the changes this actually makes:
- Deletes the Soft Focus flag from experimental settings
- Changes startTimeForFocusRegion to displayedStartForFocusRegion
- Likewise for endTimeForFocusRegion to displayedEndForFocusRegion
- Displays the displayed time values when choosing a window, rather than the mapped-points' time values
That's it! Not a ton of changes, but building on a much larger set over the past 6 weeks 🏗️
Resiliency (#405)
-
Improve resiliency and speed
-
fuck you nancy
-
fix VPA manifest
ECDSA account system. Sorry.
Kindelia now uses the same accounts as Ethereum. Run statements can now be signed:
run { code_here } sign { signature_here }
The payload is the serialization of the block. If there is no signature, or if ECDSA.verify fails, the statement will run with the subject set to "0". If it checks, the statement runs as usual, except the subject, instead of "0", will store the last 120 bits of the corresponding Ethereum address. This allows functions to address users by u120 (20-letter) names.
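A rough Python sketch of that subject rule, using eth_keys as a stand-in library (the statement serialization and the function name here are placeholders, not Kindelia's actual encoding):
from typing import Optional

from eth_keys import keys
from eth_utils import keccak

def subject_for(statement_bytes: bytes, signature_bytes: Optional[bytes]) -> int:
    """Return the u120 subject: 0 without a valid signature, otherwise the
    last 120 bits of the recovered Ethereum address."""
    if signature_bytes is None:
        return 0
    try:
        sig = keys.Signature(signature_bytes)                      # 65-byte r||s||v signature
        pub = sig.recover_public_key_from_msg_hash(keccak(statement_bytes))
        address = pub.to_canonical_address()                       # 20 bytes
        return int.from_bytes(address, "big") & ((1 << 120) - 1)   # keep the low 120 bits
    except Exception:
        return 0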
At this point, I can already hear Kelvin yelling:
"Why... why would you, betray us all like that? How dare you ruin this magnificently pure, strong network with such a complex, convoluted, fragile cryptographic primitive that is about to be broken into pieces, either by the rise of quantum computers, or even classical solutions? We haven't even had a chance to talk this over! Can't you think of the future children? Can't you think of the future cats? Why would you put so much effort into designing a network capable of lasting centuries... only to ruin it overnight?
For him, for future users, for myself, I can only admit: no network can last a hundred years if it never launches to begin with. Truth is, it is 2022, and we won't get any kind of reasonable funding if we don't present a working network with great apps that people can use to buy things and do stuff, today. Lamport signatures are so huge they don't fit in an entire block, WOTS is extremely inefficient, and implementing ECDSA on HVM will take months, which we don't have. All that to end up with inefficient, error-prone, roll-your-own-crypto account systems on HVM. And until then, there won't be many exciting apps running on Kindelia. Perhaps anonymous boards, an /r/place clone, things like that, but not anything that requires authentication. We can avoid all this pain with one line:
external crate secp256k1;
Sounds tempting, doesn't it? We aren't in a bargaining position. If it helps, the network will NOT be destroyed by a sudden rise of Quantum computers, unlike Bitcoin and Ethereum. After all, we don't rely on ECDSA. We just use it on the short term. We can simply demand users to assign hash signatures for backup, allowing us to just disable ECDSA and let the network recover after such event. All in all, if Kindelia's most unjustified, fragile aspect is the use of the most reviewed signature standard in existence, which is also used by every other project, I guess we're in good shoes.
object-file.c: do fsync() and close() before post-write die()
Change write_loose_object() to do an fsync() and close() before the oideq() sanity check at the end. This change re-joins code that was split up by the die() sanity check added in 748af44c63e (sha1_file: be paranoid when creating loose objects, 2010-02-21).
I don't think that this change matters in itself, if we called die() it was possible that our data wouldn't fully make it to disk, but in any case we were writing data that we'd consider corrupted. It's possible that a subsequent "git fsck" will be less confused now.
The real reason to make this change is that in a subsequent commit we'll split this code in write_loose_object() into a utility function, all its callers will want the preceding sanity checks, but not the "oideq" check. By moving the close_loose_object() earlier it'll be easier to reason about the introduction of the utility function.
Signed-off-by: Ævar Arnfjörð Bjarmason [email protected]
Create JAVA school project: Linux file system management
Instructions translated automatically from the French version. NB: in addition to the shell command line, I have created a graphical interface which can replace any shell command line.
Simplified file system management
The goal of this project is to implement, in Java, a simplified file system that can store folders (directories) and files within a tree structure. The aim is to ensure that, through a few commands reproducing the usual system commands, a user can modify, consult and browse this tree of files. We will limit ourselves to the essential functionality: we will only manage a very limited number of commands, and we will not associate access rights with folders/files. We also want to make it possible to create file trees from predefined examples, which will be provided as text files respecting a certain format. This document aims to guide you through the different stages of the project, and should be read carefully.
Project overview
In this project we are interested in a so-called "tree" file system (that is to say, one in which the files are organized within a tree structure), of the Linux type for example. Using a tree structure to store files makes it possible to represent a set of folders containing files, some of these folders being included in others. In such a structure there is an "origin" folder, called the root, from which all the other files derive. (By convention, the name of this folder will always be "" (the empty string).) In other words, any folder is either included in this root folder, or included in a folder which is itself included in another folder, itself included in another folder, etc., itself included in the root folder. The inclusion relationship (of one folder or file within another folder) can be modeled using a "father-child" relationship in a tree, each node of which represents a folder or file. This tree is rooted at the node representing the root folder, and the only node without a father is therefore this root.
The first step consists in implementing the operations related to the management of such a tree structure, making it possible to store files. The second step is to allow the construction of a predefined file system from the information contained in a text file respecting a certain format, which will be detailed in due course. Finally, the third step consists in allowing a user to interact, through command lines, with this file system (as if using a terminal to enter Linux commands), using standard instructions (such as ls, cd, less, mkdir, pwd, rm, etc.). Note that, in order not to complicate the management of this file system, we will not associate access rights (read, write or execute) with the different folders/files.
Step 1: the FileTree class
We first propose to write a FileTree class, intended to represent a tree structure of files (or a node of such a tree). For this we will use a representation of trees using references (or "chains"), close to the one seen in class, but carrying more information, to allow easier browsing of all the folders/files. Thus, each instance/object of this class will represent a node of the tree structure, and will contain the following 8 instance variables (a short illustrative sketch follows this list; recall that correct usage in Java is to declare these variables as private):
- Four FileTree variables, corresponding to neighboring nodes in the tree structure: the father of this node, its first child (the leftmost child) and its left and right siblings. Note that the folders/files represented by the children of a node will be stored in alphabetical order, from left to right.
- A String variable containing the name of the folder/file represented by this node. To avoid unnecessary complications, we will assume that the name of a folder/file cannot contain any spaces.
- A boolean variable indicating whether this node represents a file (if it is true) or a folder (if it is false).
- A String variable in which the content associated with this node will be stored, if the node represents a file. If this node represents a folder, this variable will be equal to null.
- An integer variable whose value corresponds to the size (in bytes) of the folder/file corresponding to this node. Of course, the size of a file is equal to the number of characters in its content (obtained via the length method of the String class), and the size of a folder is the sum of the sizes of the items (folders and files) it contains.
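For illustration only, a quick Python sketch of this node layout and of the alphabetical insertion asked for in Method 1 below (the project itself must be written in Java; every name here simply mirrors the description):
class FileTree:
    def __init__(self, name, is_file=False, content=None):
        self.father = None
        self.first_child = None
        self.left_sibling = None
        self.right_sibling = None
        self.name = name
        self.is_file = is_file
        self.content = content if is_file else None
        self.size = len(content) if (is_file and content) else 0

    def add_child(self, node):
        """Method 1: insert node among the children, kept in alphabetical order."""
        node.father = self
        prev, cur = None, self.first_child
        while cur is not None and cur.name.lower() <= node.name.lower():
            prev, cur = cur, cur.right_sibling
        node.left_sibling, node.right_sibling = prev, cur
        if prev is None:
            self.first_child = node
        else:
            prev.right_sibling = node
        if cur is not None:
            cur.left_sibling = node
        # propagate the size change up to the root
        ancestor = self
        while ancestor is not None:
            ancestor.size += node.size
            ancestor = ancestor.father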
This class will also contain the read accessors that will be necessary (in particular to access the parent node, the nature of a node (folder or file), and the content of a file), and one (or more) constructor(s). Finally, this class will contain (at least) the following 5 methods, which allow you to modify and browse the file tree under consideration:
• Method 1: adds, as a child of a given node, another node (or a piece of tree structure) passed as a parameter.
• Method 2: deletes a node from the tree structure.
• Method 3: returns, as a String, all the information (a letter equal to 'd' for folder or 'f' for file, as well as the name and the size in bytes) concerning the folders and files included in the folder represented by a node.
• Method 4: returns, as a String, the names of the nodes located on the branch connecting the root to a node.
• Method 5: returns a reference to the node whose name (relative path) is passed as a parameter via a String variable.
Here are some guidelines for implementing these methods:
• Method 1. First, we must make node n1, on which the method is called, the father of node n2 passed as a parameter. It is then necessary to browse the children of node n1: if it has no children, then node n2 becomes its only child; otherwise we must compare the name of each child with the name of node n2, to find the correct location for node n2. You can use the compareToIgnoreCase method of the String class, which takes a String as a parameter and returns an integer > 0 if the String on which the method is called comes after (in alphabetical order) the one passed as a parameter, and an integer ≤ 0 otherwise. Once the correct location has been determined, all you have to do is insert n2 (updating, if necessary, the first child of n1, and the left and/or right siblings of n2 and of its new left and right siblings, if they exist). Finally, don't forget to update the size of node n1 (which has been modified by the addition of node n2), then of its father, then of its father's father, etc., up to the root.
• Method 2. We must first differentiate the case where the node n1 to be deleted (i.e. the node on which the method is called) is the first child of its father (we must then update this information, since its right sibling becomes the first child) from the other case (where it suffices to update the right sibling of the left sibling of n1). In both cases, we must also update the left sibling of the right sibling of n1 (if the latter exists). Finally, as in the previous method, we must not forget to update the size of the father of node n1 (which was modified by the deletion of node n1), then of its father's father, etc., up to the root.
• Method 3. It suffices to examine, from left to right, the children of the node on which the method is called, using a list traversal. For each child, we update the String to return by concatenating its information with that of the previous children. Note that the string thus built will be used for the implementation of the ls command (see step 3 for details).
• Method 4. We must traverse the nodes located between the root and the node on which the method is called, starting with the latter and then going back to its father, then its father's father, etc., until reaching the root. As in the previous method, in each node traversed we update the String to return by concatenating its name with those of the previous nodes. Note that the string thus built will be used for the implementation of the pwd command (see step 3 for details).
• Method 5. The String passed as a parameter is either ".." (which means "go back to the parent folder"), or the name of a folder included in the folder represented by the node on which the method is called. If its value is "..", then we return a reference to the parent of the node on which the method is called. Otherwise, we must browse the children of the node on which the method is called until we find the one whose name corresponds to the name passed as a parameter: we then return a reference to this child. In addition, we must be careful to take into account the case where the specified name does not correspond to an existing folder, which could generate errors during execution.
Step 2: Building an Initial File Tree from a text file
The method described here can be integrated into your main class. Its objective is to read the information contained in the text file provided as a parameter, and to use it to create the initial file tree with which the user will be able to interact. Here are some pointers concerning the format of the file to read:
• The file begins with the root keyword and ends with the keyword end. Initially, the "current" folder is the root folder.
• Some lines contain comments at the end, separated from the rest of the line by a space and then a "%" character.
• Any line, except the first and last, that does not correspond to the contents of a file, begins with a block of one or more "*" characters (without spaces), and then corresponds either to the description of a new folder/file, or to the closing of a folder.
• A line that contains the description of a new folder named "a_name" has the following format (ignoring the "*"): a_name d %any comments. This folder is included in the current folder. Furthermore, it becomes the new current folder until further notice (i.e. until it is closed, or another line containing the description of a new folder is encountered).
• A line that contains the description of a new file named "a_name" has the following format (ignoring the "*"): a_name f %possible comments. In this case, the line immediately following it contains the text of this file. This file is included in the current folder.
• A line corresponding to the closing of a folder contains a single word (ignoring the "*" and comments): the word end. When such a line is encountered, the current folder is closed, and the folder represented by its parent (i.e. the folder that contains it) becomes the new current folder.
Here is an example of such a file (let's call it "toto.txt"):
root
* a_file f
this is the content of this file
* sd1 d %this is a comment: the sd1 folder has the root as its father
** another_file f %this other file is included in sd1
this is the content of this other file
** sd2 d %the sd2 folder is included in sd1
*** a_3rd_file f %this file is included in sd2
this is the content of this 3rd file
** end %comment: close sd2, and return to sd1
** a_last_file f %this file is also included in sd1
this is the content of ___ this last file
* end %we close sd1, and we come back to the root
* sd3 d %this folder has the root as its father, and contains no files
* end %we close sd3, and we come back to the root
end
This file corresponds to a file tree in which:
• The root contains one file (a_file) and 2 folders (sd1 and sd3).
• The sd1 folder contains 2 files (another_file and a_last_file) and one folder (sd2).
• Finally, the sd2 folder contains one file (a_3rd_file).
To read data from a file, you can use a variable of the BufferedReader class. Here is how to initialize a reader variable of this class to access the contents of the file named "toto.txt":
BufferedReader reader = new BufferedReader(new FileReader("toto.txt"));
By calling the readLine method (which returns a String) on this variable, we read the content of the next line of the file named "toto.txt" (the reading of a file being sequential). Thus, in the case of the file described above, after the first call to this method, the variable line will contain the value "root". After a second call to this method, the line variable will contain the value "* a_file f":
String line = reader.readLine(); //line contains "root"
line = reader.readLine(); //line contains "* a_file f"
The construction of a BufferedReader variable is likely to throw an exception (if the file named "toto.txt" does not exist, for example), so it is better to handle all these instructions in a try{...} catch(Exception e){...} block. Finally, after use, this reader variable must be closed, using the instruction reader.close();.
To read the "toto.txt" file, it is also possible to use the Scanner class. Each line is then read using the nextLine() method:
Scanner reader2 = new Scanner(new File("toto.txt")); //creation
String line = reader2.nextLine(); //first call = first line read
line = reader2.nextLine(); //second call = second line read
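Purely as an illustration, here is one possible way to combine this reading loop with the node class sketched above in order to build the initial tree. The names (TreeLoader, buildTree) are hypothetical, the parsing is deliberately simplified (the "%" comments and the "___" line breaks are not handled), and it relies on the split and charAt methods recalled just below.

// Hypothetical sketch: builds the initial tree from a file in the format above.
import java.io.BufferedReader;
import java.io.FileReader;

public class TreeLoader {
    public static FileTree buildTree(String fileName) throws Exception {
        BufferedReader reader = new BufferedReader(new FileReader(fileName));
        FileTree root = new FileTree("root", 'd', 0, null);
        FileTree current = root;
        reader.readLine();                                  // skip the "root" line
        String line;
        while ((line = reader.readLine()) != null) {
            String[] words = line.trim().split(" ");
            int i = (words[0].charAt(0) == '*') ? 1 : 0;    // skip the "*" block, if any
            if (words[i].equals("end")) {                   // close the current folder
                if (current.getParent() != null) current = current.getParent();
            } else if (words[i + 1].charAt(0) == 'd') {     // description of a new folder
                FileTree folder = new FileTree(words[i], 'd', 0, null);
                current.addChild(folder);
                current = folder;                           // it becomes the new current folder
            } else {                                        // new file: the next line is its content
                String content = reader.readLine();
                current.addChild(new FileTree(words[i], 'f', content.length(), content));
            }
        }
        reader.close();
        return root;
    }
}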
Once the information of a line associated with a folder/file has been retrieved, it must be stored in a variable of the TreeFiles class. For this, we remind you of the existence of:
• The split() method of the String class, which "splits" the String on which the method is called according to the separator passed as a parameter (for example " "), and stores the Strings resulting from this splitting in an array.
• The charAt() method of the String class, which returns the character located, in the String on which the method is called, at the (integer) index passed as a parameter.
(For more information on these methods, refer to the description of TP 3.)
Step 3: Implementing Basic System Commands
This step can be done in a method called from the main, or directly in the main itself. The objective here is to have the user enter Linux-like system commands and then to execute them, which essentially consists in calling the methods defined in step 1. The 8 commands (9 in reality) to be considered in this project are the following:
1. ls: this command does not expect any argument, and lists all the folders and files (names and sizes) included in the current folder (in alphabetical order, since, as a reminder, they are stored in that order). Folders appear preceded by the letter 'd' (as in directory). Thus, if the current folder is the sd1 folder of the example given in step 2, the ls command will produce the following output (a possible sketch of the corresponding method is given after this example):
ls
sd2 42
another_file 46
a_last_file 51
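For illustration, a minimal sketch of Method 3 (the one behind ls), written as a hypothetical method of the node class sketched earlier; whether files are also prefixed with their 'f' letter is left open by the subject, so the exact formatting here is an assumption.

// Hypothetical Method 3: list the children of this node, one per line.
public String listChildren() {
    String result = "";
    for (FileTree child = firstChild; child != null; child = child.rightSibling) {
        result += child.nature + " " + child.name + " " + child.size + "\n";
    }
    return result;
}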
2. cd: this command takes as an argument the name (relative path) of another folder included in the current folder (or the string "..", which allows you to go back to the parent folder), and makes this other folder the new current folder. Thus, if the current folder is the sd2 folder of the example given in step 2, the 3 successive commands below will make sd3 the new current folder:
cd ..
cd ..
cd sd3
If the specified name is not that of an existing folder, the command will only display a message indicating this to the user.
3. mkdir: this command takes a folder name as an argument (without spaces), and creates a new empty folder, which has this name and is included in the current folder. Thus, if the current folder is the sd3 folder of the example given in step 2, the command mkdir sd4 will create a new folder sd4, initially empty, and included in the sd3 folder.
4. mkfile: this command takes a file name as an argument (without spaces), and creates a new file, which has this name and is included in the current folder. The user is then prompted to enter the content of this file (pressing the "return" key ends the entry): to enter content spanning several lines, the user just needs to type 3 consecutive "_" characters at the end of each line of the file.
5. less: this command takes as an argument the name of a file included in the current folder, and displays its content (remember that the "___" appearing in the file must be replaced by newlines). Thus, if the current folder is the sd1 folder of the example given in step 2, the successive commands mkfile new_file (which creates the file new_file in the current folder) and less new_file will produce the following output:
mkfile new_file
File content? Here is the text entered by the user
less new_file
Here is the text entered by the user
If no file in the current folder bears the name passed as an argument, the less command will only display a message notifying the user.
6. pwd: this command does not expect any argument, and displays the absolute path associated with the current folder. Thus, if the current folder is the sd3 folder of the example given in step 2, the successive commands cd .., cd sd1, cd sd2, pwd will produce the following output (a possible sketch of the corresponding method is given after this example):
cd ..
cd sd1
cd sd2
pwd
/sd1/sd2
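Again purely as an illustration, a possible sketch of Method 4 (the one behind pwd), as another hypothetical method of the same node class.

// Hypothetical Method 4: build the absolute path from the root down to this node.
public String pathFromRoot() {
    String path = "";
    for (FileTree node = this; node.parent != null; node = node.parent) {
        path = "/" + node.name + path;   // prepend each name while climbing towards the root
    }
    return path.isEmpty() ? "/" : path;  // the root itself is displayed as "/"
}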
7. rm: this command takes as an argument the name of a folder or file included in the current folder, and removes it (along with everything it contains, if it is a folder). Thus, if the current folder is the sd1 folder of the example given in step 2, the successive commands rm sd2 and ls will produce the following output (a possible sketch of the corresponding method is given after this example):
rm sd2
ls
another_file 46
a_last_file 51
If no folder or file included in the current folder bears the name passed as an argument, the rm command will only display a message indicating this to the user.
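For illustration, a minimal sketch of Method 2 (the one behind rm), again as a hypothetical method of the node class sketched earlier.

// Hypothetical Method 2: detach this node (and everything below it) from the tree.
public void removeFromTree() {
    if (parent == null) return;                        // never remove the root
    if (parent.firstChild == this) {
        parent.firstChild = rightSibling;              // this node was the first child
    } else {
        leftSibling.rightSibling = rightSibling;
    }
    if (rightSibling != null) rightSibling.leftSibling = leftSibling;
    for (FileTree a = parent; a != null; a = a.parent) a.size -= size;   // update sizes up to the root
}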
8. quit and exit: these two commands, which do not expect any argument and produce the same effect, end the current session. In the context of this project, which only simulates the behavior of Linux commands, they therefore terminate the execution of the program.
Final touches
To complete the project, it remains to write the main program. The latter retrieves a single parameter when the program is called: the name of the file describing the file tree to use. If nothing is passed as a parameter, then the initial file tree will be empty (i.e. it will contain only one node, representing the root folder). It will then be necessary, as this file is read (step 2), to build this tree structure using the methods of the class described in step 1. The rest of the main program boils down to managing a variable of type FileTree which will contain a reference to the node representing the current folder (initially the root), as well as a loop which displays a summary command prompt (>) and reads the command entered by the user (if a command produces a display, a line is skipped at the end of this display), and which runs as long as the user does not enter a command terminating the program (quit or exit). If the user enters any other valid command, then the operations associated with it are executed (step 3). If the user enters an invalid command, then a message indicating this is simply displayed. A minimal sketch of such a loop is given below.
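Purely as an illustration, here is one possible shape for that main loop, reusing the hypothetical names sketched earlier (FileTree, TreeLoader.buildTree, getChild, listChildren, pathFromRoot); the dispatch below only covers a few commands and is a sketch of the idea, not the required implementation.

// Hypothetical main loop: prompt, read a command, dispatch to the tree methods.
import java.util.Scanner;

public class Shell {
    public static void main(String[] args) throws Exception {
        FileTree root = (args.length > 0) ? TreeLoader.buildTree(args[0])
                                          : new FileTree("root", 'd', 0, null);
        FileTree current = root;
        Scanner input = new Scanner(System.in);
        while (true) {
            System.out.print("> ");
            String[] cmd = input.nextLine().trim().split(" ");
            if (cmd[0].equals("quit") || cmd[0].equals("exit")) break;
            else if (cmd[0].equals("ls")) System.out.println(current.listChildren());
            else if (cmd[0].equals("pwd")) System.out.println(current.pathFromRoot() + "\n");
            else if (cmd[0].equals("cd") && cmd.length > 1) {
                FileTree target = current.getChild(cmd[1]);         // Method 5
                if (target != null && target.getNature() == 'd') current = target;
                else System.out.println("No such folder: " + cmd[1] + "\n");
            }
            // mkdir, mkfile, less and rm would be handled the same way,
            // by calling Methods 1, 2 and 3 of the node class.
            else System.out.println("Invalid command.\n");
        }
        input.close();
    }
}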
modpost: file2alias: go back to simple devtable lookup
commit ec91e78d378cc5d4b43805a1227d8e04e5dfa17d upstream.
Commit e49ce14150c6 ("modpost: use linker section to generate table.") was not as cool as we had first expected; it ended up with ugly section hacks when commit dd2a3acaecd7 ("mod/file2alias: make modpost compile on darwin again") came in.
Given a certain degree of ignorance about the link stage of host programs, I really want to see simple, stupid table lookup so that this works in the same way regardless of the underlying executable format.
Signed-off-by: Masahiro Yamada [email protected] Acked-by: Mathieu Malaterre [email protected] [nc: Omit rpmsg, sdw, fslmc, tbsvc, and typec as they don't exist here Add of to avoid backporting two larger patches] Signed-off-by: Nathan Chancellor [email protected] Signed-off-by: Sasha Levin [email protected] Signed-off-by: Kevin F. Haggerty [email protected] Change-Id: Ic632eaa7777338109f80c76535e67917f5b9761c
nicknames for packs! wait will this even work (#369)
-
nicknames for packs! wait will this even work
-
nacrt i dont know what the fuck you want from me
BUG: Fix that reducelikes honour out always (and live in the future)
Reducelikes should have lived in the future where the out dtype is always correctly honoured and used as one of the inputs.
However, when legacy fallback occurs, this leads to problems because the legacy code path has 0-D fallbacks.
There are two probable solutions to this:
- Live with weird value-based stuff here even though it was never actually better especially for reducelikes. (enforce value-based promotion)
- Avoid value based promotion completely.
This does the second one, using a terrible hack by just mutating the dimension of out to tell the resolvers that value-based logic cannot be used.
Is that hack safe? Yes, as long as nobody has super-crazy custom type resolvers (the only one I know of is pyerfa and they are fine; PyGEOS, I think, has no custom type resolver). It also relies on the GIL of course, but...
The future? We need to ditch this value-based stuff, do annoying acrobatics with dynamically created DType classes, or something similar (so ditching seems best, it is topping my TODO list currently).
Testing this is tricky, running the test:
python runtests.py -t numpy/core/tests/test_ufunc.py::TestUfunc::test_reducelike_out_promotes
triggers it, but because reducelikes do not enforce value-based promotion the failure can be "hidden" (which is why the test succeeds in a full test run).
Closes gh-20739
implement entry score
but my god, what a horrible clusterfuck this is
using ranges in where()
is extra fancy tho, love it
Reduce xopp svg sizes
I had to rewrite a bunch of stuff, because export from Xournal++ directly to SVG is broken atm. This means that we're converting to PDF, then to SVG using Inkscape, then doing some regex magic to remove the background and save it, then calling Inkscape to crop to the drawing, and finally using svgcleaner to reduce the size. I want to die.
Work to put invoice into billingService
-A lot was done. I couldn't stop because a lot of it was tied together. And to be honest I was too brain-dead tired to actually make some good progress in what I was doing. In fact a lot of the work done here will need to be refactored because it's all just poor quality code. It hasn't even been tested, so a lot rides on the next commit. But I will attempt to put into words what I tried to do here:
-Installed the Microsoft.Interop package so that I can read and write bookmarks on Word documents. I made a .docx that has bookmarks for the text I want to replace automatically.
-The logic for the text replacement is in billingServices. I'm considering making a WordService because it is a lot.
-Created a DateTime extension class to change the date between different formats and even from orderNumbers.
-In firebasesContext I realised that Yewo made some changes to GetData. It's a generic method there. I comment on how that's a bad idea because you are forced to make useless types; the alternative I provide works better.
-BilledUserDto was modified to have the sales from the month and the username. Consider refactoring to just use the username from the AppUser.
-AdminUser has properties added to make a proper invoice to send to. We also changed a lot of the doubles we use for prices to floats because doubles are just too precise and large for nothing. And the view works better with doubles.
-An SMS class was added, but the logic used to work with it is the most upsetting. Please, please look through this code once you are done with all the billingServices required.
-Exceptions for different scenarios were anticipated and created. Look through them to find references.
-I finally made the ReportService. But I didn't add all the methods I wanted and only used the ones necessary to move forward. I was forced to because the billingService needed a lot of information and it was difficult getting it through the ReportController.
-FirebaseServices has the weird sms method added.
-AccountController was changed to cater for the extraction of the GetOrdersByDate() method into the ReportServices.
-- Summary -- The majority of the changes were done in BillingServices, which is why I was so desperate for a WordService to make the code more manageable. The problems I find apparent in this exhausted state are how we handle the SMS logic and the view for the Word doc. If those are solved, much of the clutter should go away.
"12:20pm. Yeah, all these complications are caused by insufficient computational power. If 500gb were the standard in desktop GPUs, doing ML would be a lot easier. At this rate, who knows how long it would take to get to that level. I got a GTX 970 7 years ago and it has 4gb. For comparison a RTX 3070 which is its equivalent in status today has 8gb. It will take a lot of time.
Even though it does not seem like it, GPU scaling is struggling as well, and Nvidia is resorting to ramping up the power reqs to get performance gains. If I take the current pace as a guideline, it will take 50 years before GPUs with 500gb could be bought for less than 1,000$.
Reaching deep into my pockets, and paying something like 3k for the biggest 40xx card of 46gb would give me a big boost, but it is just an one time thing. It is not enough to be excited about.
12:40pm. Sure AI chips will make a difference, but if I can't get a 500gb GPU, I won't be able to get an AI chip either. They are all produced on the same waffers.
12:45pm. It is all the fault of memristors not coming when they should. If they panned out back in 2015, the AI situation would be way different today. We'd have devices with terrabytes of non volatile memory that can also be used for computation.
The Singularity then would truly be near.
https://www.news-medical.net/news/20220518/New-memristors-for-neuromorphic-computing.aspx
They are still in the labs. Articles like these have been coming out throughout the years for quite a while.
Sigh, if I had to pay 3k dollars to buy a memristor device with 1TB of memory, I would do it. It would allow me to run DALLE, for example. Or alternatively, I would have succeeded in making that RL agent for poker. Right now, the most I can do with NNs is style transfer.
1:05pm. It is really a pity, this wave of AI is mostly hype. It really highlighted how great matrix multiplication is, but I want the real thing. I can't really show my power as a programmer with these resources.
Will I really be okay? I used to think I was born just in time, but maybe I was born too early for the Singularity after all.
If that is the case, I should try to have whatever fun I can with this life. If I was born in 1920, the way I'd live my life is by dreaming about it. So that is how I should live now.
1:15pm. Done with breakfast. It really breaks my heart to see my mother weakened by cancer. Right now she is still fine, but a few years from now...
If I had succeeded in making a breakthrough in AI, none of these physical limitations would have been an issue. Being a human sucks. My future looks pretty bleak at this point. I really hope I can get something out of Heaven's Key and do not have to resort to wage slavery.
I'll know in a few years time where I stand. If I can't make anything of it by the end of 2025, I'll abandon the Simulacrum project.
I do not have the time to do this forever.
1:20pm. So let me start. Think about the approaching doom, and do what needs to be done. For a human, there is nothing scarier than the long term.
1:30pm. Right now I am doing some organizing. In the wip thread some anons were remarking about the wasted modeling work, but they can't even begin to imagine just how much of it didn't appear in the scene. The monitor, the desk, the rig. I spent a lot of time texturing them, only to drop it. The amount of experimentation that I've done is astonishing. While doing the cover I will aim to be much, much more efficient.
First of all, can I make the dimensions of Earth match real world sizes in Blender? I thought that this was really cool in Clarisse.
Yeah, come to think of it, I dropped a bunch of work on the room that I did in Clarisse. But it was good as it would have choked trying to calculate light going through the grills.
I'll try to make this scene in Blender as well, but if I really want to scatter a lot of stuff, I'll consider Clarisse since the scene is outdoors.
https://www.space.com/17638-how-big-is-earth.html
The radius of Earth at the equator is 3,963 miles (6,378 kilometers), according to NASA's Goddard Space Flight Center in Greenbelt, Maryland.
1:40pm. No, Blender starts acting strangely once the object dimensions go above 1 km.
https://blender.stackexchange.com/questions/68001/working-with-very-large-objects
Ah, the reason it was acting strange was due to the clipping distance.
1:50pm. Let me take a short break.
2:10pm. Let me resume. I need to focus. What is the unit scale?
https://www.youtube.com/watch?v=2PUmUF4IxaE https://www.reddit.com/r/blender/comments/l0flt0/having_trouble_with_blenders_unitsscale/
Ok, I get it.
2:20pm. Ok, now what is the radius of the sun?
696,340 km
It is damn big, 60x the size of the earth. What is its distance from the earth?
150 million km.
2:30pm. https://www.google.com/search?q=sun+from+earth+orbit
2:35pm. I know I could do this the easy way, but, but...
I have to try the straightforward way first. This is my weakness. I know that trying to emulate real physical dimensions is a bad idea, but I am like an addict.
https://youtu.be/hGdU3GgbTMY How to recreate Solar System, Orbiting Earth, Sun and Space - Simple Blender Animation Tutorial
Let me watch some of these. In camera images the sun always shows up as a bright glare, but now that I am trying to place it realistically, it is not even a tiny dot in the distance.
2:45pm. https://youtu.be/2pLYyn86qQU Blender Tutorial - How to Create the Sun
Let me watch this as well. The beauty of 3d is that once you learn something, you learn it for good. I am just being stubborn here. I already did a plan in Clarisse. I should know well enough how to do this, but I have this urge to find where my limits are.
Let me just have my fill of studying it and then I'll just do it. I'll scale the earth down. I'll move the sun close and scale it down as well.
https://youtu.be/2pLYyn86qQU?t=64
I'll admit, I haven't thought of trying to do a fire sim for this.
3pm. https://youtu.be/9Q8PwcDzb8Y How to Make Earth in Blender (Cycles)
This is 70m long. It is also from 6 years ago, but it is fine.
https://youtu.be/9Q8PwcDzb8Y?t=14
Yeah, I really want to make something like this. Except with a city floating in orbit. I really have 3 elements I want to make: Earth, Sun and the city. And also the stars if needed.
https://youtu.be/9Q8PwcDzb8Y?t=23
Free textures that you can get from NASA.
A big part of art is being able to get the right resources.
https://youtu.be/9Q8PwcDzb8Y?t=88
Adding night lights that only appear in the dark parts...
I was wondering how to do this. I really love this guy. He deserves his celebrity status.
Yeah, let me do it. I should just dedicate myself to mastering making the earth. The sun won't be particularly hard either. I do not want to make it as stylized. Remember that page where Psykos splits the earth with her beam? Something like that would be good.
https://youtu.be/9Q8PwcDzb8Y?t=97
Space backgrounds. Hmmm, I made the right decision to watch a few tutorials. Taking a bit extra effort for the cover should be fine. It is going to be the very first thing the reader sees.
https://youtu.be/9Q8PwcDzb8Y?t=111
Sun flare
I wanted to know how to do this as well.
https://www.youtube.com/watch?v=pZFYQuARAAA Advanced Planet in Blender 2.9 Cycler Render
This caught my attention on the sidebar. It is a more recent video from 1 year ago, also 70m long. I'll go with the one from Andrew.
3:15pm. Let me focus on this. As far as making the Earth goes, my thinking went only as far as finding a displacement map for it, and then figuring out how to put a texture. I didn't have any idea where I would find such a texture.
https://youtu.be/9Q8PwcDzb8Y?t=279
Let me get these textures.
https://www.blenderguru.com/tutorials/earth-cycles
These are large, high quality textures. There is even a night lights one.
Also I forgot about the moon.
Yeah, I'll need to insert it into the scene as well.
3:30pm. It looks super cool. I'll try replacing bump with displacement later, but never mind that for now.
https://youtu.be/9Q8PwcDzb8Y?t=357
It is really much easier to put in the textures with the node wrangler. Ctrl + Shift + T and select the bump and albedo. It will set it up on its own. There is also nothing wrong with using UV coords here. Also, when doing the planets it is better to use a UV sphere with a large number of verts from the start rather than trying to subdiv it later. Subdividing it later leads to shading issues.
https://youtu.be/9Q8PwcDzb8Y?t=375
Actually this is a good tutorial on what these old primitive shaders are doing.
https://www.reddit.com/r/blenderhelp/comments/knnidr/does_the_principled_bsdf_node_have_builtin/
It seems specular controls the fresnel.
...No rather than specular, a better option would be sheen.
Specular might simply control the fresnel IOR.
4:15pm. https://youtu.be/9Q8PwcDzb8Y?t=1007
Thus far, rather than follow him step by step, I tried implementing it all on my own, but I didn't think of adding the cloud bumps.
4:50pm. I fall too easily into the trap of messing with params. I decided to get rid of displacement as updating the mesh was causing too much trouble.
5:05pm. https://youtu.be/9Q8PwcDzb8Y?t=1430
The way he intends to create the mask light is new to me. Let me see what he does.
5:15pm. Sigh, this is ridiculous. It is so damn hard to use the normal control. Let me go with my own plan, which is to paint a texture.
5:25pm. I really am going to have to look into how to paint textures in Blender properly. I messed something up and am not getting good results.
https://blender.community/c/rightclickselect/Mkfbbc/?sorting=hot
It says that shaders can't use vertex groups. Then...
https://www.reddit.com/r/blenderhelp/comments/8l84nl/way_to_use_vertex_group_in_mix_shader_fac/
Late af, but I hope it helps whoever comes searching next. :) There's a feature that lets you transfer a b/w mask from weights. Header menu > Paint > Vertex Color from Weights.
Somebody kindly left a tip two months ago. Which header menu is he talking about? I have no idea where this is. And I should figure out how to do vertex colors first.
https://docs.blender.org/manual/en/latest/render/shader_nodes/input/vertex_color.html
Strange. Why can't I find any vertex node in the shader editor? It is called the color attribute node.
https://blender.stackexchange.com/questions/15172/turn-weight-paintvertex-groups-into-vertex-paint
https://youtu.be/B5C2QDXpIWU How to paint "Through the mesh" in Weight Paint Mode (Blender 2.83, 2.91, 2.92, 2.93+)
What a hassle.
https://youtu.be/B5C2QDXpIWU?t=244
Sigh, it took him 4 minutes to say what he could have said in 10 seconds.
https://youtu.be/9Q8PwcDzb8Y?t=1920
Ah, I think I know how I could make a mask that changes with the rotation. I think there is a mask modifier that generates a mask on the fly based on object intersections. I could probably make use of that for later rather than have a static vertex color map like now.
I am also going to have to figure out how to make the walk mode work because it is so slow that nothing is happening.
6:40pm. Done with lunch. I had an idea for how to make the night lights position stable. I could use the data transfer node to keep it steady on one end and rotate the Earth freely after that. Let me try it.
Damn it, why isn't data transfer working for me?
I can't believe that vertex colors are not in the vertex, but in the face corner tab.
6:55pm. It works great.
7:05pm. If I wanted to move the earth while keeping the night light constant maybe I could add a constraint to the holder so that it always faces the origin. It is an interesting idea.
The TrackTo constraint works perfectly for that.
7:10pm. The combination of Data Transfer modifier for vertex colors, and Copy Location and TrackTo constraints makes it really easy to move the Earth around. Without it I'd need to repaint the vertex colors so they match the sun's light by hand. I am using them as a mask for the night lights texture.
It is good that I am taking the time to do this as the Earth is going to get wrecked in various ways during the course of the story. If I was painting, I'd have to redo it all from scratch every time.
Now where was I? Yeah, the space HDRIs.
https://youtu.be/9Q8PwcDzb8Y?t=2056
Maybe I should just call it a day here? Let me do a render.
7:15pm. Thankfully the Cycles render is not as demanding as it could be. Still it is amazingly slow. At this rate it is going to take it 3h to finish. Well, I'll abort it by hand after 30m.
7:20pm. Er, I should not have started playing with texture painting. This time I've set a time limit.
7:35pm. Shit, I realized that sun's reflection causes a glare off the sea, but I have no idea how to deal with this. Enough fiddling around. Let me just let the render run.
It is slow. I should be able to do this whole scene in a few hours, but instead it will take me days. Well, that is about the usual for me. I am going to nail the scene down tomorrow. I'll finish the tutorial by Guru, watch the other tutorial as well. After that I'll get started on working on the city.
My goal for tomorrow will be to finish Earth, the sun, the space and the moon. I think Earth itself should be done, so that just leaves the rest.
7:50pm. Let me close here. I am tired at this point. I have been in the zone. I accept it. I've been reluctant to start, but it seems anything I really need to 3d model, I can find on Youtube. It is amazing.
I've read that pro programmers spend their time pasting from stack overflow. It seems that the same holds for 3d artists.
Today the only thing I regret is messing with displacement, turning that thing on along with adaptive subdiv made every single step unbearably tedious. In the future I'll just stick with bump. Alternatively, I should just stick to regular displacement. That might be worth a try...no, I must resist the temptation to mess with this more. Once I paint it over those details will all go away."
Fake Profile
There is a hacker named "Vijay" who has developed a method to check whether an ID on some social networking site is fake or real using its username.
His method: if the number of distinct characters in the username is odd, then the user is a male, otherwise a female. You are given the string that denotes the username; please help Vijay determine the gender of this user by his method. Ignore the vowels. Note: the input only contains lowercase English letters.
Example 1 -
Input: a = "jpmztf"
Output: SHE!
image: add image group
Add a new view called ImageGroup that will handle all advanced image hacks from now on.
This includes the indicator (which is now animated), any selection indicators, and the weirdness of the album song image. All of that is now handled by ImageGroup. This is the culmination of probably a day and a half of wrangling with android insanity and having to remove a lot of what I liked about the indicator in order to make this work on a basic level.
The only major bug I am currently aware of with this is that the indicator is bugged out on Lollipop devices due to bad vectors. Again.
I never want to do this again. I cannot believe that adding a basic indicator took this long and required so many stupid hacks and so much inefficient code. And then Google wonders why android apps are so visually unappealing and janky and laggy. Hm. Must be that devs aren't using the brand new FooBarBlasterFlow library!
Delete Conundrum 22 Forty Thou To A Hunned Thou Took A Shot Of Coffee I Be Going Brazy Brazy They Know We Talk That Shtick Talk Ten Million Dollars Cash Fuck A Friend I Am Going To Put My Middle Fingers In Their Mom's Butts And Wiggle Them Around.jpg
fix: --legacy-peer-deps because I'm staying on react 17 fuck you npm
Rollout of advanced microarchitecture info continues: added AMD/Intel gfx devices, CPU build dates, process nodes, generation (in some cases, where it makes sense), etc.
Please note: the 3.3.16 > 17 releases require manual matching table updates. If you think disk or ram vendor, CPU or GPU process, release date, generation, etc, information is not correct:
-
FIRST: do the research, confirm it's wrong, using wikichips, techpowerup, wikipedia links, but also be aware, sometimes these slightly contradict each other, so research. Don't make me do all your work for you.
-
Show the relevant data, like cpu model/stepping, to correct the issue, or the model name string.
-
There are 4 main manually updated matching tables, which use either raw regex to generate the match based on the model name (ram, disk vendors), or vendor id matching (ram vendors), product id matching (gpu data), or cpu family / model / stepping id matching. Each of these has its own matching tool at: inxi-perl/tools/[tool-name].pl which is used to generate either raw data used by the functions (ids for gpu data), or which contains the master copy of the function used to generate the regex matches (cp_cpu_arch/set_ram_vendors/set_disk_vendors).
-
Please use pinxi and inxi-perl branch for this data, inxi is only released when next stable is done, all development is done in inxi-perl branch. All development for the data or functions these tools are made for occurs in the tools, not in pinxi, and those results are moved into pinxi from the tools.
-
Saying something "doesn't work" is not helpful, provide the required data for the feature that needs updating, or ideally, find the correct answer yourself and do the research and then provide the updated data for matching.
KNOWN ISSUES:
- GPU/CPU process node sizes are marketing, not engineering, terms, but the work-around is to list the fab too so you at least know which set of marketing terms you're dealing with. As of around 7nm, most of the fabs are not using nm in their names anymore; TSMC is using N7, Intel 7, for example. While these marketing terms do reflect changes from the previous process node, more efficient, faster, faster per watt, and so on, and these changes are often quite significant, 10-30%, or more, they do not reflect the size of the transistor gate like they used to up until about 350nm. Intel will move to 20A for the node after 4 or 5, 2nm, meaning 20 angstroms.
Intel suggested million transistors per mm^2 as an objective measure (currently around 300+ million!! as of ~7nm), but TSMC didn't take them up on it.
GlobalFoundries (GF) stepped away from these ultra small processes at around 14nm, so you won't see GF very often in the data. AMD spun off its chip fabs to GF around 2009, so you don't see AMD as a foundry after GF was formed. ATI always used TSMC, so GPU data for AMD/ATI is, I think, all TSMC. Intel has always been its own foundry.
-
Wayland drops all its data and can't be detected if sudo or su is used to run inxi. That's unfortunate, but it goes along with their dropping support for > 1 user, which was one of the points of wayland, the same reason you can't do desktop sharing or ssh desktop forwarding etc. This means inxi doesn't show wayland as Display protocol, it is just blank, if you use su or sudo start. This makes some internal inxi wayland triggers then fail. Still looking to see if there is a fix or workaround for this.
-
In sensors, a new syntax for k10-pci temp, Tctl, which unfortunately is the only temp type present for AMD family 17h (zen) and newer cpus, but that is not an actual cpu temp, it's: https://www.kernel.org/doc/html/v5.12/hwmon/k10temp.html
"Tctl is the processor temperature control value, used by the platform to control cooling systems. Tctl is a non-physical temperature on an arbitrary scale measured in degrees. It does not represent an actual physical temperature like die or case temperature."
Even worse, it replaced Tdie, which was, correctly, temp1_input, and, somewhat insanely, the non real cpu temp is now temp1_input, and if present, the real Tdie cpu temp is temp2_input. I don't know how to work around this problem.
BUGS:
-
Fallback test for Intel cpu arch was not doing anything, used wrong variable name.
-
A very old bug, thanks mrmazda for spotting this one, runlevel in case of init 3 > init 5 showed 35, not 5. Doesn't show on systemd stuff often since it doesn't use runlevels in this way, but this bug has been around a really long time.
-
SensorItem::gpu_data was always logging its data, missing the if $b_log.
FIXES:
-
Fixed some disk vendor detection rules.
-
Failing to return default target for systemd/systemctl when no: /etc/systemd/system/default.target file exists. Corrected to use systemctl get-default as fallback if file doesn't exist.
-
Fixed indentation for default: runlevel, should be child of runlevel: / target:
-
Fixed corner case where systemd has no /proc/1/comm file but is still the init system. Added fallback check for /run/systemd/units, if that exists, safe to assume systemd is running init.
-
Fixed subtle case, -h/--recommends/--version/--version-short should not print to -y1 width, but rather to the original or modified widths >= 80 cols. Corrected this in print_basic() by using max-cols-basic.
-
Forgot to add --pkg, --edid, and --gpu to debugger run_self() tool.
-
Fixed broken sandisk vendor id.
ENHANCEMENTS:
-
Added AMD and Intel GPU microarchitecture detections for -Gx. These are not as easy as Nvidia because there is no one reliable data source for product ids.
-
Going with the -Ga process: .. built: item, -Ca will show process: [node] and built: years and sometimes gen: if available. Geeky, sure, not always perfect or correct, but it will generally be close. Due to difficulty in finding reliable release > build end years, for example, not all cpus have all this data.
Using CPU generation, where that data is available and makes sense. Like AMD Zen+ is zen gen: 2, for example. Because Intel microarch names are often marketing driven, not engineering, it's too difficult to assign gen consistently based only on model names. Shows for Core Intels like: gen: core 3
That will cover most consumer Intel CPU users currently.
-
Added initial Zen 3+ and Zen 4 ids for cp_cpu_arch(). There is very little info on these yet, so I'm going on what may prove to be incomplete or wrong data.
-
Added GPU process, build years for -Ga.
-
Added fallback test for gpus that we don't have product IDs for yet because dbs have not been updated. Only used for cases where it's the newest gpu series and no product IDs have been found.
-
Added AMD am386 support to cp_cpu_arch... ok ok, inxi takes 9 minutes to execute on that, but there you have it.
-
Added unverified Hyprland wayland compositor detection.
-
By request, added --version-short/--vs, which outputs version info in one line if used together with other options and if not short form. With any normal line option, will output version (date) info first line, without any other option, will output 1 line version info and exit.
-
More disk vendors, ids! Much easier with new tool disk_vendors.pl.
CHANGES:
-
Deprecated --nvidia/--nv in favor of the more consistent --gpu, which is easier to work with across multiple vendors for advanced gpu architecture data. Note for non nvidia, --gpu only adds codename, if available and different from arch name. For nvidia, it adds a lot more data.
-
Changed inxi-perl/tools tool names to more clearly reflect what function they serve.
-
Going with runlevel fixes, changed 'runlevel:' to be 'target:' if systemd. Also changed incorrect 'target:' for 'default:'.
DOCUMENTATION:
-
Updated man, help, docs/inxi-data.txt for new gpu data and tools, and to indicate switch to more generic --gpu trigger for advanced gpu data, instead of the now deprecated --nvidia/--nv, which probably will go down as the shortest lasting option documented, though of course inxi always keeps legacy syntax working, behind the scenes, it's just removed from the -h and man page in favor of --gpu. Also updated to show AMD/Intel/Nvidia now, since the data now roughly works for all three main gpus.
-
Updated pinxi README.txt to reflect the tools and how to use them and what they are for.
-
--help, man, updated for target/runlevel, default: changes for init data.
-
Updated configuration html and man for --fake-data-dir.
CODE:
-
Upgraded tools/gpu_ids.pl to handle nvidia, intel, or amd data, added data files in tools/lists/ for amd. First changed name from ids.pl to gpu_ids.pl
-
New data files added for amd/intel pci ids, and a new tool to merge them and prep them for gpu_ids.pl -j amd|intel handling. All work. Took a while to get these things sorted, but don't want to get stuck in future with manual updates, it needs to be automated as much as possible, same as with disk_vendors.pl etc, if I'm going to try to maintain this over time.
-
Made all gpu data file names use consistent formats, and made disk data files also follow this format.
-
Changed raw_ids.pl to gpu_raw.pl, trying to keep things easy to remember and consistent here.
-
Refactored core gpu data logic, now all types use the same sub, and just assign various data depending on the type.
-
Changed vendors.pl name to disk_vendors.pl
-
Big redo of array/hash handling in OutputHandler, was partially by reference, now is completely by reference. All Items now use and return $rows array ref as well, from start to finish, unlike previously, where @rows was copied repeatedly.
-
Going along with 7, made most internal passing of hash/arrays use hash/array references instead, where it makes sense, and doesn't make the code harder to work with.
-
Refactored WeatherItem, split apart the parts from output to be more like normal Items in terms of error handling etc.
-
Added 'ref' return option for reader() and grabber(). Only useful for very large data sets, added also default 'arr' if no value is provided for that argument.
-
Switched some features to use grabber/reader by ref on the off chance that will dump some execution time.
-
A few places added qr/.../ precompiled regex, in simple form, for loops, maybe it helps a little. I don't know.
-
Added global $fake_data_dir, this can be changed via configuration item: FAKE_DATA_DIR or one time by --fake-data-dir.
-
Created data directory, and initial data items. cpu is the fake data used to test CPU info. More will be added as data is checked and sanitized.
Fix low priority issues (#7413)
Thanks @svetkereMS for bringing this up, driving, and testing.
This fixes two interconnected issues. First, if a process starts at normal priority then changes to low priority, it stays at normal priority. That's good for Visual Studio, which should stay at normal priority, but we relied on passing priority from a parent process to children, which is no longer valid. This ensures that we set the priority of a process early enough that we get the desired priority in worker nodes as well.
Second, if we were already connected to normal priority worker nodes, we could keep using them. This "shuts down" (disconnects—they may keep running if nodeReuse is true) worker nodes when the priority changes between build submissions.
One non-issue (therefore not fixed) is connecting to task hosts that are low priority. Tasks host nodes currently do not store their priority or node reuse. Node reuse makes sense because it's automatically off always for task hosts, at least currently. Not storing low priority sounds problematic, but it's actually fine because we make a task host—the right priority for this build, since we just made it—and connect to it. If we make a new build with different priority, we disconnect from all nodes, including task hosts. Since nodeReuse is always false, the task host dies, and we cannot reconnect to it even though if it didn't immediately die, we could, erroneously.
On the other hand, we went a little further and didn't even specify that task hosts should take the priority assigned to them as a command line argument. That has been changed.
svetkereMS had a chance to test some of this. He raised a couple potential issues:
1. conhost.exe launches as normal priority. Maybe some custom task dlls or other (Mef?) extensions will do something between MSBuild start time and when its priority is adjusted.
2. Some vulnerability if MSBuild init code improperly accounts for timing.
For (1), how is conhost.exe related to MSBuild? It sounds like a command prompt thing. I don't know what Mef is. For (2), what vulnerability? Too many processes starting and connecting to task hosts with different priorities simultaneously? I could imagine that being a problem but don't think it's worth worrying about unless someone complains.
He also mentioned a potential optimization if the main node stays at normal priority. Rather than making a new set of nodes, the main node could change the priority of all its nodes to the desired priority. Then it can skip the handshake, and if it's still at normal priority, it may be able to both raise and lower the priority of its children. Since there would never be more than 2x the "right" number of nodes anyway, and I don't think people will be switching rapidly back and forth, I think maybe we should file that as an issue in the backlog and get to it if we have time but not worry about it right now.
Edit: I changed "shuts down...worker nodes when the priority changes" to just changing their priority. This does not work on linux or mac. However, Visual Studio does not run on linux or mac, and VS is the only currently known customer that runs in normal priority but may change between using worker nodes at normal priority or low priority. This approach is substantially more efficient than starting new nodes for every switch, disconnecting and reconnecting, or even maintaining two separate pools for different builds.
People listen up don't stand so close, I got somethin that you all should know. Holy matrimony is not for me, I'd rather die alone in misery.