3,199,386 events, 1,644,285 push events, 2,482,467 commit messages, 185,289,075 characters
Pain and Garbage
Fucked with the shaders for a bit, but didn't really get anywhere. Fucked with the flowmap for a bit, and kinda got somewhere but broke my brain after a few hours.
Makes tank explosions scale with volume and have diminishing returns. (Nerfs singlecaps) (#60600)
Changes tank explosions to take tank volume into account and to use sqrt scaling when calculating explosion range. This basically means that they scale faster at lower pressures and slower at higher pressures. Rebalances tank explosion scaling so that maxcap TTVs are where they used to be pressure-wise. Rebalances the research doppler array's cash generation algorithm so it maxes out at the same TTV pressure; this does mean that the doppler array will grant more points at lower explosion pressures. Rebalances the blastcannon shot range calculation so it scales as it used to with normal TTVs.
The comparatively tiny emergency tanks no longer produce the same size explosion as a TTV at the same pressure. It is much more difficult to carry around 70 maxcaps in a single duffle bag. (I don't think it renders this completely impossible but it does kill oxy-trit emergency tank singlecaps as far as I know.)
Lemon posting past this line.
How it works:
The change assumes maxcaps should be just as easy with the standard TTV setup of two 70 L tanks. So it divides the bomb's strength by 14, then scales it using dyn_explosion's (x*2)^0.5. If you graph it, the strength is exactly the same with a 140 L reaction vessel, but as volume goes down, strength falls off very quickly because of that division and the use of dyn_explosion.
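Very roughly, the resulting range calculation looks like this (an illustrative Python sketch, not the actual DM code; raw_strength is assumed to already scale with tank volume, so a standard two-tank 140 L TTV at maxcap pressure lands on the old numbers):

```python
import math

# Illustrative sketch only, not the actual DM implementation.
# raw_strength is assumed to already scale with tank volume, so a standard
# two-tank (140 L) TTV at maxcap pressure gives the same range as before.
def explosion_range(raw_strength: float) -> float:
    adjusted = raw_strength / 14        # the division described above
    return math.sqrt(adjusted * 2)      # dyn_explosion-style (x*2)^0.5 scaling
```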
Hopefully this will effectively disincentivize singlecapping, and remove the ever-present threat of someone leaking the station-leveling method.
Reasoning for when github blows up:
I don't think single caps are on the same level as typical atmos antag threats. They're a hell of a problem: 1) tanks should explode when someone hyper-pressurizes them; 2) we want all tank explosions to act the same, for the sake of a believable world; 3) really well put together tank explosions (TTVs) should be really powerful; 4) reaction code is a son of a bitch.
I do think knowledge gating has some place. Knowing how to do something well should have a benefit, but that isn't, like, an ultimate truth. I've seen what proper, full-on atmos autism single capping looks like. I don't like that level of absolute destruction at speed being feasible, full stop.
I consider single caps to be a necessary side effect of how explosion code works. I think it's really cool that people have gotten so deep into this game and the systems around it that they've started optimizing this side effect into a tool/bragging rights thing. But I'm still not a huge fan. If big booms are gated only by knowledge, then as soon as that knowledge spreads we're fucked. I've seen this happen before with things like rad batteries (cue crit being cringe). It's not just single caps mind, the destruction you can make with em scales with knowledge.
I'm not in love with this PR, mind, because it means I need to worry about bomb code when someone makes some silly tank volume balance PR. But it's a good solution, better than what's been tried in the past, and it still leaves space for things just blowing up in your face without maxcaps easily coming into the equation.
Lowers the cost of the obsessed midround ruleset from 10 to 3. (#61370)
Obsessed is a really weak antagonist whose objectives revolve around creeping on a single crewmember. He doesn't have any special ability whatsoever other than suffering from heavy butterflies in the stomach when his mood is great or above, and his presence barely affects the round. It shouldn't cost the same as other rulesets like swarmers, pirates, ninjas and nightmare, or even latejoin traitors.
Fixing race condition, read description
Pasting ibhagwan's lovely description of the issue.
I just solved a bug in fzf-lua that uses similar code to choices_to_shell_cmd_previewer, it might affect nvim-fzf so you may want to look into it.
While implementing a "remembering" live_grep (i.e. every keystroke gets saved so that you can continue your live search exactly where you left off), I had implemented a raw_async_function with code borrowed from choices_to_shell_cmd_previewer that runs rg in the background and populates the results in fzf.
While testing this I noticed that the result set would vary randomly in size and the function was returning a partial result set.
Troubleshooting the issue, I discovered that read_cb could be called multiple times before even a single uv.write is complete. Imagine a print at the start of read_cb and in the completion callback of uv.write; the printout would more often than not look like this:
read_cb
uv.write
read_cb
uv.write
read_cb
read_cb
read_cb   <-- end of data
uv.write
uv.write
uv.write
Now let's assume that the marked read_cb is the last call and is sent with data = nil - that would trigger a cleanup() call, close the fzf pipe, and all the following uv.write calls would error.
Hope this makes sense over text.
Now for the solution: I implemented a simple read_cb_count variable and made sure that the pipe wouldn't be closed until the last uv.write call is complete (i.e. read_cb_count == 0).
You can take a look at the code here, hopefully this would make sense after you read the code, lmk your thoughts?
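For reference, here is a rough sketch of the counting idea in Python (the actual fix is Lua against libuv inside fzf-lua/nvim-fzf; the pipe interface used here is hypothetical):

```python
# Rough illustration of the read_cb_count idea; the real fix is Lua/libuv code,
# and the write(data, on_done=...) / close() interface here is hypothetical.
class PipeWriter:
    def __init__(self, pipe):
        self.pipe = pipe
        self.pending_writes = 0   # plays the role of read_cb_count
        self.eof_seen = False

    def on_read(self, data):
        if data is None:          # producer finished (end of data)
            self.eof_seen = True
            self._maybe_close()
            return
        self.pending_writes += 1
        self.pipe.write(data, on_done=self._write_done)

    def _write_done(self):
        self.pending_writes -= 1
        self._maybe_close()

    def _maybe_close(self):
        # Only close once EOF was seen AND every write completed, so a late
        # write completion can never hit an already-closed pipe.
        if self.eof_seen and self.pending_writes == 0:
            self.pipe.close()
```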
Update README.md
Linux kernel release 4.x http://kernel.org/
These are the release notes for Linux version 4. Read them carefully, as they tell you what this is all about, explain how to install the kernel, and what to do if something goes wrong.
Linux is a clone of the operating system Unix, written from scratch by Linus Torvalds with assistance from a loosely-knit team of hackers across the Net. It aims towards POSIX and Single UNIX Specification compliance.
It has all the features you would expect in a modern fully-fledged Unix, including true multitasking, virtual memory, shared libraries, demand loading, shared copy-on-write executables, proper memory management, and multistack networking including IPv4 and IPv6.
It is distributed under the GNU General Public License v2 - see the accompanying COPYING file for more details.
Although originally developed first for 32-bit x86-based PCs (386 or higher), today Linux also runs on (at least) the Compaq Alpha AXP, Sun SPARC and UltraSPARC, Motorola 68000, PowerPC, PowerPC64, ARM, Hitachi SuperH, Cell, IBM S/390, MIPS, HP PA-RISC, Intel IA-64, DEC VAX, AMD x86-64, AXIS CRIS, Xtensa, Tilera TILE, ARC and Renesas M32R architectures.
Linux is easily portable to most general-purpose 32- or 64-bit architectures as long as they have a paged memory management unit (PMMU) and a port of the GNU C compiler (gcc) (part of The GNU Compiler Collection, GCC). Linux has also been ported to a number of architectures without a PMMU, although functionality is then obviously somewhat limited. Linux has also been ported to itself. You can now run the kernel as a userspace application - this is called UserMode Linux (UML).
-
There is a lot of documentation available both in electronic form on the Internet and in books, both Linux-specific and pertaining to general UNIX questions. I'd recommend looking into the documentation subdirectories on any Linux FTP site for the LDP (Linux Documentation Project) books. This README is not meant to be documentation on the system: there are much better sources available.
-
There are various README files in the Documentation/ subdirectory: these typically contain kernel-specific installation notes for some drivers, for example. See Documentation/00-INDEX for a list of what is contained in each file. Please read the Documentation/process/changes.rst file, as it contains information about problems that may result from upgrading your kernel.
-
If you install the full sources, put the kernel tarball in a directory where you have permissions (e.g. your home directory) and unpack it::
xz -cd linux-4.X.tar.xz | tar xvf -
Replace "X" with the version number of the latest kernel.
Do NOT use the /usr/src/linux area! This area has a (usually incomplete) set of kernel headers that are used by the library header files. They should match the library, and not get messed up by whatever the kernel-du-jour happens to be.
-
You can also upgrade between 4.x releases by patching. Patches are distributed in the xz format. To install by patching, get all the newer patch files, enter the top level directory of the kernel source (linux-4.X) and execute::
xz -cd ../patch-4.x.xz | patch -p1
Replace "x" for all versions bigger than the version "X" of your current source tree, in_order, and you should be ok. You may want to remove the backup files (some-file-name~ or some-file-name.orig), and make sure that there are no failed patches (some-file-name# or some-file-name.rej). If there are, either you or I have made a mistake.
Unlike patches for the 4.x kernels, patches for the 4.x.y kernels (also known as the -stable kernels) are not incremental but instead apply directly to the base 4.x kernel. For example, if your base kernel is 4.0 and you want to apply the 4.0.3 patch, you must not first apply the 4.0.1 and 4.0.2 patches. Similarly, if you are running kernel version 4.0.2 and want to jump to 4.0.3, you must first reverse the 4.0.2 patch (that is, patch -R) before applying the 4.0.3 patch. You can read more on this in Documentation/process/applying-patches.rst.
Alternatively, the script patch-kernel can be used to automate this process. It determines the current kernel version and applies any patches found::
linux/scripts/patch-kernel linux
The first argument in the command above is the location of the kernel source. Patches are applied from the current directory, but an alternative directory can be specified as the second argument.
-
Make sure you have no stale .o files and dependencies lying around::
cd linux
make mrproper
You should now have the sources correctly installed.
Compiling and running the 4.x kernels requires up-to-date versions of various software packages. Consult Documentation/process/changes.rst for the minimum version numbers required and how to get updates for these packages. Beware that using excessively old versions of these packages can cause indirect errors that are very difficult to track down, so don't assume that you can just update packages when obvious problems arise during build or operation.
When compiling the kernel, all output files will by default be stored together with the kernel source code. Using the option "make O=output/dir" allows you to specify an alternate place for the output files (including .config).
Example::
kernel source code: /usr/src/linux-4.X
build directory: /home/name/build/kernel
To configure and build the kernel, use::
cd /usr/src/linux-4.X
make O=/home/name/build/kernel menuconfig
make O=/home/name/build/kernel
sudo make O=/home/name/build/kernel modules_install install
Please note: if the "O=output/dir" option is used, then it must be used for all invocations of make.
Do not skip this step even if you are only upgrading one minor version. New configuration options are added in each release, and odd problems will turn up if the configuration files are not set up as expected. If you want to carry your existing configuration to a new version with minimal work, use "make oldconfig", which will only ask you for the answers to new questions.
-
Alternative configuration commands are::
"make config" Plain text interface.
"make menuconfig" Text based color menus, radiolists & dialogs.
"make nconfig" Enhanced text based color menus.
"make xconfig" Qt based configuration tool.
"make gconfig" GTK+ based configuration tool.
"make oldconfig" Default all questions based on the contents of your existing ./.config file and asking about new config symbols.
"make silentoldconfig" Like above, but avoids cluttering the screen with questions already answered. Additionally updates the dependencies.
"make olddefconfig" Like above, but sets new symbols to their default values without prompting.
"make defconfig" Create a ./.config file by using the default symbol values from either arch/$ARCH/defconfig or arch/$ARCH/configs/${PLATFORM}_defconfig, depending on the architecture.
"make ${PLATFORM}_defconfig" Create a ./.config file by using the default symbol values from arch/$ARCH/configs/${PLATFORM}_defconfig. Use "make help" to get a list of all available platforms of your architecture.
"make allyesconfig" Create a ./.config file by setting symbol values to 'y' as much as possible.
"make allmodconfig" Create a ./.config file by setting symbol values to 'm' as much as possible.
"make allnoconfig" Create a ./.config file by setting symbol values to 'n' as much as possible.
"make randconfig" Create a ./.config file by setting symbol values to random values.
"make localmodconfig" Create a config based on current config and loaded modules (lsmod). Disables any module option that is not needed for the loaded modules.
To create a localmodconfig for another machine, store the lsmod of that machine into a file and pass it in as a LSMOD parameter::
target$ lsmod > /tmp/mylsmod
target$ scp /tmp/mylsmod host:/tmp
host$ make LSMOD=/tmp/mylsmod localmodconfig
The above also works when cross compiling.
"make localyesconfig" Similar to localmodconfig, except it will convert all module options to built in (=y) options.
You can find more information on using the Linux kernel config tools in Documentation/kbuild/kconfig.txt.
-
NOTES on "make config":
-
Having unnecessary drivers will make the kernel bigger, and can under some circumstances lead to problems: probing for a nonexistent controller card may confuse your other controllers.
-
A kernel with math-emulation compiled in will still use the coprocessor if one is present: the math emulation will just never get used in that case. The kernel will be slightly larger, but will work on different machines regardless of whether they have a math coprocessor or not.
-
The "kernel hacking" configuration details usually result in a bigger or slower kernel (or both), and can even make the kernel less stable by configuring some routines to actively try to break bad code to find kernel problems (kmalloc()). Thus you should probably answer 'n' to the questions for "development", "experimental", or "debugging" features.
-
-
Make sure you have at least gcc 3.2 available. For more information, refer to Documentation/process/changes.rst.
Please note that you can still run a.out user programs with this kernel.
-
Do a "make" to create a compressed kernel image. It is also possible to do "make install" if you have lilo installed to suit the kernel makefiles, but you may want to check your particular lilo setup first.
To do the actual install, you have to be root, but none of the normal build should require that. Don't take the name of root in vain.
-
If you configured any of the parts of the kernel as "modules", you will also have to do "make modules_install".
-
Verbose kernel compile/build output:
Normally, the kernel build system runs in a fairly quiet mode (but not totally silent). However, sometimes you or other kernel developers need to see compile, link, or other commands exactly as they are executed. For this, use "verbose" build mode. This is done by passing "V=1" to the "make" command, e.g.::
make V=1 all
To have the build system also tell the reason for the rebuild of each target, use "V=2". The default is "V=0".
-
Keep a backup kernel handy in case something goes wrong. This is especially true for the development releases, since each new release contains new code which has not been debugged. Make sure you keep a backup of the modules corresponding to that kernel, as well. If you are installing a new kernel with the same version number as your working kernel, make a backup of your modules directory before you do a "make modules_install".
Alternatively, before compiling, use the kernel config option "LOCALVERSION" to append a unique suffix to the regular kernel version. LOCALVERSION can be set in the "General Setup" menu.
-
In order to boot your new kernel, you'll need to copy the kernel image (e.g. .../linux/arch/x86/boot/bzImage after compilation) to the place where your regular bootable kernel is found.
-
Booting a kernel directly from a floppy without the assistance of a bootloader such as LILO is no longer supported.
If you boot Linux from the hard drive, chances are you use LILO, which uses the kernel image as specified in the file /etc/lilo.conf. The kernel image file is usually /vmlinuz, /boot/vmlinuz, /bzImage or /boot/bzImage. To use the new kernel, save a copy of the old image and copy the new image over the old one. Then, you MUST RERUN LILO to update the loading map! If you don't, you won't be able to boot the new kernel image.
Reinstalling LILO is usually a matter of running /sbin/lilo. You may wish to edit /etc/lilo.conf to specify an entry for your old kernel image (say, /vmlinux.old) in case the new one does not work. See the LILO docs for more information.
After reinstalling LILO, you should be all set. Shut down the system, reboot, and enjoy!
If you ever need to change the default root device, video mode, ramdisk size, etc. in the kernel image, use the "rdev" program (or alternatively the LILO boot options when appropriate). There is no need to recompile the kernel to change these parameters.
-
Reboot with the new kernel and enjoy.
-
If you have problems that seem to be due to kernel bugs, please check the file MAINTAINERS to see if there is a particular person associated with the part of the kernel that you are having trouble with. If there isn't anyone listed there, then the second best thing is to mail them to me ([email protected]), and possibly to any other relevant mailing-list or to the newsgroup.
-
In all bug-reports, please tell what kernel you are talking about, how to duplicate the problem, and what your setup is (use your common sense). If the problem is new, tell me so, and if the problem is old, please try to tell me when you first noticed it.
-
If the bug results in a message like::
unable to handle kernel paging request at address C0000010
Oops: 0002
EIP: 0010:XXXXXXXX
eax: xxxxxxxx ebx: xxxxxxxx ecx: xxxxxxxx edx: xxxxxxxx
esi: xxxxxxxx edi: xxxxxxxx ebp: xxxxxxxx
ds: xxxx es: xxxx fs: xxxx gs: xxxx
Pid: xx, process nr: xx
xx xx xx xx xx xx xx xx xx xx
or similar kernel debugging information on your screen or in your system log, please duplicate it exactly. The dump may look incomprehensible to you, but it does contain information that may help debugging the problem. The text above the dump is also important: it tells something about why the kernel dumped code (in the above example, it's due to a bad kernel pointer). More information on making sense of the dump is in Documentation/admin-guide/oops-tracing.rst
-
If you compiled the kernel with CONFIG_KALLSYMS you can send the dump as is, otherwise you will have to use the "ksymoops" program to make sense of the dump (but compiling with CONFIG_KALLSYMS is usually preferred). This utility can be downloaded from https://www.kernel.org/pub/linux/utils/kernel/ksymoops/ . Alternatively, you can do the dump lookup by hand:
-
In debugging dumps like the above, it helps enormously if you can look up what the EIP value means. The hex value as such doesn't help me or anybody else very much: it will depend on your particular kernel setup. What you should do is take the hex value from the EIP line (ignore the "0010:"), and look it up in the kernel namelist to see which kernel function contains the offending address.
To find out the kernel function name, you'll need to find the system binary associated with the kernel that exhibited the symptom. This is the file 'linux/vmlinux'. To extract the namelist and match it against the EIP from the kernel crash, do::
nm vmlinux | sort | less
This will give you a list of kernel addresses sorted in ascending order, from which it is simple to find the function that contains the offending address. Note that the address given by the kernel debugging messages will not necessarily match exactly with the function addresses (in fact, that is very unlikely), so you can't just 'grep' the list: the list will, however, give you the starting point of each kernel function, so by looking for the function that has a starting address lower than the one you are searching for but is followed by a function with a higher address you will find the one you want. In fact, it may be a good idea to include a bit of "context" in your problem report, giving a few lines around the interesting one.
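The same by-hand lookup can also be scripted; a small illustrative Python helper (hypothetical, not part of the kernel tree; it assumes nm is on the PATH and only considers text-segment symbols) would be::
import bisect
import subprocess

# Hypothetical helper: given a vmlinux and an EIP value, find the text symbol
# whose start address is the highest one still <= EIP, i.e. the function that
# contains the offending address.
def find_function(vmlinux_path, eip):
    out = subprocess.run(["nm", vmlinux_path],
                         capture_output=True, text=True, check=True).stdout
    symbols = []
    for line in out.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[1].lower() == "t":   # text (code) symbols
            symbols.append((int(parts[0], 16), parts[2]))
    symbols.sort()
    addrs = [addr for addr, _ in symbols]
    i = bisect.bisect_right(addrs, eip) - 1               # last start <= EIP
    if i < 0:
        raise ValueError("EIP below the first text symbol")
    addr, name = symbols[i]
    return "%s+0x%x" % (name, eip - addr)

# Example: print(find_function("vmlinux", 0xc01a2b3c))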
If you for some reason cannot do the above (you have a pre-compiled kernel image or similar), telling me as much about your setup as possible will help. Please read the admin-guide/reporting-bugs.rst document for details.
-
Alternatively, you can use gdb on a running kernel (read-only; i.e. you cannot change values or set breakpoints). To do this, first compile the kernel with -g; edit arch/x86/Makefile appropriately, then do a "make clean". You'll also need to enable CONFIG_PROC_FS (via "make config").
After you've rebooted with the new kernel, do "gdb vmlinux /proc/kcore". You can now use all the usual gdb commands. The command to look up the point where your system crashed is "l *0xXXXXXXXX". (Replace the XXXes with the EIP value.)
gdb'ing a non-running kernel currently fails because gdb (wrongly) disregards the starting offset for which the kernel is compiled.
Fix RLP serialisation of seq[Transaction] used in the eth protocol
- Generalises the special cases for serialising RLP seq[Transaction]. Previously it only used the special case inside BlockBody and EthBlock. Now it uses it for all seq[Transaction] regardless of what objects they are part of, or no object at all. openArray[Transaction] is also included, as this was found to be necessary to match in some places.
- Bug fix parsing Transaction: always read the first byte to get the transaction type instead of parsing an RLP int. This way invalid or adversarial input gives a correct error (i.e. invalid type code). When it was read with rlp.read(int), those inputs gave many crazy messages (e.g. "too large to fit in memory"). In the specification it's a byte. (Technically the input is not RLP and we shouldn't be using the RLP parser anyway to parse standalone transaction objects.)
- Bug fix parsing Transaction: if a typed transaction is detected in seq[Transaction], the previous code removed the RLP (blob) wrapper, then passed the contents to read(Transaction). That meant a blob-wrapped legacy transaction would be accepted, which is incorrect. The new code passes the contents to the typed transaction decoder, which correctly rejects a wrapped legacy transaction as having an invalid type.
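Roughly, the first-byte dispatch looks like this (an illustrative Python sketch of the rule described above, not the Nim code in nimbus-eth1):

```python
# Hedged sketch of the decode rule: a single type byte (0x00..0x7f) selects a
# typed transaction envelope (EIP-2718), while an RLP list prefix (>= 0xc0)
# means a legacy transaction encoded as RLP([fields..]).
def split_transaction(payload: bytes):
    """Return ("legacy", rlp_bytes) or ("typed", type_byte, inner_payload)."""
    if not payload:
        raise ValueError("empty transaction payload")
    first = payload[0]
    if first >= 0xC0:
        return ("legacy", payload)          # RLP list: legacy transaction
    if first <= 0x7F:
        return ("typed", first, payload[1:])  # type byte + type-specific payload
    raise ValueError(f"invalid transaction type byte: {first:#04x}")
```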
Change 1 has a large, practical effect on eth/65 syncing with peers. Serialisation of the eth message types Transactions and PooledTransactions has been broken since the introduction of typed transactions (EIP-2718), as used in the Berlin/London forks. (The special case for seq[Transaction] inside BlockBody only fixed message type BlockBodies.)
Due to this, whenever a peer sent us a Transactions message, we had an RLP decoding error processing it and disconnected the peer, thinking it was the peer's error. These messages are sent often by good peers, so whenever we connected to a really good peer, we'd end up disconnecting from it within a few tens of seconds because of this.
This didn't get noticed before updating to eth/65, because with old protocols we tend to connect only to old peers, which may be out of date themselves and have no typed transactions. Also, we didn't really investigate occasional disconnects before; we assumed they were just part of P2P life.
The root cause is that the RLP serialisation of an individual Transaction is meant to be subtly different from that of arrays/sequences of Transaction objects in network messages. EIP-2976 covers this, but it's quite subtle:
- Individual transactions are encoded and stored as either RLP([fields..]) for legacy transactions, or Type || RLP([fields..]) for typed transactions. Both of these encodings are byte sequences. The part after Type doesn't have to be RLP in theory, but all types so far use RLP. EIP-2718 covers this.
- In arrays (sequences), transactions are encoded as either RLP([fields..]) for legacy transactions, or RLP(Type || RLP([fields..])) for all typed transactions to date. Spot the extra RLP(..) blob encoding, which makes it valid RLP inside a larger RLP. EIP-2976, "Typed Transactions over Gossip", covers this, although it's not very clear about the blob encoding.
In practice the extra RLP(..) applies to all arrays/sequences of transactions that are to be RLP-encoded as a list. In principle, it should be all aggregates (object fields etc.), but it's enough for us to enable it for all arrays/sequences, as this is what's used in the protocol and EIP-2976.
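As an illustration of the two encodings, here is a Python sketch with a minimal stand-in RLP encoder (not the Nim implementation; fields are assumed to already be byte strings):

```python
# Minimal RLP encoder for illustration: bytes -> RLP string, list -> RLP list.
def rlp_encode(item) -> bytes:
    if isinstance(item, bytes):
        if len(item) == 1 and item[0] < 0x80:
            return item
        return _length_prefix(len(item), 0x80) + item
    payload = b"".join(rlp_encode(x) for x in item)
    return _length_prefix(len(payload), 0xC0) + payload

def _length_prefix(length: int, offset: int) -> bytes:
    if length < 56:
        return bytes([offset + length])
    length_bytes = length.to_bytes((length.bit_length() + 7) // 8, "big")
    return bytes([offset + 55 + len(length_bytes)]) + length_bytes

def encode_single(tx_type, fields) -> bytes:
    """tx_type None => legacy: RLP([fields..]); else Type || RLP([fields..])."""
    if tx_type is None:
        return rlp_encode(fields)
    return bytes([tx_type]) + rlp_encode(fields)

def encode_sequence(txs) -> bytes:
    """txs: list of (tx_type, fields). Legacy items stay nested RLP lists;
    typed items are wrapped as RLP byte strings (blobs), per EIP-2976."""
    items = [fields if t is None else encode_single(t, fields) for t, fields in txs]
    return rlp_encode(items)
```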
Signed-off-by: Jamie Lokier [email protected]
Helix Testing (#6992)
Use the Helix testing orchestration framework to run our Terminal LocalTests and Console Host UIA tests.
- #7281 - re-enable local tests that were disabled to turn on Helix
- #7282 - re-enable UIA tests that were disabled to turn on Helix
- #7286 - investigate and implement appropriate compromise solution to how Skipped is handled by MUX Helix scripts
- #7164 - The update to TAEF includes wttlog.dll. The WTT logs are what MUX's Helix scripts use to track the run state, convert to XUnit format, and notify both Helix and AzDO of what's going on.
- #671 - Making Terminal UIA tests is now possible
- #6963 - MUX's Helix scripts are already ready to capture PGO data on the Helix machines as certain tests run. Presuming we can author some reasonable scenarios, turning on the Helix environment gets us a good way toward automated PGO.
- #4490 - We lost the AzDO integration of our test data when I moved from the TAEF/VSTest adapter directly back to TE. Thanks to the WTTLog + Helix conversion scripts to XUnit + new upload phase, we have it back!
- Closes #3838
- I work here.
- Literally adds tests.
- Should I update a testing doc in this repo?
- Am core contributor. Hear me roar.
- Correct spell-checking the right way before merge.
We have had two classes of tests that don't work in our usual build-machine testing environment:
- Tests that require interactive UI automation or input injection (a.k.a. require a logged in user)
- Tests that require the entire Windows Terminal to stand up (because our Xaml Islands dependency requires 1903 or later and the Windows Server instance for the build is based on 1809.)
The Helix testing environment solves both of these and is brought to us by our friends over in https://github.com/microsoft/microsoft-ui-xaml.
This PR takes a large portion of scripts and pipeline configuration steps from the Microsoft-UI-XAML repository and adjusts them for Terminal needs. You can see the source of most of the files in either https://github.com/microsoft/microsoft-ui-xaml/tree/master/build/Helix or https://github.com/microsoft/microsoft-ui-xaml/tree/master/build/AzurePipelinesTemplates
Some of the modifications in the files include (but are not limited to) reasons like:
- Our test binaries are named differently than MUX's test binaries
- We don't need certain types of testing that MUX does.
- We use C++ and C# tests while MUX was using only C# tests (so the naming pattern and some of the parsing of those names is different e.g. :: separators in C++ and . separators in C#)
- Our pipeline phases work a bit differently than MUX and/or we need significantly fewer pieces to the testing matrix (like we don't test a wide variety of OS versions).
The build now runs in a few stages:
- The usual build and run of unit tests/feature tests, packaging verification, and whatnot. This phase now also picks up and packs anything required for running tests in Helix into an artifact. (It also unifies the artifact name between the things Helix needs and the existing build outputs into the single "drop" artifact to make life a little easier.)
- The Helix preparation build then runs: it picks up those artifacts, generates all the scripts required for Helix to understand the test modules/functions from our existing TAEF tests, packs it all up, and queues it on the Helix pool.
- Helix generates a VM for our testing environment and runs all the TAEF tests that require it. The orchestrator at helix.dot.net watches over this and tracks the success/fail and progress of each module and function. The scripts from our MUX friends handle installing dependencies, making the system quiet for better reliability, detecting flaky tests and rerunning them, and coordinating all the log uploads (including for the subruns of tests that are re-run.)
- A final build phase is run to look through the results with the Helix API and clean up the marking of tests that are flaky, link all the screenshots and console output logs into the AzDO tests panel, and other such niceties.
We are set to run Helix tests on the Feature test policy of only x64 for now.
Additionally, because the setup of the Helix VMs takes so long, we are NOT running these on the PR trigger right now, as I believe we all very much value our 15ish minute PR turnaround (and the VM takes another 15 minutes just to get going, for whatever reason.) For now, they will only run as a rolling build on master after PRs are merged. We should still know when there's an issue within about an hour of something merging, and multiple PRs merging quickly will be handled on the rolling build as a batch run (not one run per PR).
In addition to setting up the entire Helix testing pipeline for the tests that require it, I've preserved our classic way of running unit and feature tests (that don't require an elaborate environment) directly on the build machines. But with one bonus feature... They now use some of the scripts from MUX to transform their log data and report it to AzDO so it shows up beautifully in the build report. (We used to have this before I removed the MStest/VStest wrapper for performance reasons, but now we can have reporting AND performance!) See https://dev.azure.com/ms/terminal/_build/results?buildId=101654&view=ms.vss-test-web.build-test-results-tab for an example.
I explored running all of the tests on Helix but.... the Helix setup time is long and the resources are more expensive. I felt it was better to preserve the "quick signal" by continuing to run these directly on the build machine (and skipping the more expensive/slow Helix setup if they fail.) It also works well with the split between PR builds not running Helix and the rolling build running Helix. PR builds will get a good chunk of tests for a quick turn around and the rolling build will finish the more thorough job a bit more slowly.
- Ran the updated pipelines with Pull Request configuration ensuring that Helix tests don't run in the usual CI
- Ran with simulation of the rolling build to ensure that the tests now running in Helix will pass. All failures marked for follow on in reference issues.
mm, fs: Add vm_ops->name as an alternative to arch_vma_name
arch_vma_name sucks. It's a silly hack, and it's annoying to implement correctly. In fact, AFAICS, even the straightforward x86 implementation is incorrect (I suspect that it breaks if the vdso mapping is split or gets remapped).
This adds a new vm_ops->name operation that can replace it. The followup patches will remove all uses of arch_vma_name on x86, fixing a couple of annoyances in the process.
Signed-off-by: Andy Lutomirski [email protected]
Link: http://lkml.kernel.org/r/2eee21791bb36a0a408c5c2bdb382a9e6a41ca4a.1400538962.git.luto@amacapital.net
Signed-off-by: H. Peter Anvin [email protected]
"8:55am. I am really into Ender Lilies, so much that I've been slipping into a habit of going to bed later just to get more game time in. Yesterday's gaming session was immensely satisfying. This time the Youtube sidebar did a really good thing by recommending me this game's OST.
On a side note, as annoying as Anti was, he is probably right that Hollow Knight is great. I should give it a try at some point in the future.
9:05am. Any mail? No.
9:35am. Let me start. What should I do first?
Let me check out if Julia has effect handlers.
https://github.com/MikeInnes/Effects.jl
[Caveat Emptor: Julia's compiler is not designed to handle this kind of code and reserves the right to complain / be slow. Also, while everything theoretically composes nicely, this is not thoroughly tested.]
http://pyro.ai/examples/effect_handlers.html
Pyro has effect handlers? That is interesting.
One choice that I have is to stick with Python and check out Pyro.
https://discourse.julialang.org/t/minijyro-a-toy-pyro-like-ppl/37356/2
Turing contexts and Pyro effect handlers seem almost identical. Both target modifying the behavior of a probabilistic program without manually modifying it. Turing contexts have zero run-time overhead though thanks to the beautiful Julia compiler.
The zero run-time overhead can be achieved because Julia compiles a different method for each context type. There is nothing Turing-specific here. We just have different types for different contexts and use multiple dispatch to define the specialized behavior of each context. Then Julia does the rest.
http://pyro.ai/examples/effect_handlers.html Poutine: A Guide to Programming with Effect Handlers in Pyro
9:50am. I am familiar with DL, but PPLs are new to me.
http://pyro.ai/examples/intro_part_i.html
It seems that Pyro also uses explicit addresses which is fine for Python. I guess I'll study these tutorials.
9:55am. Yeah, I can't just ignore what is possible on the Python side of things. Whether I want to use Python or Julia depends whether or not I want to stick with NNs. If I want to try a no-NN run I am going to need Julia's speed benefits.
10:25am. I am having a crystallizing experience as I think of the implications of using PPLs to improve tabular CFR.
10:45am.
Pyro is built to enable stochastic variational inference, a powerful and widely applicable class of variational inference algorithms with three key characteristics:
11:05am. It is all coming to me. Let me take a short break here.
11:25am. Yeah, it is coming to me. When I first saw CFR I was really surprised. Unlike RL, it did some kind of Bayesian thing with the averaging of categorical policies. The reason this was so mindblowing is because it extended vanilla tabular RL which I could not even imagine how to do at the time.
11:30am. What would be the next step past that? I thought that NNs were it. But I was wrong. CFR is really a probabilistic program with just a single categorical variable followed by an observation of the reward.
No, not CFR itself - a single node is what I am referring to.
Then the whole of CFR would be a nested Bayesian optimization process. Vanilla sampling CFR cannot account for variance reduction. Using the game state as node addresses would just make it MC.
It is possible to go further. Replace the uniform policy with a richer probabilistic program, and what tabular RL gives us is a way of doing nested Bayesian inference. Nothing else makes sense for propagating credit across arbitrary program nodes.
Nested Bayesian inference is just asynchronous inference. As long as there is a controller to distribute the work to different nodes it will work. The only real requirement is that the traces be non-recursive.
This requirement is something that deep RL breaks absolutely. It is probably the main reason why the optimization is so harsh there.
Yeah, I was aware that I was breaking the non-recursive trace requirement by moving to deep RL, but I did not know how to deal with it.
12:10pm. Let me have breakfast here. I spent all this time in thought instead of reading the Pyro docs, but who cares about that.
I feel like I have it. I am on the cusp of breaking into 3/5 in ML. In order to actually do it I will have to execute and practice my plan, but it will be doable with effort. Without a doubt, I'll be able to make that poker agent if I follow this path.
12:20pm. I think that seeing Gen and now Pyro use addresses explicitly must have jogged my mind. My first instinct is to see this as a design error, but in fact explicit addressing is what would allow me to bridge tabular RL and PProg.
I came up with the elegant trace scheme yesterday, but there is no rule that it has to be done that way. In fact this way is easier as I can do replace and causal inference much more readily.
12:25pm. Addressing is at the root of everything in PProg. It seems like a side thing, but it is really a front and center concern.
12:30pm. I've changed my mind on Gen. It is not a bad choice to make addressing explicit even if it adds verbiage. It would be easy to make a check to ensure that the same var is not being sampled from twice.
The issue of addressing is something I should be flexible on. In fact, the reason why Bayesian math is so confusing is because no address is associated with it. Neither is CPSing.
The lens of probabilistic programming is the right way to approach Bayesian reasoning.
12:40pm. Let me have breakfast here instead of just ranting.
This is so great. A month ago when I threw in the towel I was broken, but now I see hope again. I am really lucky after all."
[IMP] hr_work_entry: Improve work entries generation perf
The work entries generation does not scale properly for large employee datasets. A lot of improvements have been made to reduce the number of queries to the database. However, one of the remaining bottlenecks was the insertion of new records, made one by one in the database.
Since the model doesn't have a lot of columns, and doesn't have columns containing too much data (HTML fields for example), we could imagine inserting the records in batches, for example 1000 by 1000, as is done with SELECT.
Note that only the call to the create method is tracked, not the preprocessing time to retrieve the work entries values.
records - Elapsed Time (s) - AVG Time per record (s)
10 - 0.0045862197875976 - 0.00045862197875976
432 - 0.1101946830749511 - 0.00025508028489572
192 - 0.0525383949279785 - 0.00027363747358322
522 - 0.1418645381927490 - 0.00027177114596312
892 - 0.2803149223327636 - 0.00031425439723404
8800 - 3.4928441047668457 - 0.00039691410281441
88000 - 188.82338452339172 - 0.00214572027867490
176000 - 1172.4313135147095 - 0.00666154155406084
We observe that, before this commit, the create method no longer scales from about 10,000 new records upward.
For a company with 1000 employees, the mean number of work entries per month is 1000 * 2 * 21 = 42,000 work entries, and the time to create the records is not acceptable.
records - Elapsed Time (s) - AVG Time per record (s)
10 - 0.003406763076782 - 0.0003406763076782
432 - 0.075028181076049 - 0.0001736763450834
192 - 0.030718326568603 - 0.0001599912842114
522 - 0.084228038787841 - 0.0001613563961452
892 - 0.145947217941284 - 0.0001636179573332
8800 - 2.204506397247314 - 0.0002505120905962
88000 - 181.9736533164978 - 0.0020678824240511
176000 - 1146.928646564483 - 0.0065166400372981
We observe a slight improvement, but clearly not enough and not worth the complexity of inserting the records in batches in the database.
In fact, this is a limitation of PostgreSQL. In the implementation that manages transactions, when the number of inserts exceeds certain memory limits, it suddenly falls back to slower alternative storage (this is the same problem as with SELECT and large tuples of ids).
When we inject 100,000 ids into a query, PostgreSQL parses the query (it may already have some trouble there), then it stores the ids in a data structure: a hash if not too large, otherwise something else (on the file system). Then it executes the query and uses the data structure to check validity (membership).
Instead of inserting the records in batches in the same transaction, it would be more interesting to split the work into several transactions: each transaction processes N employees who have not yet been processed, until there are none left. With the new cron trigger mechanism, it is possible to do the following (see the sketch after this list):
- When the cron runs, it processes N employees (N to be determined)
- If some employees remain, the cron retriggers itself at the end of its transaction. That way, the cron is scheduled once per day, but it is retriggered as many times as necessary each month.
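A rough Python sketch of that batch-and-retrigger pattern (model, field and cron XML id names here are hypothetical placeholders, not the actual hr_work_entry code; it assumes the ir.cron _trigger() API referred to above):

```python
# Rough sketch of the batch-and-retrigger idea described above.
# Field, helper and cron names are hypothetical, not the actual Odoo code.
from odoo import models

BATCH_SIZE = 100  # "N" employees per cron run, to be tuned

class HrEmployee(models.Model):
    _inherit = 'hr.employee'

    def _cron_generate_work_entries(self):
        # Process only a slice of the employees that still need work entries.
        todo = self.search([('work_entries_pending', '=', True)], limit=BATCH_SIZE)
        if not todo:
            return
        todo._generate_work_entries()               # hypothetical generation helper
        todo.write({'work_entries_pending': False})
        # If employees remain, retrigger the same cron at the end of this
        # transaction instead of waiting for tomorrow's scheduled run.
        if self.search_count([('work_entries_pending', '=', True)]):
            self.env.ref('hr_work_entry.cron_generate_work_entries')._trigger()
```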
Regarding the number of employees to process, with 100 employees, we can expect:
100 * 2 (morning / evening) * 21 (working days) = 4200
work entries to generate, which is manageable given the above measures.
TaskID: 2646056
fuck this shit cunt fucking dumb as fuck testing cunt fucker library. Just let me import my fucking code to test. You dumb slut fuck
"1:35pm. Done with breakfast. Let me chill a bit. Today seems to be a thinking day. Let me finish up here, then I'll go through the intro PyTorch tutorial that I've been staring blankly for a few hours. After that I'll look into nested inference papers.
2:35pm. Let me resume. I've been in my own world today. I am not even distracted by fiction, just my own thoughts.
Today's ideas are huge. Cracking nested inference is what I need to make the agent viable. And now the path leading to the solution has been discovered. I now have a concrete goal instead of doing random prob prog work in hopes of getting inspiration.
My starting point - tabular CFR, was not wrong. Where I went wrong is betting it all on deep learning. That was not the right path to go down. Instead I should have used CFR as a model to generalize probabilistic programming. I'll have to look around to figure out whether I can adapt any of the existing libraries for nested inference. But if not, I'll just do my own thing. It is not a problem.
http://pyro.ai/examples/intro_part_ii.html#Flexible-Approximate-Inference-With-Guide-Functions
Focus me. Let me just read this.
2:45pm.
What we can do instead is use the top-level function pyro.param to specify a family of guides indexed by named parameters, and search for the member of that family that is the best approximation according to some loss function. This approach to approximate posterior inference is called variational inference.
If I want to learn about VI, I should study Pyro. It is made for that purpose.
I am not going to be able to get away with using something like SSMH. Because I no longer have local rewards, I'll have to use some kind of gradient method in order to adjust the proposal distro.
2:55pm. In addition to nested inference, I should look up Hinton's dynamic routing.
3:20pm. Ah, let me take a break. I can't focus on the PyTorch tutorial even a little.
I suppose days like these are fine on occasion. This warrants a significant change of plans. Forget Z's examples and Omega.
I am going to study and play around with variational inference in depth. I'll get some experience actually using the existing libraries.
3:55pm. Let me resume. I can't believe it is already almost 4pm.
Pyro and Gen tutorials should be my main focus for the next few days. If in the future I get paid part-time work on PPLs that is fine, but if not that is fine too. Now that I have my current ideas, the quest is alive again. I know that all I have to do is master PPLs and I will have my elite agent. This time I will succeed because not only will I have all the strengths of tabular CFR, I will be able to use PPLs to build in biases that will vastly increase the efficiency of training.
I just have to put in the work, and I will accomplish my long cherished dream.
These useless programming skills that I've been so painstakingly refining for years and years will become actual power.
4:30pm. http://pyro.ai/examples/svi_part_ii.html
4:45pm. http://pyro.ai/examples/svi_part_ii.html#Subsampling-when-there-are-only-local-random-variables
There is a lot that I do not understand about VI.
http://pyro.ai/examples/svi_part_iii.html#Variance-or-Why-I-Wish-I-Was-Doing-MLE-Deep-Learning
Variance or Why I Wish I Was Doing MLE Deep Learning
Lol.
5:20pm. http://pyro.ai/examples/svi_part_iv.html
Let me skim this and then I'll have lunch and call it a day. I can feel the hands of fate pushing at my back. It is time to get serious about the quest again. I am going to figure out the right way to make that agent. I will find a way to do it that would be good even without AI chips. I will be able to make huge strides with probabilistic programming. As long as I keep cultivating this skill, I will be able to break through to 3/5 which will be enough to make the agent, and then 4/5 which will make me a force to be reckoned with.
There is a huge amount of work waiting for me, but I will surmount it.
5:35pm. http://pyro.ai/examples/bayesian_regression.html
Let me stop the Pyro tut here.
https://arxiv.org/abs/1710.09829 Dynamic Routing Between Capsules
A capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or an object part. We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters. Active capsules at one level make predictions, via transformation matrices, for the instantiation parameters of higher-level capsules. When multiple predictions agree, a higher level capsule becomes active. We show that a discriminatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits. To achieve these results we use an iterative routing-by-agreement mechanism: A lower-level capsule prefers to send its output to higher level capsules whose activity vectors have a big scalar product with the prediction coming from the lower-level capsule.
I'll take a look at this tomorrow."
i kinda fucked storm and his friends too hard
also had 7 times peace and war with zephir during that time xd
Hi Royal Society Open whatever. Screw you for wasting my evening just for creating a useless DOI for my code. Sincerely, Yair
47458 update, revert openssl3+add wl chan skiplist
gonna use the old commit message since mainly it's a reversion back to openssl 1.1.1, thankfully. i've also added a wireless channel skiplist for each radio: enter the channels you'd like skipped, separated by a comma or semicolon, and the autochannel scan will ignore them.
this will make autochannel work nicer since many devices can't connect to DFS channels.
if you ever ditch your friends the way these guys did for that snazzy startup JAB in silicon valley, just follow the plan above to make amends:
The school gym is gutted by the fire incited by founders of the Washington Redskins Startup, with their graffiti "I'M OUT PEACE" and "SO LONG SUKKAS" scrawled on the walls that reflected their confidence at the time. The fourth graders are inside during their PE period. The girls jump rope, the boys try to make baskets through a badly damaged hoop. Other boys toss basketballs at each other, while Stan, Kyle, Cartman, and Kenny sit on the burnt bleachers.
Kyle: I don't know what we're going to do. It's been like four hours and people still won't talk to us.
Kenny: (Right. What the fuck is going on?)
Cartman: You know what we gotta do, guys? [gets off the bleachers] We've gotta throw a big fuckin' party.
Kenny: (A party?!)
Cartman: Yeah! How do you make everyone like you? You have a big party and invite everyone and then everyone thinks you're cool!
Kyle: Dude, that would have to be like, the best party ever.
Cartman: Well I'm down. Between the four of us we can throw the sweetest party ever, and these assholes won't even remember us being dicks to them.
Kyle: [joins him on the floor] Hey, that might work. But it can't be a party for us.
Cartman: Right, it's gotta be an awesome party for...
Stan: [joins them on the floor] For someone that we love who needs us and that we refuse to bail on!
Cartman: What?
Kyle: No no, he's right! We've gotta make it for someone in need so that people have to go.
Cartman: We lure people in with a cause and then hit 'em over the head with the best party ever. We're gonna have pizza and cake and a sweet band!
That was my first programming experience ever and it was amazing, but the code is awful.
random updates
- Added the new logo bumpin, and made its background kinda sassy.
- Added the funni rickroll chart. Also made its game over funni. Also also fixed its game over softlock.
- Added the new week selection asset.
- *yro'ue
- Added the missing recolors (combo numbers).
- Made lessgoooo louder.
- Made Midna's game over voicelines louder.
- Changed something in Project.xml. Added HXCPP for debugging. Removed polymod because it sucks, is unused, and crashes my build >:(
- annoying ke build watermark stuff can now be hidden with the Watermark option
the ghost tapping crash will be fixed soon
Fuckin... Uhhh Parrying Bullets and shit dude?
So uh like fuckin uhm uh yeah like parrying bullets like fuckin totally makes them at least reverse direction and shit yeah uh still haven't got the bullets stun enemies yet though yeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeup.