3,732,208 events, 1,370,210 push events, 2,148,714 commit messages, 172,775,101 characters
Network: Fixed external DNS handling in WriteConfig NameServer. Thanks Hains and Persianpros.
Various issues are fixed:
With a wired LAN (Ethernet) connection (no WLAN):
1. The DHCP router is itself the nameserver, so you could not write a configuration for any other external DNS once DHCP-Router was selected together with a static IP. We have to distinguish between a static IP with the placeholder nameserver [0, 0, 0, 0] and actually using the router's DHCP.
2. Previously, if we used a static IP and set DHCP-Router in the nameserver configuration, we would write nameserver [0, 0, 0, 0], and on a full restart the system then wrote the DNS as 0.0.0.0.
This is fixed: a static IP configuration (remember, on LAN) must use any nameserver option except DHCP-Router, because the router does not grant the IP; we write it manually.
3. Writing the nameserver config is fixed. Before, we had to press OK in the configuration and repeat the process for the indexing to be written; this is now solved.
4. We continue to use DHCP-Router if the adapter configuration is granted/activated. Even so, if we want to use an external DNS other than the one provided by our router's DHCP, we can use any of the DNS providers that our friend Hains added back in the day. (See the sketch below.)
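A rough sketch of the decision logic described in points 1-3 (Python, with hypothetical names; Enigma2's actual code differs):

    def nameserver_to_write(use_dhcp, static_nameserver):
        if use_dhcp:
            return None                     # the router's DHCP supplies DNS
        if static_nameserver == [0, 0, 0, 0]:
            # Static IP selected but "DHCP-Router" left in the nameserver
            # config: invalid, force the user to pick an explicit DNS.
            raise ValueError("static IP requires an explicit nameserver")
        return static_nameserver            # external DNS, written manually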
P.S. The behavior over Wi-Fi differs when using DHCP; the WLAN tests for this commit are stable and the various external DNS providers can be used.
Guys, observe the behavior of your ISP in case it differs from mine; with this I got stability from my ISP.
CI: Add more trustworthy link checking for pull requests (details below)
This commit adds a Ruby script and a CircleCI workflow for checking links in only the changed files on a branch. The goal is to give a clear signal if you introduce new broken links in a PR, with a minimum of false positives or (real) problems that are unrelated to your changes.
DETAILS AND CAVEATS:

- We rely on Git to decide which files we're checking, and we calculate it differently by repo (see the sketch after this list):
  - In terraform-website, we compare against the most recent common ancestor of the PR branch and master.
  - In Terraform core, not everything merges to master (due to long-lived branches like v0.14), so we need to figure out which branch you're probably PRing to first.
- This only checks links in the content area of docs pages. It doesn't check nav sidebars or the top/bottom navs.
- No special handling for the new Vercel routes, so links to those from content will be false positives. Probably revisiting this later, but it should be OK for an MVP since most docs content doesn't link heavily to the marketing pages.
- We don't implement redirects... but you should update those links to their new destinations anyway.
- There are some easy optimization targets in the script that I didn't bother reaching for; caching, basically. Easy to add later, but right now the script time is WAY overshadowed by container pull and checkout time, so who cares.
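A hedged sketch of that changed-files calculation (Python here for illustration; the real script is Ruby, and the file-extension filter is my assumption):

    import subprocess

    def changed_files(base_branch="master"):
        # Diff against the merge-base with the target branch, so we only
        # see files this PR actually touched.
        base = subprocess.check_output(
            ["git", "merge-base", "HEAD", base_branch], text=True).strip()
        out = subprocess.check_output(
            ["git", "diff", "--name-only", base, "HEAD"], text=True)
        return [f for f in out.splitlines()
                if f.endswith((".md", ".mdx", ".html.md"))]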
CONTEXT AND REASONING:
We do a global, spidering link check for terraform.io whenever we deploy the site. But a global link check sucks for pull requests:

- It catches links that have nothing to do with your PR, and which you, the PR author, might not have the power to do anything about.
- Since it has to use a one-off build of the site to check changed pages that aren't live yet, we either get false positives due to not implementing prod behaviors (redirects, new routes for marketing content) or have to re-implement those behaviors in a different stack.
- It does lots of pointless extra work.
So our current PR link checking is useless, mostly due to the "someone else did that" issue and the false positives for top nav items and redirects -- it's always red and never actionable, so we just ignore it.
After investigating a bunch of alternatives, it turned out that the easiest fix was to just write my own link checker script from semi-scratch in Ruby. I realize that sounds outrageous, but:
- If you have a decent URL library and HTML parsing/scraping library, there's not much left to write. I got the prototype working in an afternoon. Re-implementing when we switch to Next shouldn't be much worse; if anything, the JS platform's tools should be better.
- Adding Nokogiri to your dependencies is a legendary recipe for pain, but we already HAD it because of Middleman, so no extra overhead.
- On the other hand, getting any OTHER language ecosystem into our existing Middleman container would require rearchitecting that container from the ground up.
- It might have been possible to run the site build in one container and an off-the-shelf link checker in a second container, but networking two containers together in a CI job seems quite complicated.
- It might have been possible to use Vercel or Netlify to do PR previews, ping their API to wait for the preview to come up, then run a link checker in a single container. This still might be a reasonable approach for the future, but it required more platform investment (which was hard to justify for Middleman stuff).
- None of the off-the-shelf link checkers I investigated (about five or six of them) met my requirements for PR checks (accept a list of pages to check, scrape their links but don't spider any further than one level, check for broken anchors as well as 404s, etc.), so they would have required some additional tooling anyway, probably at a similar level of effort/complexity as this new script. A rough sketch of those requirements follows.
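For illustration, a minimal Python sketch of those requirements (the actual implementation is the Ruby/Nokogiri script described above): take a page, scrape its links, follow each exactly one level, and verify anchors as well as status codes.

    from html.parser import HTMLParser
    from urllib.parse import urldefrag, urljoin
    from urllib.request import urlopen

    class LinkParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links, self.ids = [], set()
        def handle_starttag(self, tag, attrs):
            a = dict(attrs)
            if tag == "a" and a.get("href"):
                self.links.append(a["href"])
            if a.get("id"):
                self.ids.add(a["id"])

    def fetch(url):
        parser = LinkParser()
        parser.feed(urlopen(url).read().decode("utf-8", "replace"))
        return parser

    def check_page(page_url):
        broken = []
        for href in fetch(page_url).links:      # scrape this page's links...
            url, anchor = urldefrag(urljoin(page_url, href))
            if not url.startswith(("http://", "https://")):
                continue
            try:
                target = fetch(url)             # ...follow exactly one level
                if anchor and anchor not in target.ids:
                    broken.append(f"{href}: missing anchor #{anchor}")
            except OSError as err:              # 404s and friends
                broken.append(f"{href}: {err}")
        return broken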
I hate this stupid crater
Heat timers were a mistake
Shuffles some stuff around for ER reasons and adds a missing megaflip. I'm doing more here later. I probably just need to sort DMC out and fix it all, because there's some very weird shit and I think something is connected in a way it shouldn't be.
Also updates Zora areas to add some missing stuff.
Next line to update is 1537, 101 in the pastebin
not fucking happy
- first of all, this doesn't compile. you should NOT be using JavaFX
- second of all, there are so many comments, for so much obvious shit
- third of all, the formatting of the comments is so inconsistent it's ugly
- fourth, keep the packages consistent with other mods; this file is still a mess
USB: cdc-wdm: Make wdm_flush() interruptible and add wdm_fsync().
commit 37d2a36394d954413a495da61da1b2a51ecd28ab upstream.
syzbot is reporting a hung task at wdm_flush() [1]: there is a circular dependency where wdm_flush() from filp_close() for /dev/cdc-wdm0 waits forever for /dev/raw-gadget to be closed, while close() for /dev/raw-gadget cannot be called unless close() for /dev/cdc-wdm0 completes.
Tetsuo Handa considered such a circular dependency a usage error [2] corresponding to unresponsive broken hardware [3]. But Alan Stern responded that we should be prepared for such hardware [4]. Therefore, this patch changes wdm_flush() to use wait_event_interruptible_timeout(), which gives up after 30 seconds, since hardware that remains silent must be ignored. The 30 seconds are coming out of thin air.
Changing wait_event() to wait_event_interruptible_timeout() makes error reporting from the close() syscall less reliable. To compensate, this patch also implements wdm_fsync(), which does not use a timeout. Those who want to be very sure that data has gone out to the device are now advised to call fsync(), with the caveat that fsync() can return -EINVAL when running on older kernels which do not implement wdm_fsync().
This patch also fixes three more problems (listed below) found during exhaustive discussion and testing.
Since multiple threads can concurrently call wdm_write()/wdm_flush(), we need to use wake_up_all() whenever clearing WDM_IN_USE in order to make sure that all waiters are woken up. Also, error reporting needs to use a fetch-and-clear approach in order not to report the same error multiple times.
Since wdm_flush() checks WDM_DISCONNECTING, wdm_write() should as well check WDM_DISCONNECTING.
In wdm_flush(), since locks are not held, it is not safe to dereference desc->intf after checking that WDM_DISCONNECTING is not set [5]. Thus, remove dev_err() from wdm_flush().
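As a hedged illustration of the timeout and the wake-all/fetch-and-clear behavior described above (in Python threading terms rather than kernel C; names are only analogies to the driver's flags):

    import threading

    cond = threading.Condition()
    in_use = False       # analogue of WDM_IN_USE
    pending_error = 0    # set by the completion path, consumed by flush()

    def flush(timeout=30):
        # Analogue of wdm_flush(): wait, with a timeout, until the write
        # completes, then report any pending error exactly once.
        global pending_error
        with cond:
            if not cond.wait_for(lambda: not in_use, timeout=timeout):
                return -110   # -ETIMEDOUT: silent hardware is given up on
            err, pending_error = pending_error, 0    # fetch-and-clear
            return err

    def write_complete(err=0):
        # Analogue of the completion callback clearing WDM_IN_USE.
        global in_use, pending_error
        with cond:
            in_use = False
            pending_error = err
            cond.notify_all()   # wake_up_all(): every flusher must see it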
[1] https://syzkaller.appspot.com/bug?id=e7b761593b23eb50855b9ea31e3be5472b711186 [2] https://lkml.kernel.org/r/[email protected] [3] https://lkml.kernel.org/r/[email protected] [4] https://lkml.kernel.org/r/[email protected] [5] https://lkml.kernel.org/r/[email protected]
Reported-by: syzbot [email protected] Cc: stable [email protected] Co-developed-by: Tetsuo Handa [email protected] Signed-off-by: Tetsuo Handa [email protected] Signed-off-by: Oliver Neukum [email protected] Cc: Alan Stern [email protected] Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman [email protected]
sched/tune: Switch Dynamic Schedtune Boost to a slot-based tracking system
Switch from a counter-based system to a slot-based system for managing multiple dynamic Schedtune boost requests.
The primary limitation of the counter-based system was that it could only keep track of two boost values at a time: the current dynamic boost value and the default boost value. When more than one boost request is issued, the system would only remember the highest value of them all. Even if the task that requested the highest value had unboosted, this value is still maintained as long as there are other active boosts still running. A more ideal outcome would be for the system to unboost to the maximum boost value of the remaining active boosts.
The slot-based system provides a solution to the problem by keeping track of the boost values of all ongoing active boosts. It ensures that the current boost value will be equal to the maximum boost value of all ongoing active boosts. This is achieved with two linked lists (active_boost_slots and available_boost_slots), which assign and keep track of boost slot numbers for each successful boost request. The boost value of each request is stored in an array (slot_boost[]), at an index value equal to the assigned boost slot number.
For now we limit the number of active boost slots to 5 per Schedtune group.
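A user-space sketch of the slot scheme (Python; the kernel keeps two linked lists and an array per Schedtune group, so the container choices here are illustrative only):

    from collections import deque

    MAX_SLOTS = 5   # matches the per-group limit above

    class BoostGroup:
        def __init__(self):
            self.available_boost_slots = deque(range(MAX_SLOTS))
            self.active_boost_slots = set()
            self.slot_boost = [0] * MAX_SLOTS

        def boost(self, value):
            # Claim a slot for a boost request; return its slot number.
            if not self.available_boost_slots:
                return -1               # all 5 slots are in use
            slot = self.available_boost_slots.popleft()
            self.active_boost_slots.add(slot)
            self.slot_boost[slot] = value
            return slot

        def unboost(self, slot):
            # Release a slot; the effective boost drops to the max of the
            # remaining active requests instead of sticking at the old peak.
            self.active_boost_slots.discard(slot)
            self.available_boost_slots.append(slot)

        @property
        def current_boost(self):
            return max((self.slot_boost[s] for s in self.active_boost_slots),
                       default=0)       # default boost when nothing is active

    g = BoostGroup()
    a = g.boost(30); b = g.boost(10)
    g.unboost(a)
    assert g.current_boost == 10   # unboosts to the next-highest active boost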
Signed-off-by: joshuous [email protected] Signed-off-by: Henrique Pereira [email protected] Signed-off-by: PainKiller3 [email protected] Signed-off-by: sajidshahriar72543 [email protected]
Ok, something useful on the way. Upgrading Rhun pikes is a fucking pain in the ass because the commandsets don't match.
- Reordered all Mordor horde commandsets so they can be upgraded at once with the same buttons.
More useful feedback from a long game. - Thornduil thorn of vengeance reload time is now 90, from 180. // Played with 140; the idea was for the cooldown to be short, since the total damage is pretty low.
Fixed the settings issue (#113)

- Fixed the settings issue (now also offering a portable mode), using the .NET Core embedded config-management feature + the WPF designer (best of both worlds). Option descriptions will be improved in a later change (along with UI improvements).
- Small fix for an NPE on defaultInstance when no settings.json file exists.
- Looks like it was too early to call the previous commit a fix. That one actually fixes the issue. Thanks baby girl for the awful night :-D
- Small fix for settings.json not opening
Co-authored-by: harrwiss [email protected]
I am crazy and obsessive and not a normal person; sorry for that, my nice friend
--water----normal life -- --single life, how long will it continue --No matter how sad the life, remember you are a pretty unique boy --fight, young man; the future may be darkness, but don't be afraid; remember, the beauty will always wait for you somewhere not far away --2020-12-15
maintenance: use launchctl on macOS
The existing mechanism for scheduling background maintenance is done through cron. The 'crontab -e' command allows updating the schedule while cron itself runs those commands. While this is technically supported by macOS, it has some significant deficiencies:
-
Every run of 'crontab -e' must request elevated privileges through the user interface. When running 'git maintenance start' from the Terminal app, it presents a dialog box saying "Terminal.app would like to administer your computer. Administration can include modifying passwords, networking, and system settings." This is more alarming than what we are hoping to achieve. If this alert had some information about how "git" is trying to run "crontab" then we would have some reason to believe that this dialog might be fine. However, it also doesn't help that some scenarios just leave Git waiting for a response without presenting anything to the user. I experienced this when executing the command from a Bash terminal view inside Visual Studio Code.
-
While cron initializes a user environment enough for "git config --global --show-origin" to show the correct config file information, it does not set up the environment enough for Git Credential Manager Core to load credentials during a 'prefetch' task. My prefetches against private repositories required re-authenticating through UI pop-ups in a way that should not be required.
The solution is to switch from cron to the Apple-recommended [1] 'launchd' tool.
The basics of this tool are that we need to create XML-formatted "plist" files inside "~/Library/LaunchAgents/" and then use the 'launchctl' tool to make launchd aware of them. The plist files include all of the scheduling information, along with the command-line arguments split across an array of <string> tags.
For example, here is my plist file for the weekly scheduled tasks:
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <key>Label</key>
        <string>org.git-scm.git.weekly</string>
        <key>ProgramArguments</key>
        <array>
            <string>/usr/local/libexec/git-core/git</string>
            <string>--exec-path=/usr/local/libexec/git-core</string>
            <string>for-each-repo</string>
            <string>--config=maintenance.repo</string>
            <string>maintenance</string>
            <string>run</string>
            <string>--schedule=weekly</string>
        </array>
        <key>StartCalendarInterval</key>
        <dict>
            <key>Day</key>
            <integer>0</integer>
            <key>Hour</key>
            <integer>0</integer>
            <key>Minute</key>
            <integer>0</integer>
        </dict>
    </dict>
    </plist>
The schedules for the daily and hourly tasks are more complicated since we need to use an array for the StartCalendarInterval with an entry for each of the six days other than the 0th day (to avoid colliding with the weekly task), and each of the 23 hours other than the 0th hour (to avoid colliding with the daily task).
The "Label" value is currently filled with "org.git-scm.git.X" where X is the frequency. We need a different plist file for each frequency.
The launchctl command needs to be aligned with a user id in order to initialize the command environment. This must be done using the 'launchctl bootstrap' subcommand. This subcommand is new as of macOS 10.11, which was released in September 2015. Before that release the 'launchctl load' subcommand was recommended. The best source of information on this transition I have seen is available at [2]. The current design does not preclude a future version that detects the available features of 'launchctl' to use the older commands. However, it is best to rely on the newest version since Apple might completely remove the deprecated version on short notice.
[2] https://babodee.wordpress.com/2016/04/09/launchctl-2-0-syntax/
To remove a schedule, we must run 'launchctl bootout' with a valid plist file. We also need to 'bootout' a task before the 'bootstrap' subcommand will succeed, if such a task already exists.
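A hedged sketch of that boot-out-then-bootstrap sequence (Python subprocess; the gui/<uid> service-target form is my assumption of the usual launchctl syntax, not a quote of git's code):

    import os
    import subprocess

    def reschedule(plist_path):
        target = f"gui/{os.getuid()}"          # this is why we need 'id -u'
        # Boot out any stale definition first; ignore failure if none exists.
        subprocess.run(["launchctl", "bootout", target, plist_path],
                       check=False)
        subprocess.run(["launchctl", "bootstrap", target, plist_path],
                       check=True)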
The need for a user id requires us to run 'id -u', which works on POSIX systems but not Windows. Further, the need for fully-qualified path names including $HOME behaves differently in the Git internals and the external test suite. The $HOME variable starts with "C:..." instead of the "/c/..." that is provided by Git in these subcommands. The test therefore has a prerequisite that we are not on Windows. The cross-platform logic still allows us to test the macOS logic on a Linux machine.
We can verify the commands that were run by 'git maintenance start' and 'git maintenance stop' by injecting a script that writes the command-line arguments into GIT_TEST_MAINT_SCHEDULER.
An earlier version of this patch accidentally had an opening "<array>" tag when it should have had a closing "</array>" tag. This was caught during manual testing with actual 'launchctl' commands, but we do not want to update developers' tasks when running tests. It appears that macOS includes the "xmllint" tool which can verify the XML format. This is useful for any system that might contain the tool, so use it whenever it is available.
We strive to make these tests work on all platforms, but Windows caused some headaches. In particular, the value of getuid() called by the C code is not guaranteed to be the same as $(id -u) invoked by a test. This is because git.exe is a native Windows program, whereas the utility programs run by the test script mostly utilize the MSYS2 runtime, which emulates a POSIX-like environment. Since the purpose of the test is to check that the input to the hook is well-formed, the actual user ID is immaterial; thus we can work around the problem by making the test UID-agnostic. Another subtle issue is the $HOME environment variable being a Windows-style path instead of a Unix-style path. We can be more flexible here instead of expecting exact path matches.
Helped-by: Ævar Arnfjörð Bjarmason [email protected] Co-authored-by: Eric Sunshine [email protected] Signed-off-by: Eric Sunshine [email protected] Signed-off-by: Derrick Stolee [email protected]
fix fucking character list opcode name, i hate my life; update decoder file appearance
knife consistency tweaks
🆑 tweak: Folding knives are no longer slightly worse than fixed-blade knives tweak: The steel lightweight utility knife no longer pretends to be made of titanium /🆑
The primary result of this PR is that all loadout-spawned knives (combi-knives included) will have equal damage output.
One of the examples given in the PR creating the cooldown modifier was making folding knives comparable to their fixed-blade variants. However, this was only applied for combat folding knives. This tweak makes all folding knives comparable to their fixed-blade variants in damage output. For example, the peasant knife becomes equal to the utility knife. Specifically, the -1 modifier counteracts the +1 w_class change when the knife is unfolded.
It is obviously supposed to be made of titanium, but in game it appears as "steel lightweight utility knife" because codewise it is actually made of steel. It's just had its recycling material and description overridden. While I would very much like to, I cannot in good faith make it into a real titanium knife, as that would give it a significant damage boost unbefitting of a loadout-spawned item.
Combi-knives deal identical damage regardless of which of the sharp tools you use. Large knife, small knife, wood saw, glass cutter. All the same. It's just a really fancy folding knife... unless you want to kill a window or grille. Then definitely use the glass cutter. That thing does stupid bonus damage against windows.
...this whole thing started as a code dive that spiraled out of control when I wondered if diamond-tipped spears were comparable to plasteel spears. The answer is yes, they are almost exactly identical.
Typo
Your portfolio website is just 🔥. I love it. Thanks for the inspiration.
Sorry for the long commit message. I realise this would have been easier with multiple commits for the different parts of my code, like I would do in a professional environment. I haven't coded using Phoenix before, so I ended up breaking a lot of things, and just making everything work was an accomplishment in itself.

Objectives:

Frontend:
- added a transaction modal to add transactions
- added a transaction table to show the transactions
- added the Bulma CSS library to give the site style and life
- added a UI and generated several GraphQL hooks to query and mutate the database
- added currency-rate functionality to allow users to convert the currency in the transactions table; this uses an API to pull the latest currency conversion rates

Backend:
- added a method to seed the database using the seeds.ex file; however, it only seeds companies, users, and merchants, so the UI should be used to create transactions
- added the companies schema
- added an amount type in ./homework/transactions/amount to allow users to enter a transaction with decimals

Bonus:
- fixed the bug in transactions to allow users to add credit and update to credit; I resolved this bug by adding credit to the transactions changeset
- fixed (this may or may not be a bug): in the Elixir router you have it set up to accept requests from '/graphiql', but on the frontend the Apollo client is set up to request from '/graphql'

Notes: Elixir and GraphQL were fairly new to me, so I may have done some things without using best practices, or done them the hardest way. I did this project in Elixir vs Node to show that I'm a quick learner, not my experience with Elixir, so keep that in mind. One last item: I spent quite a bit of time on this project but can only spend so much time, so there is a lot of functionality missing, mainly the error catching that I would always have on a production site; just keep that in mind. Other than that, enjoy, and feel free to ask me questions regarding this project when we chat.
damn man i just want grill and they took it away, damn them all!!! (#55024)
Whoever the damn bastard is who tried to take our grills. know this: we will not bend over while you take the only thing that still makes us happy. We just want to drink energy drinks and grill some hamburgers in peace. This is your final warning, try doing this again and we will assemble a lawn mower squad outside of your house at the early hours of 6AM to make sure you cannot enjoy your sunday morning in peace.
Newfood compatibility is now included by checking IS_EDIBLE() and removing the shitty foodtype that was there before
Logs pockets, updates some shitty stripping verbiage (#55027)
Two things at hand here. A: Pockets were not logged at all. I hate god. B: I'm using log_message here because it gives me the freedom to be more grammatically correct. Please attack my spelling and offer suggestions of other mob strip panel things to log in the comments. B.5: I updated stripping to use log_message for the same reasons.
ARM: 7449/1: use generic strnlen_user and strncpy_from_user functions
This patch implements the word-at-a-time interface for ARM using the same algorithm as x86. We use the fls macro from ARMv5 onwards, where we have a clz instruction available which saves us a mov instruction when targeting Thumb-2. For older CPUs, we use the magic 0x0ff0001 constant. Big-endian configurations make use of the implementation from asm-generic.
With this implemented, we can replace our byte-at-a-time strnlen_user and strncpy_from_user functions with the optimised generic versions.
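As a toy illustration of the word-at-a-time idea (a Python model, not the ARM assembly or the generic C), a strnlen-style scan checks eight bytes per iteration instead of one:

    ONES, HIGHS = 0x0101010101010101, 0x8080808080808080

    def strnlen_word_at_a_time(buf, limit):
        for off in range(0, limit, 8):
            # Load 8 bytes per iteration; pad with 0xff so padding never
            # looks like a NUL byte.
            word = int.from_bytes(buf[off:off + 8].ljust(8, b"\xff"), "little")
            zeros = (word - ONES) & ~word & HIGHS   # 0x80 in each zero lane
            if zeros:
                return min(off + ((zeros & -zeros).bit_length() - 1) // 8,
                           limit)
        return limit

    assert strnlen_word_at_a_time(b"hello\0world", 64) == 5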
Reviewed-by: Nicolas Pitre [email protected] Signed-off-by: Will Deacon [email protected] Signed-off-by: Russell King [email protected]
lib: Sparc's strncpy_from_user is generic enough, move under lib/
To use this, an architecture simply needs to:

- Provide a user_addr_max() implementation via asm/uaccess.h
- Add "select GENERIC_STRNCPY_FROM_USER" to their arch Kconfig
- Remove the existing strncpy_from_user() implementation and symbol exports their architecture had
Signed-off-by: David S. Miller [email protected] Acked-by: David Howells [email protected]
lib: add generic strnlen_user() function
This adds a new generic optimized strnlen_user() function that uses the <asm/word-at-a-time.h> infrastructure to portably do efficient string handling.
In many ways, strnlen is much simpler than strncpy, and in particular we can always pre-align the words we load from memory. That means that all the worries about alignment etc are a non-issue, so this one can easily be used on any architecture. You obviously do have to do the appropriate word-at-a-time.h macros.
Signed-off-by: Linus Torvalds [email protected]
kernel: Move REPEAT_BYTE definition into linux/kernel.h
And make sure that everything using it explicitly includes that header file.
Signed-off-by: David S. Miller [email protected]
word-at-a-time: make the interfaces truly generic
This changes the interfaces in <asm/word-at-a-time.h> to be a bit more complicated, but a lot more generic.
In particular, it allows us to really do the operations efficiently on both little-endian and big-endian machines, pretty much regardless of machine details. For example, if you can rely on a fast population count instruction on your architecture, this will allow you to make your optimized <asm/word-at-a-time.h> file with that.
NOTE! The "generic" version in include/asm-generic/word-at-a-time.h is not truly generic, it actually only works on big-endian. Why? Because on little-endian the generic algorithms are wasteful, since you can inevitably do better. The x86 implementation is an example of that.
(The only truly non-generic part of the asm-generic implementation is the "find_zero()" function, and you could make a little-endian version of it. And if the Kbuild infrastructure allowed us to pick a particular header file, that would be lovely)
The <asm/word-at-a-time.h> functions are as follows:

- WORD_AT_A_TIME_CONSTANTS: specific constants that the algorithm uses.

- has_zero(): takes a word, and determines if it has a zero byte in it. It gets the word, the pointer to the constant pool, and a pointer to an intermediate "data" field it can set.

  This is the "quick-and-dirty" zero tester: it's what is run inside the hot loops.

- "prep_zero_mask()": takes the word, the data that has_zero() produced, and the constant pool, and generates an exact mask of which byte had the first zero. This is run directly outside the loop, and allows the "has_zero()" function to answer the "is there a zero byte" question without necessarily getting exactly which byte is the first one to contain a zero.

  If you do multiple byte lookups concurrently (eg "hash_name()", which looks for both NUL and '/' bytes), after you've done the prep_zero_mask() phase, the result of those can be or'ed together to get the "either or" case.

- The result from "prep_zero_mask()" can then be fed into "find_zero()" (to find the byte offset of the first byte that was zero) or into "zero_bytemask()" (to find the bytemask of the bytes preceding the zero byte).

  The existence of zero_bytemask() is optional, and is not necessary for the normal string routines. But dentry name hashing needs it, so if you enable DENTRY_WORD_AT_A_TIME you need to expose it.
This changes the generic strncpy_from_user() function and the dentry hashing functions to use these modified word-at-a-time interfaces. This gets us back to the optimized state of the x86 strncpy that we lost in the previous commit when moving over to the generic version.
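A toy little-endian model of these interfaces (Python, for illustration only; the real versions are per-architecture C macros, and the mask semantics here are simplified), including the hash_name()-style OR-ing of two prepared masks:

    ONES, HIGHS = 0x0101010101010101, 0x8080808080808080
    M64 = (1 << 64) - 1   # Python ints are unbounded; emulate 64-bit words

    def has_value(word, byte):
        # has_zero() generalized: XOR maps the byte we look for to zero;
        # a zero lane then produces 0x80 in that lane.
        w = word ^ (ONES * byte)
        return ((w - ONES) & ~w & HIGHS) & M64

    def prep_zero_mask(bits):
        # Simplified: keep only the first (lowest-addressed) matching lane.
        return (bits & -bits) & M64

    def find_zero(mask):
        return (mask.bit_length() - 1) // 8   # byte offset of the match

    word = int.from_bytes(b"usr/bin\0", "little")
    either = (prep_zero_mask(has_value(word, 0)) |
              prep_zero_mask(has_value(word, ord("/"))))
    assert find_zero(prep_zero_mask(either)) == 3   # stops at the '/' first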
Signed-off-by: Linus Torvalds [email protected]
ARM: 7450/1: dcache: select DCACHE_WORD_ACCESS for little-endian ARMv6+ CPUs
DCACHE_WORD_ACCESS uses the word-at-a-time API for optimised string comparisons in the vfs layer.
This patch implements support for load_unaligned_zeropad for ARM CPUs with native support for unaligned memory accesses (v6+) when running little-endian.
Change-Id: Ifdf8207f2581f93870eb0e627f5d12f97c4be7cc Reviewed-by: Nicolas Pitre [email protected] Signed-off-by: Will Deacon [email protected] Signed-off-by: Russell King [email protected] Signed-off-by: franciscofranco [email protected]
Conflicts:
arch/arm/lib/Makefile
automation dna viruses books cloud-computing cryptocurrencies ethereum databases postgresql prometheus design-inspiration user-experience observability distributed-systems finance investing learning chess minecraft risc-v hardware coffee keyboards internationalization xcode macOS datasets pytorch machine-learning generative-adversarial-networks neural-networks management geometry lean math type-theory music-production graphql nginx rabbitmq nlp github-actions github docker kubernetes homekit ios linux funny brew nix podcasts clojure-libraries clojure clojurescript coq cpp-libraries fsharp go haskell-libraries haskell javascript js-libraries react-components react-hooks react-native threejs vue julia-libraries julia scheme nim python-libraries rust-libraries swiftui typescript functional-programming interactive-computing json programming software-architecture software-testing research-papers blogs sleep streaming emacs vim-plugins roam-research tools twitter unix nodejs markdown
"11:05am. Yesterday I had to stop 5 chapter till the end of vol 5 as I was too tired. Let me read them now.
12:05pm. That was too good. I love how every volume end is such a slaughterfest. I am going to give Reverend Insanity a pause for a month. 1022 chapters is enough.
12:25pm. Let me have breakfast here.
12:55pm. Let me do the chores here. After that I'll get started on the docs.
1:40pm. Done with chores. Now I can finally get started.
Let me turn off the router.
1:45pm. I need to gather my thoughts for a bit. The next part of the docs is serialization. I need two different kinds - one for regular pickling and one that is like the scheme I used in the old Spiral for passing data to the NNs.
This will take language changes so I am feeling pressured. I won't be able to do all of this today. Instead I will have to give it more time.
I need to steel my mind. Forget the future. Forget the chips. Once I can do serialization in all its forms, it is then that Spiral will be truly complete.
1:50pm. I keep thinking back to GADTs and am feeling confused as to why I decided to leave them out. Only by looking back in the journal have I been reminded about the impossible cases. Yeah, now I remember it. GADTs would require dependent pattern matching. I completely forgot about this. Great, now I can put it out of mind again.
1:55pm. I am going to have to redesign the language a little in order to make it more capable at serialization.
2pm. I really do not feel like starting work right away at all. I just want to sharpen my intent today.
After I do serialization, everything that I wanted to cover in the docs will have been covered. Past this point there should be no more extensions to the partial evaluator. And the language has been put through its paces.
I am making strong progress.
v2 is indeed harder than v0.09. If I had to recreate all of this from scratch it would take me at least 3 months. But even without experience, the design is good enough that it can be considered a man year of work. This meets one of my goals. If Spiral was 10 man years, it would not be tractable.
2:05pm. Let me take a short break here.
2:20pm. I am back.
2:25pm. Why am I having trouble starting here?
Because the docs are almost over. After the next step is made, it will almost be time to start the sponsor search. I'll deal with serialization, but putting in the comments is a UI change instead of a language change.
2:30pm. Sigh. I started the docs 1.5 weeks ago, and said that it would be a long journey that would take the whole of January, but now I can already see the end in sight.
Let me go to bed for a while here. Then I will gather my resolve and start work on serialization. Once I start, it will become my singular obsession until it is done to satisfaction. I won't cut any corners here.
I've really been indulging myself lately and it has slackened my spirit. I need to take some time off from the screen in order to remember the desire that drives me.
2:40pm. I feel a restlessness that fatigues me. The work becomes routine. At first it keeps you going, but then it becomes a harpy screaming at you that unless you work, you are worthless.
Every once in a while, it is good to take some time off in order to remember why you are working in the first place.
All this time I've been working on the docs, I've been working on the language, and I've been thinking what my principles should be when it comes to future social interaction. But today I should take some time to internalize the feeling of accomplishment."
"4:55pm. I've gathered my determination.
My worst enemies right now are my regrets and my loneliness. It really struck me during the recent chapters of RI how well the split souls of the Spectral Soul Demon Venerable got along. If he killed all the humans and replaced them with soul clones, he could create a utopia, at least relative to the low-trust human society in that setting. It is highly ironic given his nature that he could put himself in a position where he could target such a goal.
In real life, the biggest obstacle to immortality is that death leads to the loss of all the skills and memories of a person. But surprisingly the setting has souls, and there are Gu which can transfer skills and memories.
Seeing those guys sacrifice themselves for one another, pursue the goal of opposing heaven and having a connection to their main body really strikes at my weakness and a deeply held desire.
Compared to them, the ordinary humans are just vermin.
5pm. One thing I particularly find hard to believe in RI is that killing Gu masters leads to them self-destructing the Gu on their person. I really find it hard to believe that the guy would use the one second he has left to live on an act of spite towards his enemy instead of struggling to live longer. It would be more realistic if the author used some kind of self-destruct Gu as a prop instead.
This kind of mentality is just too hardcore even in a gritty setting such as RI's.
5:05pm. Back to my loneliness - I really did have fun in 2018 when I exceeded my old efforts from 2016.
What are bonds between people - ultimately feelings. Souls do not exist in the real world; there isn't some object that is you. But there is a concept of self in one's mind. This is a key point of why the self improvement loop is possible once you throw out the misconceptions about what the self should be.
The secret of consciousness will never be discovered. Instead the only thing that exists is the perception of it. And perception is what can change.
Right now I am wallowing in loneliness, but ultimately that is just perception. And since it is that, I can only ask myself how to close the perceptual gap with the agents I am making.
I've been thinking about the future, and after a year of work, I will be able to fully implement the agents using the new chips. The Spiral will be there, the chips will be there, I will have the monetary resources that I need.
But still, the way ML currently works is too shallow to really become 'bonded' with the agents. Right now they are just simple optimization processes and the principles of hierarchical learning have barely been touched upon. The understanding is missing.
I imagine how such a hierarchy would look, and I am sure that the agents within it would have a strong perception of unity and purpose.
Even in my regular programming I've felt such a trend. The stronger my skills as a programmer, the closer I feel to the machine. Even just the lackluster learning capabilities and algorithms of today are a huge step up from manually specifying things.
Why does my body feel like it is my own? What makes the arms that I can move, a part of myself?
What you can control is ultimately what you can bond with. What helps you is what you love.
If I attain a greater level as a programmer, the bond with machines will become stronger and I will be less lonely as a result. I'll be able to learn a lot from watching machines control other machines. That is the level that lies beyond the current one in AI.
5:20pm. This task of creating Spiral is just me trying to create the smallest possible lever that can move the world.
This is just programming on the surface.
The real programming will be about putting the agents on a right trajectory and playing with them. The real programming today is the process of learning that I am going through.
5:25pm. A part of me is thinking - will this really work out right?
This is just the voice of my dejection.
I should not be ashamed of my greed. I will get the monetary rewards from following my path.
And I will have fun exceeding my old highs.
Spiral is the true language of computation. I won't regret choosing to master it. It is the others who should regret not making it.
5:30pm. I can't let this minor serialization challenge stop me here. The fun stuff will soon come up. Just being able to get to the top of the present level would give me so many benefits.
Absolute freedom is what I should strive towards, not getting a job. I won't get freedom from not working. I won't get freedom from just working either. Freedom is following one's own path.
Let me add a few paragraphs to the docs and then I will start work on the first serialization library. I'll make it a test.
6:45pm. Just finished lunch.
Do I want to continue at this stage? I put down a few paragraphs. I am actually surprised myself that the flow is leading me into doing pickler combinators. But that is good.
6:55pm. Let me take a break here. Maybe I'll do some programming later.
7pm. Let me see if I can draw out something more from my mind.
7:35pm. Let me stop here at 2.5k. I can't go on anymore. I did get a bit extra out.
7:40pm. This is the usual pattern for me. Whenever I need to start something big that I haven't done before, I need to spend time gathering my motivation. If I encounter something similar in the future I will be able to benefit from the experience.
Serialization is worth tackling seriously as I will definitely be using it in the future. If not me, then the users, and the way to implement serialization will be an excellent reference.
7:45pm. I should in fact fit both serialization examples into the docs.
7:55pm. I'll close here as I am tired, but now I do feel stoked at taking on this challenge. I should be able to properly take it on tomorrow. In a few days, both the relevant sections in the docs and the examples themselves will be done, along with extending the language. After that will come proofreading, and filling out any of the things I've missed so far. I'll add some links to F# tutorials for the beginners, the kind of programmer who has never heard of union types or pattern matching before.
8pm. Tomorrow, I will definitely finish the pickler combinator section at least. But what kind of ops I should add to the language is still vague to me. One of the reasons I've been tense today is because I am still running that through my mind. My vision should clear up once I get the rest out of the way."
FINALLY DONE FUCK YOU I CANT STAND THIS SHIT ANYMORE
system setup ready; now I just gotta basically actually code the game, god fucking damn it
[android] Migrate installation identifier to non-backed-up storage (#11005)
expo/expo#10261 (comment). Also fixes expo/expo#11008 by making expo-notifications use the same installation ID as expo-constants and expoview (https://github.com/expo/expo/pull/11005/commits/f1ecd07d586094fb9494a5e6584e7d01e419aa48).
Found a (in my opinion) nicer way to store a string in non-backed-up storage (than defining a <full-backup-content> XML file and requiring developers to implement their own BackupAgent in some circumstances, etc.): using getNoBackupFilesDir to get a directory where we create a simple .txt file. The advantage it provides is not requiring developers to modify any native files to incorporate this feature.
I wrote a class that tries to migrate the UUID from SharedPreferences to noBackupFilesDir on a getUUID call. It should handle invalid UUIDs well (by ignoring them). Then I copied it from expoview to expo-constants and expo-notifications in case there are bare projects that use one and not the other (we don't want to depend on migration in -constants in -notifications and vice versa). It follows the implementation outline of expo/expo#10261 (comment) with the following modifications (sketched after this list):

- Instead of removing the SharedPreferences/keychain entry "so we recover from corrupt data", I decided to ignore it: if we didn't read a valid ID, and we wouldn't intend to create one if it wasn't present, we don't immediately generate a new one. There are still parts of the code that do not "get-or-create" the UUID, just "get". For them we need a sensible "just-get" implementation.
- Instead of computing the v5 UUID from the Firebase Instance ID (as proposed by the main overview), I've decided to stick with a random v4 UUID, since fetching the Firebase Instance ID starts some kind of connection with Firebase servers. It may also require a configured Firebase app (but I haven't verified that), and even though installationId is mostly used in expo-notifications, we don't say anywhere that accessing expo-constants.installationId requires Firebase to be configured.
- Instead of using SharedPreferences, I decided to save the file in noBackupFilesDir, which seems less breakable than using SharedPreferences and configuring full-backup-content.
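A platform-neutral sketch of the resulting flow (Python; the real code is Android Java/Kotlin, and the file name and legacy store here are hypothetical stand-ins for SharedPreferences and getNoBackupFilesDir()):

    import uuid
    from pathlib import Path

    NO_BACKUP_FILE = Path("no_backup/installation-id.txt")  # hypothetical
    LEGACY_STORE = {}   # stands in for the SharedPreferences entry

    def _valid(s):
        try:
            return str(uuid.UUID(s))
        except (ValueError, TypeError):
            return None     # invalid/corrupt IDs are ignored, not deleted

    def get_uuid():
        # "Just-get": migrate a valid legacy ID, but never invent a new one.
        if NO_BACKUP_FILE.exists():
            current = _valid(NO_BACKUP_FILE.read_text().strip())
            if current:
                return current    # invalid file contents fall through
        legacy = _valid(LEGACY_STORE.get("uuid"))
        if legacy:
            NO_BACKUP_FILE.parent.mkdir(parents=True, exist_ok=True)
            NO_BACKUP_FILE.write_text(legacy)   # migrate to non-backed-up dir
        return legacy

    def get_or_create_uuid():
        existing = get_uuid()
        if existing:
            return existing
        fresh = str(uuid.uuid4())   # random v4, no Firebase round-trip
        NO_BACKUP_FILE.parent.mkdir(parents=True, exist_ok=True)
        NO_BACKUP_FILE.write_text(fresh)
        return fresh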
Another option I was thinking of was to create a new unimodule expo-installations (expo-installation-id) just for this class and depend on the new unimodule in expoview, expo-constants and expo-notifications. Since we intend to deprecate and eventually remove .installationId, creating a unimodule just for half a year and deprecating it immediately doesn't seem like the best idea.
I have verified that:

- Constants.installationId from running Expo client on master is the same as .installationId returned when running Expo client on this branch
- expo-notifications's installation ID from running Expo client on master is the same as the installationId returned when running Expo client on this branch
- removing and reinstalling Expo client sets a different installationId
- modifying the file so that it does not contain a valid UUID discards its contents and persists a new UUID
Test scenarios for installation identifiers:

- on an experience running on SDK39 when Expo client upgrades
  - SDK39 ConstantsBinding keeps using mExponentSharedPreferences.getOrCreateUUID. Unversioned ExponentSharedPreferences migrates the UUID from unscoped SharedPreferences to unscoped non-backed-up storage. No change. ✅
  - SDK39 InstallationIdProvider keeps using scoped SharedPreferences. Migration isn't being added to versioned InstallationIdProviders; the identifier keeps being backed up, but it doesn't change. ⚠️
- when an experience using SDK39 upgrades to SDK40 in Expo client
  - Both SDK39 and SDK40 ConstantsBindings use mExponentSharedPreferences.getOrCreateUUID, which uses migrated non-backed-up storage. No change. ✅
  - SDK39 InstallationIdProvider used scoped SharedPreferences; SDK40 ScopedInstallationIdProvider uses mExponentSharedPreferences.getOrCreateUUID if there is no existing ID (new project), or migrates the legacy UUID from scoped SharedPreferences to the scoped no-backup dir if it exists and keeps using it in the future. All in all there's no change. ✅
- when a standalone app using SDK39 upgrades to SDK40
  - Both SDK39 and SDK40 ConstantsBindings use mExponentSharedPreferences.getOrCreateUUID, which uses migrated non-backed-up storage. No change. ✅
  - SDK39 InstallationIdProvider had the ID saved in scoped SharedPreferences; SDK40 InstallationIdProvider migrates that ID to the scoped noBackupDir. No change. ✅
- when an SDK39 project ejects to bare
  - SDK39 ConstantsBinding was using mExponentSharedPreferences.getOrCreateUUID, which persisted the ID in unscoped SharedPreferences. Upon ejection we start using ConstantsService with an unscoped Context, which results in using the same SharedPreferences. No change. ✅
  - SDK39 InstallationIdProvider uses scoped SharedPreferences to persist the installation ID. Upon ejection we start using unscoped SharedPreferences; the ID changes to one equal to Constants.installationId. ⚠️ We can live with that.
- when an SDK40 project ejects to bare
  - SDK40 ConstantsBinding was using mExponentSharedPreferences, which persisted the ID in unscoped non-backed-up storage. ConstantsService uses the same storage location; the ID doesn't change. ✅
  - SDK40 ScopedInstallationProvider was using either mExponentSharedPreferences, which persisted the ID in unscoped non-backed-up storage (in this case bare InstallationIdProvider uses the unscoped common installation ID and there are no changes ✅), or an ID migrated from scoped SharedPreferences to the scoped noBackupDir (in which case the ID changes, but we can live with that). ⚠️
- when a bare project upgrades expo-notifications or expo-constants
  - Previous installation ID providers used unscoped SharedPreferences. Upon upgrade, the ID gets migrated to the same location by either expo-notifications or expo-constants. The ID stays the same. ✅