< 2021-10-02 >

2,323,064 events, 1,334,145 push events, 1,949,112 commit messages, 117,949,550 characters

Saturday 2021-10-02 01:00:46 by Pete Batard

[grub] add yet another frigging patch to GRUB "2.04"

  • GRUB 2.0 maintainers think they're doing a fine job, even when there are CRITICAL SECURITY FIXES that should warrant an immediate out-of-band release, and instead consider that waiting MONTHS or YEARS to release anything is not a big deal at all.
  • Ergo, distros, such as Ubuntu, start to pick whatever security patches they see fit, since they can simply not RELY on the upstream project to produce security releases in a timely manner. One such patch is: https://lists.gnu.org/archive/html/grub-devel/2021-03/msg00012.html
  • But since there is no new GRUB release per se, they still call their GRUB version, onto which they applied patches that have come into existence more than 2 years after the actual 2.04 release, "GRUB 2.04".
  • Obviously, since GRUB 2.04 + literally hundreds of cherry picked patches does deviate a lot from the last release, THINGS BREAK IN SPECTACULAR FASHION, such as the recently released Ubuntu 21.04 failing to boot with the error: grub_register_command_lockdown not found.
  • Oh, and of course, regardless of all the above, if you ask anyone, they'll tell you that there's nothing fundamentally wrong with the GRUB release process (even if they should long have released 2.05, 2.05-1 and 2.05-2, were their maintainers ready to acknowledge that delaying releases DOES CREATE MAJOR ISSUES DOWNSTREAM, as many people REPEATEDLY pointed out to them on the GRUB mailing list) or with the Ubuntu GRUB versioning process (they really shouldn't be calling their version of GRUB "grub-2.04" but instead something like "grub-2.04_ubuntu"). Oh no siree! Instead, the problem must all be with Rufus and its maintainer, who should either spend their lives pre-emptively figuring out which breaking patch every other distro applied out there, or limit media creation to DD mode, like any "sensible" person would do, since DD mode is the ultimate panacea (Narrator: "It wasn't").
  • So, once again, a massive thanks to all the people who have been involved in the current GRUB 2.0 shit show, whose DIRECT result is to make end users' lives miserable, while GRUB maintainers are hell bent on continuing to pretend that everything's just peachy and are busy patting themselves on the back on account that "Fedora recently dropped more than 100 of the custom patches they had to apply to their GRUB fork" (sic). Nothing to see here, it's just GRUB maintainer's Jedi business as usual. Besides, who the hell cares about Windows users trying to transition to Linux in a friendly manner anyway. I mean, as long as something doesn't affect existing Linux users, it isn't a REAL problem, right?...

Saturday 2021-10-02 08:57:23 by Lorenz Dirry

PAINTROID-161 added catrobat-image and refactored commands (#894)

So I decided to go with the Kryo library (https://github.com/EsotericSoftware/kryo) for serialization, since according to various benchmarks and articles (e.g. https://github.com/eishay/jvm-serializers/wiki) it performed best in terms of output size. It was also recommended in multiple Stack Overflow posts.

I also wrote custom Serializers for every object that gets serialized, mainly to ensure backwards compatibility and a minimal file size. The idea is that a Serializer specifies how an object is written to and read from a file. Without custom Serializers you would usually run into limitations, for instance not being able to change the type of a member variable if you wanted to keep backwards compatibility. All these custom Serializers inherit from the class VersionSerializer, with which we can specify how to read an object from different Catrobat image file versions.

For example: if we wanted to introduce a new feature with 3D coordinates, our PointCommand could no longer take a PointF parameter and would instead need a class that can store 3 coordinates, let's call it Point3F. All we now have to do is increment the field CommandSerializationUtilities.CURRENT_IMAGE_VERSION, and in the class PointCommandSerializer override the function readV1(..), specifying how to convert the PointF read from the old image version into the new Point3F. After that we just return a new PointCommand with the new Point3F as parameter. In the write(..) function we specify that we now write a Point3F, and in readCurrentVersion(..) we read the Point3F.
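As a rough illustration of the versioning pattern described here (the class and function names are taken from this description, while the reader/writer types are simplified stand-ins rather than the actual Kryo API):

```kotlin
// Simplified stand-ins for the real serialization stream types.
interface ImageInput { fun readFloat(): Float }
interface ImageOutput { fun writeFloat(value: Float) }

data class Point3F(val x: Float, val y: Float, val z: Float)
class PointCommand(val point: Point3F)

// Hypothetical base class: picks the right read function for the version stored in the file.
abstract class VersionSerializer<T> {
    abstract fun write(output: ImageOutput, obj: T)
    abstract fun readCurrentVersion(input: ImageInput): T
    open fun readV1(input: ImageInput): T = readCurrentVersion(input)

    fun read(input: ImageInput, fileVersion: Int): T =
        if (fileVersion == 1) readV1(input) else readCurrentVersion(input)
}

class PointCommandSerializer : VersionSerializer<PointCommand>() {
    // Current format: three coordinates.
    override fun write(output: ImageOutput, obj: PointCommand) {
        output.writeFloat(obj.point.x)
        output.writeFloat(obj.point.y)
        output.writeFloat(obj.point.z)
    }

    override fun readCurrentVersion(input: ImageInput): PointCommand =
        PointCommand(Point3F(input.readFloat(), input.readFloat(), input.readFloat()))

    // Old V1 images only stored a 2D PointF; convert it to the new Point3F.
    override fun readV1(input: ImageInput): PointCommand =
        PointCommand(Point3F(input.readFloat(), input.readFloat(), z = 0f))
}
```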

If we want to introduce a new Command, we don't even have to increment CURRENT_IMAGE_VERSION. Just create a new Serializer similar to the other ones and register it in the function CommandSerializationUtilities.setRegisterMapVersion(). IMPORTANT (I also added a comment in the code): never change the order of these registrations, and only add/register new Serializers at the end of this function. Kryo assigns an ID to each class in registration order, so it doesn't have to write/store the full package + name of each class.
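A minimal illustration of why the registration order matters (the registry below is a stand-in for the real Kryo registration, and the command names other than PointCommand are hypothetical):

```kotlin
// The file stores a numeric ID per class, assigned in registration order,
// so the order of register() calls must never change between app versions.
class ClassRegistry {
    private val ids = linkedMapOf<String, Int>()
    fun register(name: String) { ids[name] = ids.size }
    fun idOf(name: String): Int = ids.getValue(name)
}

fun main() {
    val registry = ClassRegistry()
    registry.register("PointCommand")   // ID 0
    registry.register("PathCommand")    // ID 1 (hypothetical existing command)
    registry.register("MyNewCommand")   // new serializers are only ever appended, so old IDs stay valid
    println(registry.idOf("PointCommand"))  // prints 0 in every app version
}
```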

Forwards compatibility is not supported (and results in an "image could not be loaded" dialog); otherwise we would also need to specify what happens when writing an object for every version, which in my opinion is not worth the effort since users should just update to the newest version on the Play Store.

I also had to replace android.graphics.Path with a new class SerializablePath (which inherits from Path). This class records/remembers all relevant function calls made on it. Otherwise there was no way to get this information for serialization, because android.graphics.Path seems to use some sort of registry system, perhaps to store some data more efficiently for reuse.
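A record-and-replay sketch of the SerializablePath idea (the op types and the replay hooks are simplified stand-ins; the real class inherits from the Android Path API):

```kotlin
// Every drawing call is stored as data, so it can be written to a file and
// replayed onto a real android.graphics.Path later.
sealed class PathOp {
    data class MoveTo(val x: Float, val y: Float) : PathOp()
    data class LineTo(val x: Float, val y: Float) : PathOp()
    object Close : PathOp()
}

class SerializablePath {
    val ops = mutableListOf<PathOp>()          // this list is what actually gets serialized

    fun moveTo(x: Float, y: Float) { ops += PathOp.MoveTo(x, y) }
    fun lineTo(x: Float, y: Float) { ops += PathOp.LineTo(x, y) }
    fun close() { ops += PathOp.Close }

    // On load, the recorded calls are replayed against a real Path-like target.
    fun replayOnto(moveTo: (Float, Float) -> Unit, lineTo: (Float, Float) -> Unit, close: () -> Unit) {
        for (op in ops) when (op) {
            is PathOp.MoveTo -> moveTo(op.x, op.y)
            is PathOp.LineTo -> lineTo(op.x, op.y)
            PathOp.Close -> close()
        }
    }
}
```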

Added following tests:

  • CatrobatImageIOIntegrationTest.kt (to test the process of writing/reading a catrobat-image)
  • CommandSerializationTest.kt (to test that every command is serialized and read as intended)

Also refactored all the Commands to Kotlin and added test cases for the commands which didn't have any before. Some thoughts on what I changed while refactoring the Commands to Kotlin:

  • when calling run(canvas: Canvas, layerModel: LayerContracts.Model), the canvas and layer model should never be null, and currently never are (I had to remove a test that checked for null)
  • completely refactored the StampCommand and removed BaseCommand, since the StampCommand was the only class extending it and BaseCommand only had functions that were useful for the StampCommand (like storing a bitmap); all commands still implement the Command interface

Your checklist for this pull request

Please review the contributing guidelines and wiki pages of this repository.

  • Include the name of the Jira ticket in the PR’s title
  • Include a summary of the changes plus the relevant context
  • Choose the proper base branch (develop)
  • Confirm that the changes follow the project’s coding guidelines
  • Verify that the changes generate no compiler or linter warnings
  • Perform a self-review of the changes
  • Verify that no files other than the intentionally changed ones are committed
  • Include reasonable and readable tests verifying the added or changed behavior
  • Confirm that new and existing unit tests pass locally
  • Check that the commits’ message style matches the project’s guideline
  • Stick to the project’s gitflow workflow
  • Verify that your changes do not have any conflicts with the base branch
  • After the PR, verify that all CI checks have passed
  • Post a message in the #paintroid Slack channel and ask for a code reviewer

Saturday 2021-10-02 09:14:24 by darkhz

mm: process_reclaim: Modified the driver to kill processes.

This is marked HIGHLY EXPERIMENTAL. Use at your own risk.

If you want to test this, ZRAM/SWAP must be enabled!

Background

While playing around with various low memory solutions on Android, like the stock Android LowMemoryKiller and Sultan's Simple LMK, I was unsatisfied with the way they handled low memory.

Here are my reasons, but I may be wrong/unclear. Please do feel free to correct me.

  • Android LMK is too aggressive, but setting minfrees/adjs helps to an extent. During my testing, I found that LMK depletes swap pages to 0, and a thrashing situation ensues. When this thrashing continues for a long time, the OOM Killer comes into play and starts killing tasks at random. This is undesirable.

  • Sultan's SLMK is a bit better, but still aggressive, in the sense that it starts to kill tasks even if there is a short memory spike. This is undesirable, as there may be a situation where many swap pages are available, but since tasks are being killed when those memory spikes occur, swap pages may not be used as much as they are freed. These kinds of situations were frequent when using SLMK during my testing.

In short, I was unable to keep as many apps open as the potential of my device's RAM and ZRAM would allow. The above-mentioned solutions also killed Android services at random, which proved to be annoying; for example, when I had a music player and apps in the background, the music player would abruptly be killed.

My objective is to:

  • Keep important tasks like Android services running until they are closed by the user.

  • Kill apps as little as possible, or more specifically, kill apps based on the total time spent in them. This means that apps with larger usage times get killed more rarely, and apps with smaller usage times get killed more frequently.

Solution

This is my humble (flawed?) attempt at handling a low-memory situation in a better manner.

I used the Process Reclaim driver as the base for this purpose. The reason I used this driver is that its function was to reclaim ANONYMOUS (MM_ANON) pages from each scanned task as much as possible (according to per_swap_size), so that apps never get killed and memory gets reclaimed in the end.

In theory, this sounds good. While testing though, I ran into a very serious problem: system freezes. Too much memory pressure and too few ANONYMOUS pages to reclaim rendered the system unusable for a while, which was NOT cool.

Therefore, the only way to prevent these system freezes was to kill some tasks, but at the appropriate time. This is the approach I took:

  • We keep reclaiming pages if free swap pages are above the defined threshold (free_swap_limit).

  • Once the swap goes below a percentage (free_percent) above/below the threshold, we start to collect tasks to kill.

  • We then sort those tasks based on their last accessed (stime+utime), and then start killing the tasks with the lowest (stime+utime). We stop killing tasks once the number of swap pages is above the threshold (free_swap_limit). I specifically sorted according to acct_timexpd; see kernel/tsacct.c for more details (CONFIG_TASK_XACCT).

So, in theory, this should reclaim ANONYMOUS pages more often, and kill tasks less, depending on the memory pressure.
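As a rough user-space illustration of the selection policy described above (the real change is kernel C inside the process-reclaim driver; the field names below just mirror the terms used here, and the exact relation between free_percent and free_swap_limit is an assumption):

```kotlin
// Each candidate task with its accounted CPU time (stime + utime), as used for sorting.
data class Task(val pid: Int, val cpuTimeJiffies: Long)

// Placeholder: in the driver this figure would come from the task's memory accounting.
fun estimatedPagesFreedBy(task: Task): Long = 10_000

// Keep only reclaiming while free swap is comfortable; once it drops below the
// free_percent-derived trigger, kill the least-used tasks until swap recovers
// above free_swap_limit.
fun selectVictims(
    tasks: List<Task>,
    freeSwapPages: Long,
    freeSwapLimit: Long,
    freePercent: Int
): List<Task> {
    val killTrigger = freeSwapLimit * freePercent / 100   // assumed interpretation of free_percent
    if (freeSwapPages >= killTrigger) return emptyList()  // still fine: keep reclaiming, kill nothing

    val victims = mutableListOf<Task>()
    var swap = freeSwapPages
    // Lowest (stime + utime) first: apps the user spent the least time in are killed first.
    for (task in tasks.sortedBy { it.cpuTimeJiffies }) {
        if (swap >= freeSwapLimit) break                   // enough swap recovered, stop killing
        victims += task
        swap += estimatedPagesFreedBy(task)
    }
    return victims
}
```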

Other Changes

  • I've renamed some variables in the driver, so that their values don't get reset by the values set by the ROM, specifically, init.qcom.post_boot.sh

  • Removed min_score_adj; I've now added a blacklist of OOM_SCORE_ADJs, so that the ones in the blacklist don't get their MM_ANON pages reclaimed. This is important, because some running services/apps may glitch (this happens rarely). For example, while playing music, I experienced crackling sounds at random intervals.

  • Added functionality to wake up the oom_reaper after killing a task, so that once the task is killed, all its pages(?) will be reaped. (Taken from the lowmemorykiller.c code)

  • Added some code bits from SLMK(marked in the comments).

  • Removed traces from the driver.

Credits

Minchan Kim and Vinayak Menon for the Process Reclaim driver, and Sultan AlSawaf (kerneltoast) for the appropriate code bits.

Signed-off-by: darkhz [email protected] Signed-off-by: Alex Winkowski [email protected]


Saturday 2021-10-02 09:21:07 by esainane

HFR: Purge damage/heal magic numbers using MATH

Everything is now defined in terms of human readable constants. No mechanical changes.

Things that technically changed:

  • The overfull function was technically a very fine-grained step function due to the rounding, so you only jumped 0.0125 points of damage per tick for every whole mole of excess mass. We now use a smooth function (sketched below), because oh my god why bother
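A tiny sketch of that change (the 0.0125-per-mole figure is taken from the note above; the exact functional form in the actual code is an assumption):

```kotlin
import kotlin.math.floor

// Old behavior: damage only changes when another whole mole of excess mass is crossed.
fun overfullDamageStepped(excessMoles: Double): Double = floor(excessMoles) * 0.0125

// New behavior: damage scales smoothly with the excess mass.
fun overfullDamageSmooth(excessMoles: Double): Double = excessMoles * 0.0125
```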

Things that might look like they were changed:

  • Subcritical healing. Healing only happened if the fusion mass was under 1200 anyway, so there's no point in using a higher threshold.

Things that probably should be later changed:

  • Alternate || conditions when nothing happens if the first condition isn't met
  • HYPERTORUS_MAX_MOLE_DAMAGE was used as a minimum, this probably wasn't intentional

Saturday 2021-10-02 10:23:30 by Shanmuga Ganesh T

initial commit

The demons had captured the princess and imprisoned her in the bottom-right corner of a dungeon. The dungeon consists of m x n rooms laid out in a 2D grid. Our valiant knight was initially positioned in the top-left room and must fight his way through the dungeon to rescue the princess.

The knight has an initial health point represented by a positive integer. If at any point his health point drops to 0 or below, he dies immediately.

Some of the rooms are guarded by demons (represented by negative integers), so the knight loses health upon entering these rooms; other rooms are either empty (represented as 0) or contain magic orbs that increase the knight's health (represented by positive integers).

To reach the princess as quickly as possible, the knight decides to move only rightward or downward in each step.

Return the knight's minimum initial health so that he can rescue the princess.

Note that any room can contain threats or power-ups, even the first room the knight enters and the bottom-right room where the princess is imprisoned.

Example 1: https://assets.leetcode.com/uploads/2021/03/13/dungeon-grid-1.jpg
Input: dungeon = [[-2,-3,3],[-5,-10,1],[10,30,-5]]
Output: 7
Explanation: The initial health of the knight must be at least 7 if he follows the optimal path: RIGHT -> RIGHT -> DOWN -> DOWN.

Example 2:

Input: dungeon = [[0]]
Output: 1

Constraints:

m == dungeon.length
n == dungeon[i].length
1 <= m, n <= 200
-1000 <= dungeon[i][j] <= 1000

Solution 1: Binary Search & DP

Binary search to choose an initHealth for the knight that lets him survive and reach the bottom-right cell.
    Minimum value left = 1, maximum value right = (m+n) * 1000 + 1 (because in the worst case, the value of every cell in the grid is -1000).
    mid = (left + right) / 2.
    If isGood(mid) then:
        ans = mid
        right = mid - 1 // Minimize init health as much as possible
    Else:
        left = mid + 1 // Increase init health
To check isGood(initHealth):
    The knight starts with initHealth health points in cell (0, 0).
    Let dp[r][c] denote the maximum health the knight can have on reaching cell (r, c) from cell (0, 0).
    Finally, if dp[m-1][n-1] > 0, the knight can survive with this initHealth.
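A compact, runnable sketch of this approach (the dungeon is assumed to be passed as an Array<IntArray>; isGood and the search bounds follow the outline above):

```kotlin
fun calculateMinimumHP(dungeon: Array<IntArray>): Int {
    val m = dungeon.size
    val n = dungeon[0].size
    val dead = Long.MIN_VALUE / 4                   // marker for cells no surviving path reaches

    // isGood(initHealth): can the knight reach (m-1, n-1) without health ever dropping to 0 or below?
    fun isGood(initHealth: Long): Boolean {
        val dp = Array(m) { LongArray(n) { dead } } // dp[r][c] = max health on arrival at (r, c)
        for (r in 0 until m) {
            for (c in 0 until n) {
                val best = if (r == 0 && c == 0) initHealth else maxOf(
                    if (r > 0) dp[r - 1][c] else dead,
                    if (c > 0) dp[r][c - 1] else dead
                )
                if (best <= 0) continue              // no surviving path into this cell
                val health = best + dungeon[r][c]
                if (health > 0) dp[r][c] = health    // otherwise the knight dies here
            }
        }
        return dp[m - 1][n - 1] > 0
    }

    // Binary search the smallest surviving initial health.
    var left = 1L
    var right = (m + n).toLong() * 1000 + 1
    var ans = right
    while (left <= right) {
        val mid = (left + right) / 2
        if (isGood(mid)) { ans = mid; right = mid - 1 } else { left = mid + 1 }
    }
    return ans.toInt()
}
```

Overall this is O(m * n * log((m+n) * 1000)): the outer binary search drives the per-candidate DP check exactly as outlined above.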

Saturday 2021-10-02 12:17:29 by William T. Ronan

Delete Test some bullshit help me make a branch god damnit


Saturday 2021-10-02 12:39:30 by Terence Eden

A few more insults

Heaven truly knows that thou art false as hell. - Othello, Act 4, scene 2
Out of my sight! Thou dost infect my eyes. - Richard III, Act 1, scene 2
Thou art a boil, a plague sore, an embossed carbuncle in my corrupted blood. - King Lear, Act 2, scene 4


Saturday 2021-10-02 14:25:03 by Clark Kromenaker

After a bit of investigation, I think my current texture and UV setup is OK. Though I am passing pixels to OpenGL from top-left (though it expects bottom-left), we are also using DirectX convention for UVs (upper-left) rather than OpenGL convention (bottom-left). This "double inversion" results in textures displaying correctly in-game. At first, I thought I should fix this, but then I realized that would cause trouble with SDL and probably DirectX later on. So, I'll leave it as is.

I also realized that the UVs used by the BSP are already correct (they look fine, and comparing to original game shows expected results). As a result, I believe the UV scale/offset data in the BSP surfaces IS NOT meant for the diffuse textures. Maybe it's only used by lightmaps, since they look totally messed up? I'll need to experiment with separate UVs for lightmaps. Some research into Half-Life's engine shows they used separate UVs for lightmaps.


Saturday 2021-10-02 14:31:17 by Karel Ha

Compete in AtCoder ABC 221

standings: 1858/8123 (8320 competing) on p. 93/519 -> PERCENTILE >77% ( >77%) :-/ ^^
rank: 1856/8297 [median rank] -> PERCENTILE >77% :-/ ^^

  • after a long time, though! ^_^
rating: +13 -> 1047 [5 KYU] Highest!
performance as 1147 [median performance]

Analysis

  • Tight/stressful arrival.
    • Brush Teeth breakfast 1-2 min before the start
  • Coming back to ABC after 1-2 months.
    • Busy life :-/
  • I came late ~30s.
  • first 2 problems in <7 min ^_^
  • first 3 problems in <20 min ^_^ :-)
  • maybe took too much time over D
    • to think through/plan too thoroughly => BETTER THOROUGH PEN & PAPER/PLANNING THAN RUSHED/FRUSTRATING DEBUGGING!
  • 4/4 ACs

A:

  • presubmit bugs
    • can't use 32<<(A-B) as 32**(A-B) => DON'T GO INTO BIT TRICKS UNLESS SURE/CLEAR!!
      • USE LOOPS/SIMULATIONS INSTEAD OF BIT TRICKS WHEN UNSURE!

B:

  • simulation over all possible swaps
  • hesitations:
    • how to incorporate no-swap? -> init result=S==T => CONSIDER USING EDGE/CORNER CASES FOR INITIALIZING IN BOOL RESULTS!
    • modify S or make copies of S -> swap and reswap back => PREFER IN-PLACE MODIFICATIONS OVER MAKING LENGTHY COPIES!! <- might TLE!

C:

  • exhaustive search via bitmasks
    • only consider digits sorted decreasingly
    • split in all possible ways among 2 groups -> 2^(#digits) <= 2^9
  • hesitations:
    • use to_string()+stoll() vs. construct 2 group numbers directly
      • 1st better:
        • better check for empty string/non-positive strings/leading 0's
        • simpler to implement/handle, less strenuous to write => PREFER TO_STRING()+STOLL() OVER CONSTRUCT INTEGERS DIRECTLY!!
        • IF POSSIBLE
  • presubmit bugs:
    • exception: stoll() on empty string
      • needed debug print => ALWAYS KEEP IN MIND CORNER CASE OF AN EMPTY STRING!!! => EXCEPTION WHEN USING STOLL() <- COULD BE CAUSED BY EMPTY STRING!

D:

  • sweeping line
    • over all pts of interests
      • day of opening interval
      • 1 day after the closing interval rather than right on the closing day
    • store how each pt affects balance/current count
    • update frequencies (i.e. ans, result) at each point
  • hesitations:
    • how to init prevPt?
      • init to 0, since days start from 1 => INIT PREV TO VALID STRICTLY LOWER VALUES!!
    • how to print out result as space-separated?
      • how to handle the last space -> endl? -> from map<ll, ll> ans store to vector<ll> result
        • and use cout on vectors => ALWAYS TRANSFORM RESULT TO VECTOR FOR SPACE-SEPARATED OUTPUT
    • need for assert(prevPt<pt);? => USE ASSERT WHEN THEY HIDE INTO TIME COMPLEXITY!!
      • BETTER SAFE THAN SORRY!
    • 1-based indexed k for ans, yet 0-based for result
      • be careful w/ transformation => USE .PB() TO CONSTRUCT VECTOR<LL> RESULT;!
  • could have implemented/finished faster
    • in the end, rather short code => DON'T HESITATE SO MUCH FOR SWEEPING LINE ALGORITHMS!!
      • PRACTICE THEM -> BE CONFIDENT!

E:

  • unsolved <- out of reach?
  • too large constraints
    • for O(N^2) solution
  • dp?
    • info/result for {<,=,>} A'_k?
  • sparse table/pre-compute?
  • loop over all subseq lengths?
  • for each length, at most O(log N) (or O(1)) work?
  • how to keep track of results with A'_k at various values/levels?
    • update would take O(N) too!!
      • somehow sparsely, over all powers of 2?
  • only 991 ACs / 1374 submissions
    • might be reachable... -> upsolve? -> editorial!

Signed-off-by: Karel Ha [email protected]


Saturday 2021-10-02 15:58:59 by Dudemanguy

wayland: simplify render loop

This is actually a very nice simplification that should have been thought of years ago (sue me). In a nutshell, the story with the wayland code is that the frame callback and swap buffer behavior doesn't fit very well with mpv's rendering loop. It's been refactored/changed quite a few times over the years and works well enough but things could be better. The current iteration works with an external swapchain to check if we have frame callback before deciding whether or not to render. This logic was implemented in both egl and vulkan.

This does have its warts however. There's some hidden state detection logic which works but is kind of ugly. Since wayland doesn't allow clients to know if they are actually visible (questionable but whatever), you can just reasonably assume that if a bunch of callbacks are missed in a row, you're probably not visible. That's fine, but it is indeed less than ideal since the threshold is basically entirely arbitrary and mpv does do a few wasteful renders before it decides that the window is actually hidden.

The biggest urk in vo_wayland_wait_frame is the use of wl_display_roundtrip. Wayland developers would probably be offended by the way mpv abuses that function, but essentially it was a way to have the semi-blocking behavior needed for display-resample to work. Since the swap interval must be 0 on wayland (otherwise it will block the entire player's rendering loop), we need some other way to wait on vsync. The idea here was to dispatch and poll a bunch of wayland events, wait (with a timeout) until we get frame callback, and then wait for the compositor to process it. That pretty much perfectly waits on vsync and lets us keep all the good timings and all that jazz that we want for mpv. The problem is that wl_display_roundtrip is conceptually a bad function. It can internally call wl_display_dispatch which, in certain instances (an empty event queue), will block forever. Now strictly speaking, this probably will never, ever happen (once I was able to trigger it by hardcoding an error into a compositor), but ideally vo_wayland_wait_frame should never infinitely block and stall the player. Unfortunately, removing that function always led to problems with timings and unsteady vsync intervals, so it survived many refactors.

Until now, of course. In wayland, the ideal is to never do wasteful rendering (i.e. don't render if the window isn't visible). Instead of wrestling around with hidden states and possible missed vblanks, let's rearrange the wayland rendering logic so we only ever draw a frame when the frame callback is returned to us (within a reasonable timeout to avoid blocking forever).

This slight rearrangement of the wait allows for several simplifications to be made. Namely, wl_display_roundtrip stops being needed. Instead, we can rely entirely on totally nonblocking calls (dispatch_pending, flush, and so on). We still need to poll the fd here to actually get the frame callback event from the compositor, but there's no longer any reason to do extra waiting. As soon as we get the callback, we immediately draw. This works quite well and has stable vsync (display-resample and audio). Additionally, all of the logic about hidden states is no longer needed. If vo_wayland_wait_frame times out, it's okay to assume immediately that the window is not visible and skip rendering.

Unfortunately, there's one limitation on this new approach. It will only work correctly if the compositor implements presentation time. That means a reduced version of the old way still has to be carried around in vo_wayland_wait_frame. So if the compositor has no presentation time, then we are forced to use wl_display_roundtrip and juggle some funny assumptions about whether or not the window is hidden or not. Plasma is the only real notable compositor without presentation time at this stage so perhaps this "legacy" mechanism could be removed in the future.


Saturday 2021-10-02 16:50:05 by HeroGamers

Shit, good thing that I didn't commit directly to master

Forgot to remove a comma after the last item in the shortlinks list... gosh I'm stupid


Saturday 2021-10-02 16:54:20 by Frenjo

Fixes /obj/ density issue

Fixes the long-standing object density issue; it turns out some bitflag fuckery was happening. Special thanks to SierraKomodo, Geeves and Neerti for debugging help and other technical advice.

Also: GOD DAMN FUCKING YES BABY WE FIXED IT AFTER OVER A YEAR!

Changelog update included.


Saturday 2021-10-02 17:22:56 by Marko Grdinić

"8:25am. Since I've been awake in bed for a while, I thought it would be later than this. Yesterday I did not sleep well and was really drained the entire day, but now I am quite refreshed. Instead of going to bed at 12am like the usual, it was more like 9:30pm.

I had time to think this morning. Yesterday I had a hole in my mind, but now I had the time to really bolster my resolve.

If it was the usual me, I'd just sit down and start writing out scenes. I could do this for a full year, after which I could look into how to do art and music. But instead of that, how about I reverse the process. I should look into how to do the art right now.

This is what I should be cultivating. The writing will take a long time. But the success of the product will greatly depend on the visual and sound production. I do not have the money to hire artists and musicians so that is what I have to do myself. If I could surmount this challenge, I could at least get some minor critical success.

I won't actually look into GANs right away. Instead I want to look at the standard techniques.

Just picking up a pen and drawing will not get me anywhere with my weak skills. But if I could get the computer to assist me, I could go a lot further.

This is what I am feeling a lot of anxiety about. But that is how it should be. I've felt this in programming many times before when tackling a challenge I did not know how to do.

8:35am. I am going to have to look at where the artists hang out and glean some tips.

Any mail?

8:40am. One is from Eight Steps. The latter place wanted to schedule an interview and I've been ignoring them for 2 days. I've said that I've decided to do my own project instead and apologized for ignoring them.

9:05am. Had to take a short break. What I need to do here is post the PL sub monthly review. I want to redo the Julia section. Let me paste it all again here, all 6 pages.

///

In September, though I should have been in job hunt mode, strangely enough, I spent most of the time studying Julia for the sake of probabilistic programming languages. I applied for an academic research engineering position at the end of August which was posted on this very sub. This was a mistake as the job fell through for the reason that the employer quoted a very high monthly rate and then corrected it down by 10x to half my minimum range the next day causing me a lot of whiplash. I'll be avoiding Julia research positions in the future.

Only bad things are happening due to the path I've chosen, so I am going to stop clinging so desperately to it and go the code monkey route of getting a 5k/month job and using that experience to iteratively get me something better. I am going to have to be decisive about this when the time comes. C#, Typescript, Python style backend jobs are the kind of plentiful wage slavery opportunities that I'll have to swallow my pride for and embrace.

Though I'll do that after doing some creative projects first. Who knows how long that will take. Hopefully forever.

--- Julia

I bad-mouthed it in the past on two separate occasions, but now that I've studied it closely, I can see its good and bad points more clearly. It is the kind of language whose good points are also its bad points. Julia is the kind of language which knows the virtue of power, but not the joy of slavery, so in the end it does not understand power.

Good:

  • The combination of multiple dispatch, macros and dynamic typing makes the language particularly powerful.

Ugly:

  • No pattern matching.
  • No tail call optimization.

Bad:

  • Dealing with macros in a debugger is difficult. Also, having macroexpand is one thing, but it does not work on macros inside functions.
  • Its type system cannot be used before running the program to check for correctness.
  • Too much thought went into how to exceed Matlab and Python in performance and expressive power, and not enough thought at how to make that power ergonomic at scale.
  • The startup lag when importing packages can be excused, but sometimes installing new packages takes far too long and no indication of how long it will take is given.

Julia is the best numerical computation language in the world. But as a general purpose language, it would have been better if it had been more restrained. For example, some of the PPL libraries like Turing.jl are close to impenetrable due to their use of macros. For most Julia code that would be of interest to me, I can't actually start up the debugger and step through it just to get a sense of even its control flow, like I could in most other languages.

I've heard that its compilation times can be bad, but I was surprised at how bad they are in some cases. When I installed RDatasets.jl for a Plots.jl example, just the precompilation took 62 minutes! Most of it was spent compiling Flux.jl the last package out of 80. I think I spent over 2h waiting in total. This killed my momentum for the day. And there wasn't a progress bar telling me how long the process would take so I was in the dark the entire time on how much time remained. It could have taken 5h or more for all I knew. I think after 10m of being in the dark the average user would abort the process and go back to Python.

I do not think any of its flaws really matter if you are just using it in the REPL to do numerical computation. There it really shines. But for PL work I am not sure if I'd consider it better than Python, let alone functional languages like F#. Its powerful features encourage a baroque style of programming at scale.

--- Spiral

In 2016, I made a prediction when working on an F# library that if GPU-based ML libraries were so difficult to make in it, then the situation would only be worse when AI chips arrived at the scene. I conjectured that GPUs being so hard to handle was why Python was the only language that had a decent ML story. So I wanted a language that could solve that problem, so that I'd be ready to program at my maximum comfort when future hardware arrived.

I think Spiral v2 itself in its current form meets all my goals of being a powerful, easy to use general purpose language that ML libraries for any kind of novel device can be made in. To support programming them natively in a functional style, making a C backend for such devices that has ref counting would be less than a week of work that I do not feel like sinking into now. It would be a completely different story if I had access to such devices.

So the language work will be put on hiatus until the situation changes, and I'll use its repo as a blog as usual. It sure would be great if some company just sponsored this kind of work. I’ve tried applying, but haven’t gotten any interest. It would be ideal if somebody at these places read my PL sub reviews and sent me an offer.

--- Review of my experience in RL

Totaling up the time I did reinforcement learning in 2018 and now in 2021 comes to over a year. I tried so many tricks from higher order optimizers, to various kinds of different RL algorithms, some of which of my own invention. The end result is that none of them succeeded in improving on vanilla RL in ways that would be significant to my naked eyes.

For example, I praised distributional RL, but after trying it out I realized just how much having extra outputs in the final layer impacted memory consumption. So vanilla RL has one output, and distributional RL has 2k, what is the big deal?

Float32 (4) * Batch size (512) * Seq size (avg 8) * Num Actions (2k) * Num Dist Values (2k) = OH SHIT, I DON'T HAVE ENOUGH MEMORY!

I think I am missing something above because I remember the expenditure being in the terabytes. Ah, what I am missing is that because I was using a transformer instead of an RNN, I had to make the above calculation for every step in the sequence. The way to fix that is to recalculate the forward part which increased the total computation time by 50%.
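A quick back-of-the-envelope check of the numbers quoted above, taking "2k" literally as 2048 and the other figures as written (this is only meant to show the order of magnitude, not the exact expenditure):

```kotlin
fun main() {
    val bytesPerFloat = 4L        // Float32
    val batch = 512L
    val seq = 8L                  // average sequence length
    val actions = 2048L           // "2k" actions
    val distValues = 2048L        // "2k" distributional values

    // Final-layer outputs for one forward pass over the batch.
    val perForward = bytesPerFloat * batch * seq * actions * distValues
    println("one forward pass: %.1f GB".format(perForward / 1e9))            // ~68.7 GB

    // With a transformer, the quoted calculation has to be repeated for every
    // step of the sequence, multiplying the figure by the sequence length again.
    println("kept per sequence step: %.1f GB".format(perForward * seq / 1e9)) // ~550 GB
}
```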

Also, instead of taking a softmax over all the actions, because distributional RL takes up so much memory I had to sample like 16 semi-randomly. This meant introducing an MLP module on top of the transformer instead of having a linear layer over all the actions. That hurt performance quite a bit. Categorical distributional RL is also slower to optimize than expected RL, which really hurts when combined with the non-stationarity of poker and how long Holdem takes to train in general.

Transformers have great performance in various different domains of ML.

For poker I brought them in so the GPU would have more work to do on longer sequences. In poker you tend to have mostly short betting streets with sporadic longer ones. GPUs don't like that, they like it when you batch everything. This is like cramming all the stuff into one big sandwich and shoving it into their mouth all at once. This actually presents significant difficulties just to get them to train on poker. I had to CPS the simulator to do checkpointing on every game node in order to batch the inputs. It would have been a lot easier to do it in online mode.

Due to how much memory they consume, I couldn't make the net any bigger than 5 (512-wide) transformer layers followed by 3 (512-wide) feedforward layers in the head module. I regret this.

I thought that bringing in transformers would improve generalization capabilities, but they never worked better than MLPs. I only succeeded in finding a novel architecture that is quite stable in RL, but it wasn't any better than the MLP in the end. That was a disappointment.

I am harping on distributional RL and transformers to showcase the theme I've been running into, in that much of what are presented as advancements are really domain specific tradeoffs and you can in no way assume that transformers > RNNs, and distributional RL > expected RL despite what the benchmarks show. This is a huge disappointment to me, to know that there are hidden costs the papers do not document.

For example, in 2018 I experimented a lot with KFAC which is a higher order optimizer, but decided against it in 2021 in favor of signSGD. Why? Because the performance improvement was not that notable, but it increased the implementation complexity by quite a lot. Also it was one thing to hack it myself in the old Spiral's library, but quite another to do it in PyTorch which is less flexible in comparison.

I kid you not, literally not a single thing despite a year of trying made RL unambiguously better other than using more computation.

Training on Holdem did not work for me. Just before I threw in the towel for the final time back in mid August, just what did I try that actually made it work better? Increasing the batch size implicitly by doing more runs in the simulator before the optimization step.

I owe an apology to OpenAI. Given their links to the rationalist community I held a grudge against them for wanting to help humanity, and also approaching the problem by just plowing more computation at the problem. I was actually mad when I read that their Dota agent used a batch size of 1 million. In my view, groups like Deepmind and OpenAI should be giving me better algorithms to use instead of putting on circus shows. So I made up my mind to show them how it is done.

But I just did not have the experiences of guys like Hinton of working on NNs since the 80s. I only entertained the conjecture that computation is everything - deep down I believed I should be able to find something to make things work better.

Now that I've experienced the harshness of RL, I sympathize with that position much more and I won't hold a grudge towards big research outfits for just using bigger computers and calling the problem solved. If they can make AGI that way, all the more power to them. By all means feel free.

--- Future Of Programming

The thing about NNs is that they are a special purpose thing. They are slow and bulky right now on the GPUs, but it will be different on AI chips. That would allow them to be used to do approximate memoization of simulators for example without slowing down the system by orders of magnitude like now.

What could I have done to actually make the Holdem agent work better without relying purely on the standard RL training?

One of the things is the same thing as AlphaGo - Monte Carlo Tree Search. If you do proper Bayesian conditioning and replacement in the simulator - much like CFR does, and use that likelihood information to weight the rewards, it is possible to make MCTS work even for hidden information games. Previously, CFR was extremely mysterious to me, but now I can almost see it being capable of being expressed as a probabilistic program sampling solely from categorical distributions. There are some missing pieces as far as variance reduction is concerned that none of the existing PPLs support, and sampling tabular CFR does provide a view on how credit propagation could work in nested probabilistic programs.

I got the idea only last month. There are some papers on using MCTS with hidden information games, but I haven't looked into them yet. Still, I can claim the achievement of understanding Bayesian inference enough to come up with this independently.

Another thing I could have done is have a net predict given a hand the % showdown vs a random opponent hand, both for the current and future streets. This is something you'd write a simulator for - given a hand, you just sample random opposing hands and run them forward for a bunch of iterations, then score them. At the end you get a win %. The NNs are really horrible at learning to hand read directly from the rewards, even with 100s of millions of hands I still got a 10-20% error rate where it asymptotes.

A NN could be trained in a supervised fashion to memoize this. Right now this would be slower than actually running the simulator, but it should be faster on future hardware. Doing it in a supervised fashion against a lower variance target should help things greatly. And in the RL agent replacing the hand features with the current and future win % estimate would help sample efficiency greatly.

The same goes for MCTS. Just using the simulator for it would run into a combinatorial explosion, but using a NN based policy net would give the needed memoization to work around that. This scheme would give me decent players from the start. I expected they wouldn't be folding flushes and trips to a single bet on the flop or get stuck in weird behavioral local minima. Since the sequence steps are short in poker for each hand, it might be a better choice than training a value net for many steps.

Even if I did that though, it would still have the same computational intensity as regular RL. For much the same reason I haven't tried something like using a GAN on raw inputs before passing them onto the RL module. Instead of 10k hands like now, I'd have gotten 1k hands per second in that case. I just don't have any reason to think GANs could learn to do things like read hands on their own.

And while MCTS would make a better player, I have no reason to assume that it would make training better vs just increasing the batch size. Using MCTS would reduce my training performance to dozens of hands per second even though each of the hands would have much lower variance. Who knows if the tradeoff is worth it.

The way I see to actually succeed at RL, and anything else in ML is to do the opposite of the advice of doing end-to-end learning and finding more spots like eliminating the need for the net to learn to do hand reading. That would definitely improve sample efficiency.

This is the way I see programming going forward. It is a style of writing simulators and using NNs to memoize them. Instead of expecting NNs to do things like now, we will be expecting simulators to do the right things, but NNs to guide them towards likely areas. The NNs won't go away, but training them end-to-end and expecting them to learn anything significant will be seen as a minor phase in hindsight. This also makes me bullish on probabilistic programming + variational inference which offers a framework for composing those simulators.

--- Future Of My Programming Development

My path is just so weak. This is the first time I've started thinking this way. It is not my talent, abilities or skills that I doubt, but my path.

What use are programming skills if they don't get you what you want? Working on RL and continually improving the model to get you better and better performance, enough to win in real life would have been so cool. That is what I've been dreaming about. Instead I am getting...this.

I need a different approach. All this time I've been ignoring that it is easier to make games than to beat them. It is far easier to make a poker, or chess, or a Go game than it is to train an agent that can conquer them.

I'd like to find a style of programming that would improve my creative capabilities.

Right now, I am pretty confident of being able to write a story, but would be great for standing out if I could do the graphics and music for it as well. I am not good at the latter. I do not know how to draw and compose. I was terrible in art class. Maybe I could get better if I practiced, but instead it might be possible to use NNs to help me here.

For backgrounds, would it be possible to use something like style transfer?

Even if generating 1024x1024 images would be out of reach on my current hardware, could I do something with probabilistic programming to generate things piece by piece? I don't know. I do not have particularly good ideas apart from style transfer, but I haven't really tried digging into this part of programming. Maybe it is worth giving it a try.

Rather than trying to conquer the world directly using pure power, maybe I should be controlling people with my mental powers. Back in 2014, I felt that I would be a charlatan if I just wrote stories about the Singularity instead of actively pursuing its ignition, but I've really fucking tried. So I am going to lift some of my self imposed restraints.

If I want money, I should be making games and selling them. I am not exactly expecting a code monkey salary, but even if it would be 0.5-1k a month, that would be suitable for the kind of life I want to lead. As long as it's made with my own power, I am fine with the outcome. I've been applying to places for the past month and a half, and one thing I've realized is that the kind of work that you do is more important than how much you get paid. I'd rather write 30 games or novels, than make 30 ASP.NET backends even if it nets me less pay.

The game I have in mind would be 90% a novel. The rest I would spice up with RPG elements as pure novels do not have the impact of bad ends and branching. To start myself off on the new path, I'll make the character stats into probability distributions rather than dumb scalars. I'll use Infer.NET to do inference over them.

This should make it a stochastic novel. I've never seen anything like that before, but it should be a good fit for the computer.

My heart is overflowing with desire, evil and inspiration. If through my writing I increase the global AI risk by a few orders of magnitude and get the money needed to buy an AI chip, that would make the effort well worth it. In 2014 I did not have any other options than to just churn out text, but now I should be able to do just a bit more.

Since some of you guys supposedly like reading my stuff, do support the work when I release it in an app store. It will probably take me at least six months of work and probably way more to get something good done. The last time I tried writing, after 8 months I was a shell of a man and had fallen into madness. I still haven't recovered from that. Money is the only cure for this kind of affliction.

///

Ok, done. This is good enough.

9:40am. Enough dwelling on the past. I need to focus on the future. The way the negotiation went was regrettable, and was caused by my inexperience leading me to put too wide a range on what offers are acceptable. If I had been more realistic and cautious about what I could get at the high end, maybe I would be working on a PPL right now.

There is an inherent stochasticity to my decision making that made me slip into irrational optimism.

9:50am. Maybe it will work for me in creative work.

  1. Learn how to do visuals and graphics.
  2. Learn how to do sounds and music.

Writing and programming I have down. I need to tackle this challenge.

Now focus me. Where should I start the search? How do pro artists get to their level?

9:55am. https://www.youtube.com/results?search_query=how+to+draw

At least studying this stuff should be a bit fun.

10:05am. https://youtu.be/ewMksAbgdBI?t=432

Why are the frames and the sound of the video overlapping?

https://youtu.be/ewMksAbgdBI?t=538

This is good advice. I hadn't known how to sketch myself. The art classes in school weren't worth shit since I did not learn this. Yeah, schools should be abolished at some point.

10:20am. Ok, why not watch all in the series. There are plenty of videos here. I should know the basic old school techniques before I dive into GANs and things like that.

Ok, focus me, let me go to the next thing.

In terms of skill I do not have to reach a godlike level like Murata. Even basic art and music will make a huge difference.

I have to entertain the notion that I am bad at drawing simply because I did not master even the basics properly. I mean, I did not know how to sketch, so I just excused it as a lack of talent on my side.

10:30am. I am getting lost in thought imagining some of the scenes. Sometimes I think about Simulacrum while I was programming in the past.

Let me watch more art lessons.

https://www.youtube.com/watch?v=JQ5deOlbyd0&list=PU26yPlaptBN_X2FfWbJ-iCw&index=62 Why You Suck at Drawing

Let me watch this.

10:40am. Let me take a short break here. I am not really willing to commit to getting a sketchbook, and a pencil and start drawing that way. I'll watch these videos, but I'll also want to look for how to do it all using the computer. It would not be a bad idea to master the art of drawing just by setting Bezier curves. That could carry me far.

But not like in Processing, I need something more advanced this time. It is just very hard to draw using just code. Maybe Photoshop would allow me to do it more interactively. Never used the program so I do not know.

11:05am. Let me resume.

https://www.youtube.com/watch?v=ByTxhyGtk-g&list=PL1HIh25sbqZnkA1T09UtVHoyjYaMJuK0a&index=4

I am going to watch the stuff on this playlist, but afterwards I'd want to get familiar with computer drawing rather than the pen'n paper art. I have no intent to depend on my hand eye coordination.

What I really need to get a grasp on are proportions and colors. Right now, I could not even pick out a skin color from a whelie.

Let me watch another video and then I'll have breakfast.

https://youtu.be/ByTxhyGtk-g?list=PL1HIh25sbqZnkA1T09UtVHoyjYaMJuK0a&t=242

This is an interesting way of finding the center of the roof.

https://youtu.be/ByTxhyGtk-g?list=PL1HIh25sbqZnkA1T09UtVHoyjYaMJuK0a&t=450

This is really nice; looking at the sketch I am really struck with a sense of perspective.

12pm. Thought of a blurb.

$$$ A person who loves 1000 people evenly is mostly indifferent to any particular one of them. All emotion is attention. And attention works best when it is focused on things that really matter. A good heuristic is to focus your attention on things important to you, and the most important thing should always be your own self. If it is not then you have a problem. - Loading Blurb $$$

12:05pm. https://www.youtube.com/watch?v=F65D4ej4f-M&list=PL1HIh25sbqZnkA1T09UtVHoyjYaMJuK0a&index=5

Let me get breakfast. After that I'll give this a watch.

1:15pm. Let me read another chapter of Sefiria and then I will resume.

2:05pm. Let me resume. I slacked enough. Yeah, this is my usual pace. I can't deal with job applications after all. If I can learn to create art and music that would be a huge advantage for game making that could actually make the path viable. I have my writing muse, and with that I could make VNs. Maybe that will net me just a little. When the hardware gets better, that will allow me to make fancier games.

2:10pm. Now focus me. Let me watch a few more of his vids and then I'll look at computer drawing.

2:50pm. https://youtu.be/01dLvv9RPVQ?list=PL1HIh25sbqZnkA1T09UtVHoyjYaMJuK0a&t=27

He says he spent 15m tops on it. This is actually quite good. I am not sure I would be able to do this even if I was given a whole week.

https://youtu.be/01dLvv9RPVQ?list=PL1HIh25sbqZnkA1T09UtVHoyjYaMJuK0a&t=209

He makes a really good point about drawing the nose. And yes, I remember trying to draw noses decades ago and never understanding why I could not get the feeling.

3:25pm. https://youtu.be/lz33416kapQ?list=PL1HIh25sbqZnkA1T09UtVHoyjYaMJuK0a&t=310

The video quality for this is so bad.

I am getting bored of this to be honest, let me just watch this and the perspective video and then I am going to look into digital art.

https://youtu.be/lz33416kapQ?list=PL1HIh25sbqZnkA1T09UtVHoyjYaMJuK0a&t=565

I don't get why it would take him 15m for his self portrait, but a lot of time for the cup.

3:40pm. https://youtu.be/CGB9VqSCRLU?list=PL1HIh25sbqZnkA1T09UtVHoyjYaMJuK0a Learn to Draw #10 - Proportion Basics

Ok, I get that to get realistic images I need to get the shading right. Not that I was good at that, but I always thought that my proportion sense was poor. It just always came out wrong on the paper no matter what.

3:45pm. Ok, focus me. Let me watch this. After that comes digital art.

4:10pm. https://youtu.be/yy81BjqGRu0?list=PL1HIh25sbqZnkA1T09UtVHoyjYaMJuK0a

This is boring, but let me do just another video from this guy. After that I'll take a break.

https://youtu.be/yy81BjqGRu0?list=PL1HIh25sbqZnkA1T09UtVHoyjYaMJuK0a&t=411

It is really amazing to see him draw this and see my own perception change.

https://youtu.be/yy81BjqGRu0?list=PL1HIh25sbqZnkA1T09UtVHoyjYaMJuK0a&t=469

You have to think in terms of masses instead of lines.

4:25pm. https://www.youtube.com/watch?v=hnZfnoVFpK8 Digital Art for Beginners: How to Get Started Quickly

Let me watch this.

https://youtu.be/hnZfnoVFpK8?t=135

(After comparing it to a rock) But you can draw with a mouse and some artists are quite good with it.

This is interesting. I'd like to try figuring out how to make the mouse work before investing in anything else.

https://youtu.be/hnZfnoVFpK8?t=162

He suggests I get a vector program like Adobe Illustrator. I've been thinking about Bezier curves. The Spiral logo and those poker card icons are the only real 'art' I've ever done if you can call it that.

4:40pm. https://youtu.be/WQWp8kc-EVw 10 DIGITAL ART Mistakes

This guy uses a tablet. I'd want to start out with vector art instead. There are probably tutorials on that. I'll have to check them out. But let me watch this first.

https://youtu.be/WQWp8kc-EVw?t=332

Huh, .png is lossless? I never knew.

5pm. That was fairly informative. Let me do a bit more.

https://youtu.be/OVUkizjDctw How to use 3D Models correctly】Ultimate Art Hack?

Let me watch this. Making 3d models and converting them to 2d is something I've been thinking about. I looked at vector art and there is some impressive looking stuff. I should learn vector art, as it won't strain my hand eye coordination. I'll still have to master proportions, shading and use of color, but hopefully learning that will be more forgiving than traditional art.

https://youtu.be/OVUkizjDctw?t=71

Wow, if I knew that being an artist is as easy as drawing over 3d models I would be a professional in my first half a year. Why do we pay artists so much again.

Sounds good to me. I do not mind being a tracer at all.

What is this Clip studio? I could probably use NN style transfer for backgrounds, but for characters I'll have to come up with them myself. I do not want to copyright infringe on other artists.

5:30pm. That was pretty interesting. When you wander around it seems you learn all sorts of things.

He mentions towards the end that clothing and accessories need to be drawn by hand.

https://youtu.be/NEvMHRgPdyk 3 STEPS TO INSTANTLY FIND YOUR STYLE| NEVER draw from IMAGINATION!

Let me watch this.

5:55pm. That was amusing. Let me watch a vector art video. It is getting late.

I think I'll find an Adobe Illustrator torrent and set it to download.

Before I do that, let me just see if it has a student option or something like that first.

There is an 1.8gb torrent. Ok, no prob.

6:05pm. Oh, it finished downloading while I was browsing around.

https://www.youtube.com/watch?v=TswPqn5bdeI What is Vector Art?

6:10pm. I haven't started watching the above, instead I am looking at the video selection. I haven't found any focused series of tutorials on this, but I guess I will. I should look at what Adobe itself has.

https://youtu.be/TswPqn5bdeI?t=265

Gradient meshes...this is interesting.

6:50pm. Done with lunch.

https://youtu.be/TswPqn5bdeI?t=355

These examples look really good. He mentions he would be able to do the same thing in much less time in Corel Painter.

7:05pm. I am not really happy with the styles I see in the thumbnails, but to be sure, the colors, proportions and shading are all there. Mastering this seems like a more likely path than picking up a pen. I should be able to make something with that kind of skill. I should believe in myself and go for it.

It should be possible to do things in a more anime style.

After this, what will come is music. There is a particular piece I want to compose for when the NPCs are doomed that has caught my imagination.

7:15pm. I've thought of a boredom song when the MC is at school. Tick tocking of clocks, snores and yawns would capture the mood perfectly.

7:20pm. Let me close for the day here. Tomorrow, I will watch a bunch of videos and then start practicing myself."


Saturday 2021-10-02 19:00:37 by Mario Kleiner

PsychHID/Windows: Add basic multitouch input support.

This adds a basic implementation of touch input for MS-Windows 8 and later.

Limitations wrt. Linux/X11 implementation:

It currently exposes exactly one touch input queue, which allows receiving touch input inside exactly one associated onscreen window (and a special mode where touch input is taken from the whole desktop, if a windowHandle of 0 is passed in, blocking any other user input on the desktop).

-> Iow. only one onscreen window can be touch-enabled/receiving in any active Psychtoolbox session.

Touch events are taken as a union of input from all connected touchscreens.

-> No way to differentiate between separate touch input from multiple connected screens at the moment.

Given the "exactly one window per touchqueue and exactly one touchqueue per window" relationship of touchqueues to windows, and the fact that this implementation only provides one queue, this is more limited than the Linux/X11 implementation, which has at least (n+1) queues for n touch devices, ie. one per individual touch device and then also "master pointer" queues which represent the union of all (core pointer) or subsets (added master pointers) of touchscreens. In this sense, this Windows implementation exposes its touch queue as the equivalent of the Linux/X11 "master pointer / X core pointer" queue.

-> Each touch event only reports (x,y) touch position, timestamp, optionally width and height of touch bounding box, depending on touchscreen device, and an "extraData" value which is optional, not supported on my two test touchscreens, and undefined in its meaning by Microsoft docs, iow. could be anything from pressure to confidence, rotation angle, proximity etc. These are limitations of the underlying WM_TOUCH messages, iow. a Windows OS limitation as of Windows-10.

Iow. this provides a subset of the functionality and flexibility available on Linux.

How it works:

Touch input is received and handled via the Windows touch input api. Our kbqueue processing thread now maintains its own Window class for touch input handling (or windowing system event handling in general), with its own associated WinProc() event handler and message pump driven by/in the thread mainloop.

RegisterTouchWindow() is used to register for touch input events on a window, so they can get processed by our new WinProc handler and enqueued in a new dedicated "touch input" keyboard queue with index "touchQueue".
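As an illustration of what handling WM_TOUCH in such an event handler could look like, here is a hedged sketch; EnqueueTouchEvent and all other identifiers are hypothetical and not PsychHID's actual code:

```c
// Sketch of a window procedure decoding WM_TOUCH into per-contact samples.
#define _WIN32_WINNT 0x0601   // Touch input API is declared in Windows 7+ headers.
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

// Hypothetical sink for decoded touch samples -- a real implementation would
// enqueue the event into the dedicated touch keyboard queue; here it just prints.
static void EnqueueTouchEvent(double x, double y, DWORD msecTime, DWORD flags)
{
    printf("touch x=%.2f y=%.2f t=%lu ms flags=0x%lx\n",
           x, y, (unsigned long) msecTime, (unsigned long) flags);
}

LRESULT CALLBACK TouchWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_TOUCH) {
        UINT cInputs = LOWORD(wParam);   // Number of touch points in this message.
        TOUCHINPUT *inputs = (TOUCHINPUT *) calloc(cInputs, sizeof(TOUCHINPUT));
        if (inputs && GetTouchInputInfo((HTOUCHINPUT) lParam, cInputs, inputs, sizeof(TOUCHINPUT))) {
            for (UINT i = 0; i < cInputs; i++) {
                // x/y arrive in hundredths of a pixel in screen coordinates.
                EnqueueTouchEvent(inputs[i].x / 100.0, inputs[i].y / 100.0,
                                  inputs[i].dwTime, inputs[i].dwFlags);
            }
            CloseTouchInputHandle((HTOUCHINPUT) lParam);
        }
        free(inputs);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}
```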

Ideally we'd RegisterTouchWindow() on the already opened Screen() onscreen window. Unfortunately that is not possible, as win32 only allows a thread (=our kbqueue thread) to receive winsys events for windows that the thread has created itself via CreateWindowEx. As PTB onscreen windows are created on the runtime main interpreter thread and not the PsychHID kbqueue thread, this is a no-go. I tried, it fails with "permission denied" error code, as stated in the docs for RegisterTouchWindow().

A second option would have been to RegisterTouchWindow() from the main thread in PsychHIDOSKbQueueStart() on the PTB onscreen window, ie. on the main thread that created that window (albeit inside Screen mex instead of PsychHID mex, but win32 doesn't know that), and then maybe use AttachThreadInput() to redispatch all events received by the main thread to our kbqueue thread. Problem with that is that in my understanding (not tested, but deduced from earlier failed experiments with AttachThreadInput()), the main thread would still have to drive the message pump to get events from its win32 event queue to the PsychHID kbqueue queue. Iow. touch input would not be processed at time of touch event delivery by Windows, but only as part of general event processing as triggered by Screen('Flip'), GetMouse, KbWait - iow. a purely synchronous model. While this is not impossible to do, it would add handshaking headaches or pseudo-races between OS event delivery, pumping on the main thread driven by user scripts calling suitable trigger functions at a frequency and timing at the sole discretion of the user script -- iow. maybe not often enough or at the right points in time -- and async reception by the kbqueue processing thread. This can create lots of correctness issues, because it is highly dependent on user scripts doing the right thing, without users understanding what the right thing is and how to implement it in any given experimental paradigm. So this option has been discarded as too likely to cause hard-to-debug bugs.

A third option would have been to register for raw input, as that can apparently be processed from anywhere in the system on any thread, or at least that's what i think it does, from a cursory read of the documentation. Problem with that is that we'd get unparsed HID raw events, instead of touch events, so we'd have to implement our own HID driver for touch input, possibly for every variant of touchscreen out there -- very high recurring costs, need for reverse engineering!

Anyway, so the solution we are going for is yet another hack, which luckily works, at least atm. as of Windows-10 May 2019 update on one test machine, and more recent October 2020 20H2 update on another machine:

  • The kbqueue processing thread creates its own window "touchWin" for reception of touch input events, ie. RegisterTouchWindow() on it. The window is created/destroyed inside the thread main loop as needed, ie. whenever TouchQueueStart / TouchQueueStop is called.

  • The window has special window properties set; these are key to this hack working without impairing visual stimulation for the associated PTB Screen() onscreen window:

    WS_VISIBLE so it is logically "visible" and can receive input. WS_POPUP to remove all its window decorations, borders, title bar etc., so only the client area would be there.

    WS_EX_TOPMOST, so it is located on top of the z-order of the window stack, in front of the PTB onscreen window, and with its client area exactly identical to the associated PTB onscreen window, so it "covers" and "occludes" the onscreen window, so any input events like touch input will end up in its event queue, instead of going to the PTB onscreen window. This is crucial!

    WS_EX_NOACTIVATE, so it doesn't become the active foreground window or keyboard input focus window ever! Not at creation, not if it receives actual touch input or mouse clicks! This is crucial to retain proper visual stimulus onset timing and timestamping for fullscreen onscreen windows! Why? If the touch window became the active window, it would take the active status away from the PTB onscreen window. This would disable pageflipping/compositor bypass on the onscreen window, it would go through the DWM compositor, and visual timing would be toast!

    As a mild optimization, on Windows 8 and later, we also apply WS_EX_NOREDIRECTIONBITMAP to tell the DWM that no redirection surface is needed, as the window will not ever display any visual content.

    The net result is a topmost invisible inactive window covering the area of the onscreen window, which receives all touch input events for touches inside the onscreen window's client area (see the sketch below).
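A minimal sketch of how such an overlay window could be created and registered for touch input follows; it assumes the onscreen window's client rectangle is already known, reuses the hypothetical TouchWndProc from the sketch above, and uses an illustrative class name, not PsychHID's actual implementation:

```c
// Sketch: create the invisible, inactive, topmost "touchWin" overlay on the
// kbqueue thread and register it for WM_TOUCH delivery.
#define _WIN32_WINNT 0x0602   // For WS_EX_NOREDIRECTIONBITMAP (Windows 8+).
#include <windows.h>

LRESULT CALLBACK TouchWndProc(HWND, UINT, WPARAM, LPARAM);  // From the sketch above.

HWND CreateTouchOverlay(HINSTANCE hInst, RECT rc /* client area of the PTB onscreen window, assumed known */)
{
    WNDCLASSEXA wc = { sizeof(wc) };
    wc.lpfnWndProc   = TouchWndProc;
    wc.hInstance     = hInst;
    wc.lpszClassName = "PTBTouchOverlay";   // Hypothetical class name.
    RegisterClassExA(&wc);

    DWORD exStyle = WS_EX_TOPMOST              // In front of the onscreen window.
                  | WS_EX_NOACTIVATE           // Never steals activation / keyboard focus.
                  | WS_EX_NOREDIRECTIONBITMAP; // No DWM redirection surface needed (Win8+).

    HWND hwnd = CreateWindowExA(exStyle, "PTBTouchOverlay", "",
                                WS_POPUP | WS_VISIBLE,   // Undecorated, but logically visible for input.
                                rc.left, rc.top, rc.right - rc.left, rc.bottom - rc.top,
                                NULL, NULL, hInst, NULL);

    // Register the window as touch-capable so it receives WM_TOUCH messages.
    RegisterTouchWindow(hwnd, TWF_FINETOUCH);
    return hwnd;
}
```

On TouchQueueStop the overlay would simply be torn down again with DestroyWindow() and UnregisterClassA(), mirroring the create/destroy-in-the-thread-mainloop lifecycle described above.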

I was surprised this hack would work, but at least on the two tested Windows 10 machines with May 2019 and October 2020 20H2 update and single-touchscreen and dual-touchscreen config, it worked. Visual timing was indirectly verified as seen in MultiTouchMinimalDemo with verbosity flag 2 -- it doesn't report input focus loss for the onscreen window, one requirement for DWM bypass and good timing. Also the new Screen('Preference','VisualDebuglevel', 6) debug setting assigns WS_EX_NOREDIRECTIONBITMAP to PTB onscreen windows, so if they go through DWM desktop composition, their client area will turn transparent. This didn't happen during testing, indicating successful compositor bypass for PTB-controlled pageflipping. PTB's internal timestamping consistency checks also didn't trigger.

As a countercheck, without WS_EX_NOACTIVATE, loss of DWM bypass was indicated by a transparent onscreen window, input focus was reported as lost, and most of the time PTB's timestamping checks would trigger an error.

So I'm reasonably convinced this doesn't interfere with timing on a "single onscreen window per session" setup for the single touch-enabled onscreen window. And multiple onscreen windows are broken on Windows 8 and later timing-wise anyway, so this won't make it worse than it already is.

Note we use touch event timestamping, remapping event msec timestamps to GetSecs() time. The quality and accuracy of touch input timestamps on Windows is unclear atm., as our KeyboardLatencyTest() for touch input was very noisy and inconclusive.
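One plausible way to implement such a remapping is sketched below; it assumes the millisecond event timestamps share the GetTickCount() timebase (an assumption, not something the Windows documentation guarantees) and is not necessarily the method used here:

```c
// Sketch: remap a millisecond event timestamp onto a QueryPerformanceCounter
// based clock by subtracting the event's age from the high-resolution "now".
#include <windows.h>

static double HighResNowSeconds(void)
{
    LARGE_INTEGER freq, now;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&now);
    return (double) now.QuadPart / (double) freq.QuadPart;
}

double RemapEventTime(DWORD eventMsecs)
{
    // Unsigned subtraction is wrap-safe for recent events; note GetTickCount()
    // itself only has roughly 10-16 ms resolution, which limits achievable accuracy.
    DWORD ageMsecs = GetTickCount() - eventMsecs;
    return HighResNowSeconds() - (double) ageMsecs / 1000.0;
}
```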

Tested OS versions: Windows 10 May 2019 edition and October 2020 edition.

Tested hardware: Microsoft Surface Pro 6 with internal touchscreen and also with RaspberryPi touchscreen attached as secondary screen.

PC darlene with dual-display, one regular monitor, and the RPi touchscreen as second monitor, which needed to be set as primary monitor / main monitor, both for proper touch input coordinate mapping and for proper onscreen window visual timing.

One curious limitation was observed on the MS Surface Pro dual touchscreen setup: Only touches from one screen at a time were reported. If a touch sequence was initiated on one screen, touch input from the other was ignored/not reported at all until the touch sequence was finalized. There seems to be a MS-Windows OS limitation of "only touch one touchscreen at a time". Weird, and probably inconvenient for some experimental scenarios, but it is what it is...

Some of the prep work / infrastructure work for this commit has already been part of Psychtoolbox 3.0.17 in an inert state for a while. This is just the "enabling commit" which adds the "secret sauce" that took very long to figure out.

This multitouch touchscreen support for Windows was sponsored by Mathworks. Thanks!

Signed-off-by: Mario Kleiner [email protected]


Saturday 2021-10-02 20:02:47 by Fabricio C Zuardi

basic react with basic webpack config

Boring / frustrating / infuriating stupid boilerplate for web apps in 2021.

If you don't like "create-react-app", you live in pain.

This patch contains a code formatter setup (prettier), dependencies for modern JavaScript and JSX to be transpiled (babel), a development web server (webpack dev server), and a webpack config that has the most basic render-react-inside-an-html setup.

Don't be like me, use the stupid CLI to create boilerplates instead.


Saturday 2021-10-02 21:04:35 by Droper2714.exe

added mile and fixed some capitalization errors pog

I hate my life


Saturday 2021-10-02 21:45:39 by Joe Romeo

Update README.md

Fixed a small typo. I don't know if you're participating in Hacktoberfest or not, but that (and the fact that I love your project and built a scoreboard for my brother earlier this year) is why I'm here!


Saturday 2021-10-02 21:59:34 by Pickle-Coding

The mole counts in the min checks in reactions.dm will now be multiplied by the inverse of the consumption multiplier. (#61557)

The moles in the min checks in reactions.dm will be multiplied by the inverse of the consumption multiplier. This will allow reactions to fully react when they have available gases. It also makes some of the code easier to read.

Fixes (#61380)

From the issue report above:

Alright. So there's this pattern in reaction code

Looks like this

tgstation/code/modules/atmospherics/gasmixtures/reactions.dm

Lines 597 to 604 in 00154ae:

    var/nob_formed = min((cached_gases[/datum/gas/nitrogen][MOLES] + cached_gases[/datum/gas/tritium][MOLES]) * 0.01, cached_gases[/datum/gas/tritium][MOLES] * 0.1, cached_gases[/datum/gas/nitrogen][MOLES] * 0.2)
    var/energy_produced = nob_formed * (NOBLIUM_FORMATION_ENERGY / (max(cached_gases[/datum/gas/bz][MOLES], 1)))
    if ((cached_gases[/datum/gas/tritium][MOLES] - 5 * nob_formed < 0) || (cached_gases[/datum/gas/nitrogen][MOLES] - 10 * nob_formed < 0))
        return NO_REACTION

    cached_gases[/datum/gas/tritium][MOLES] -= 5 * nob_formed
    cached_gases[/datum/gas/nitrogen][MOLES] -= 10 * nob_formed
    cached_gases[/datum/gas/hypernoblium][MOLES] += nob_formed

We take the minimum of a few values, theoretically because we want the reaction to run with the lowest amount feasible. So if there's 20 plasma, 10 o2, and 2 n2, and the reaction takes 4 parts plasma, 2 parts o2, and 1 part n2, we'll only end up using 8 plasma, 4 o2, and 2 n2. Since we can't react without the n2 and all.

The if check is there to serve as a backup and prevent negative outputs, theoretically because they wreak havoc, though honestly they don't really: so long as the right bitflag is returned, the whole mix is garbage collected anyway. Alright, it's a sanity check though, that's fine.

You notice how here, because he removes 5x nob_formed from tritium, he includes that in the if check? If you scroll out to the right you'll notice that he multiplies the inputs in the min by the inverse of their scalar. At least, that's what he was trying to do; he mixed the two up.

You get the picture. The min serves to get the lowest possible amount to remove so the reaction can go through, and the if check serves as a sanity check wrapping around it.

The issue is people have been misusing it for a good while (including in this instance). They most commonly forget to include the inverse scaling in the min(), which leads to these weird fucked phantom gas minimums that aren't listed anywhere, but still stop the reaction.

See:

tgstation/code/modules/atmospherics/gasmixtures/reactions.dm

Lines 690 to 697 in 00154ae:

    var/heat_efficency = min(temperature * 0.3, cached_gases[/datum/gas/freon][MOLES], cached_gases[/datum/gas/bz][MOLES])
    var/energy_used = heat_efficency * 9000
    ASSERT_GAS(/datum/gas/healium, air)
    if ((cached_gases[/datum/gas/freon][MOLES] - heat_efficency * 2.75 < 0 ) || (cached_gases[/datum/gas/bz][MOLES] - heat_efficency * 0.25 < 0)) //Shouldn't produce gas from nothing.
        return NO_REACTION
    cached_gases[/datum/gas/freon][MOLES] -= heat_efficency * 2.75
    cached_gases[/datum/gas/bz][MOLES] -= heat_efficency * 0.25
    cached_gases[/datum/gas/healium][MOLES] += heat_efficency * 3

They need to be updated to not do this and to use min() properly. Otherwise it leads to dumb graphs like this: https://www.desmos.com/calculator/xufgz8piqw (Healium formation graphed out; p is freon, b is bz. If either is reduced below 0, the reaction stops. You'll notice this leads to really strange scaling dead spots and a lot of frustrating behavior.)
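To make the intended pattern concrete, here is a small illustrative sketch in C (not DM), using the 20 plasma / 10 o2 / 2 n2 example from above; all names are made up for illustration:

```c
// Sketch of the "limiting reagent" pattern: scale each available gas by the
// inverse of its consumption coefficient inside min(), so the reaction uses as
// much gas as it can without ever driving any amount negative.
#include <stdio.h>

static double min3(double a, double b, double c)
{
    double m = a < b ? a : b;
    return m < c ? m : c;
}

int main(void)
{
    double plasma = 20.0, o2 = 10.0, n2 = 2.0;      // available moles
    double k_plasma = 4.0, k_o2 = 2.0, k_n2 = 1.0;  // parts consumed per reaction unit

    // Dividing by each coefficient (i.e. multiplying by its inverse) makes n2
    // the limiting gas here: 2 / 1 = 2 reaction units.
    double units = min3(plasma / k_plasma, o2 / k_o2, n2 / k_n2);

    plasma -= k_plasma * units;  // uses 8
    o2     -= k_o2 * units;      // uses 4
    n2     -= k_n2 * units;      // uses 2

    printf("units=%.1f leftover: plasma=%.1f o2=%.1f n2=%.1f\n", units, plasma, o2, n2);
    return 0;
}
```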

Thanks to @GuillaumePrata for bringing this to my attention, love you man.


Saturday 2021-10-02 22:51:06 by Beyley Thomas

fix fuck bullshit with bitchass motherfucker ass motherfucker aka update the conversions to be more accurate and allow for better typing


< 2021-10-02 >