There were a lot of events recorded by gharchive.org, of which 2,144,997 were push events containing 3,162,741 commit messages amounting to 244,147,813 characters, filtered with words.py@e23d022007... down to these 29 messages:
Improve the job assets for Summer 2023 job hunt
The resume's basically good, but I wanted to fix stuff that had been bothering me. I added a note of my English proficiency since I'm applying in a non-English speaking country.
The cover letter now is more oriented toward what I'm looking for from a job rather than trying to appeal to what a company is looking for in a candidate. I'm doing it this way partly because it's a pain in the butt to constantly have to rewrite parts of my cover letter for every company, but also, it's emotionally painful to have to describe how I'm going to be the perfect candidate for a job I frankly don't care about.
My hope is that this gambit pays off in the specificity of my desires being appealing to those that could fulfill them. We'll see I suppose.
Extract nkError diagnostics data
`nkError` has a `reportId`, which unfortunately opens the door to any possible "report" crawling its way in there. This commit introduces a new design where `nkError` owns its diagnostic data, and `ast_types`, where `nkError` and friends are defined, owns the data types. `Report` and all the associated cruft now exist simply to do rendering. `Report` still handles routing and final consolidation, but in future developments that will likely evolve further.
Overall Change:
- AST/Sem drop `ReportId` and the associated memory leak
- AST/Sem now define a diagnostic type and are primary data owners
- `Report` is now used only in legacy situations and rendering; the former is to be addressed in future commits
What didn't change/major caveats:
- `structuredReportHook` still uses `Report`, this has consequences...
- `concept` and `{.explain.}` related errors have regressed; their tests are marked with known issue, as this isn't a full conversion
- diagnostics involving magics have inconsistent rendering, a pre-existing issue; sorting this out is future work
Each module should own/define the data types that it's fundamentally in charge of. Modules which are effectively components, like `lexer`, `parser`, `vm`, etc, should produce diagnostics, events, telemetry, whatever, and their caller should handle output/rendering of these. The key part about that last point is that the renderer and its format must not be pushed into these modules; rather, the renderer should consume what they produce (directly or via an intermediary/translation).
For those wondering about using a value type inside `nkError` instead of the current `ref object`: I ran 3 bootstrap runs with each version of `nkError`, and the `TAstDiag` (value type) version was consistently ~10% slower during the bootstrap process.
run | type | real | user | sys |
---|---|---|---|---|
1 | pastdiag | 28.675 | 27.056 | 2.527 |
2 | pastdiag | 28.184 | 26.663 | 1.372 |
3 | pastdiag | 28.276 | 26.682 | 1.446 |
1 | tastdiag | 31.562 | 29.245 | 3.236 |
2 | tastdiag | 30.453 | 28.413 | 1.871 |
3 | tastdiag | 30.368 | 28.311 | 1.896 |
type | real | user | sys | summary |
---|---|---|---|---|
pastdiag | 28.38 | 26.80 | 1.78 | avg |
tastdiag | 30.79 | 28.66 | 2.33 | avg |
pastdiag | 0.20 | 0.17 | 0.50 | avg dev |
tastdiag | 0.51 | 0.39 | 0.60 | avg dev |
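As a quick sanity check on the tables above, the summary averages can be recomputed from the per-run numbers (a sketch using only the `real` column):

```javascript
// Recompute the average "real" times from the three runs of each variant.
const pastdiag = [28.675, 28.184, 28.276]; // ref-object variant runs
const tastdiag = [31.562, 30.453, 30.368]; // value-type variant runs
const avg = xs => xs.reduce((a, b) => a + b, 0) / xs.length;
// avg(pastdiag) ~= 28.38, avg(tastdiag) ~= 30.79, matching the summary table
const slowdown = avg(tastdiag) / avg(pastdiag); // ~1.085
```

The ratio works out to roughly 8-9% on wall time, consistent with the "~10% slower" claim.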
The report data types are rather bloated, eg: using `Int128` for `enum` `high`/`low` values.
For diagnostics, it's best to capture very little data, and then query within the rendering layer. Temper this where querying would reproduce complex analysis, reconstitute heavy context, or keep large data sets alive just so they can be queried later.
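The capture-little-then-query approach above might look like this (a hypothetical sketch; `makeDiag`, `render`, and `lookupName` are illustrative names, not from the compiler):

```javascript
// Capture only cheap identifiers at diagnostic-creation time;
// the renderer resolves the heavy details on demand.
function makeDiag(kind, nodeId) {
  return { kind, nodeId }; // small and copyable; no strings, no big contexts
}

function render(diag, lookupName) {
  // lookupName stands in for querying the symbol table at render time
  return `${diag.kind}: ${lookupName(diag.nodeId)}`;
}
```

For example, `render(makeDiag("undeclaredIdent", 42), id => "foo")` yields `"undeclaredIdent: foo"`, with the string built only when rendering.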
It's very possible we don't clean up the symbol table of abandoned analyses. This likely results in space leaks, or rather overhead while processing each module.
This style of conditional, `0 < foo`, is forbidden; it's just not accessible code and is prone to reasoning and ultimately logic errors.
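A one-line illustration of the style rule (hypothetical variable names):

```javascript
const foo = 3;
// Forbidden style: reads "backwards", from the constant to the variable.
const bad = 0 < foo;
// Preferred: the subject of the comparison comes first.
const good = foo > 0;
```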
When doing inline error state representation in a data type, such as `TLineInfo`, where the invalid state is `TLineInfo(line: 0, col: -1, fileIndex: InvalidFileIdx)`, the default initialization of objects is not our friend. This became a minor issue during `nkError` construction, where the line info on the provided `wrongNode: PNode` parameter might be invalid. Hard to say, as the default initialization is technically a valid location. A heuristic was put into place to work around it and I think it'll hold until we fix it properly. I see this as a language design problem; now, one could argue that a better selection of invalid state is required, but when working in a retroactive case such as this, it's a non-starter. Furthermore, the ergonomics of `{.requiresInit.}` leave much to be desired. I'm not sure what exactly the answer is, but I think this is the difference between something very primitive like a `struct` vs an actual `object`.
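The default-initialization hazard can be sketched like so (a hypothetical model, not the compiler's actual types; the `line: 0, col: -1` sentinel mirrors the one above):

```javascript
// An "invalid" sentinel that overlaps plausibly-valid data: a heuristic
// check like this cannot distinguish "never set" from "really at line 0".
const InvalidFileIdx = -1;
function makeLineInfo(line = 0, col = -1, fileIndex = InvalidFileIdx) {
  return { line, col, fileIndex };
}
function looksInvalid(info) {
  return info.line === 0 && info.col === -1 && info.fileIndex === InvalidFileIdx;
}
```

Note that `makeLineInfo()` with no arguments produces exactly the sentinel, which is the ambiguity the commit message describes.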
A number of node kind const sets in `ast_query` and related modules are not defined as compositions of each other. This can easily lead to changes that only impact parts of the compiler, introducing yet more bugs. This might be worthy of a compiler style guide remark.
Just like a large number of imports is likely a source of issues, so are broad exports; avoid these like the plague. Removing these from `reports` and related modules resulted in a number of significant and hard-to-debug errors -- first encountered within CI, via a `bors try`. They manifested as a doc gen build failure, which was seen as an undefined symbol error. All because `reports` and friends were exporting large swaths of other modules, eg: `ast_types`. After much debugging, and fixing error diagnostics to provide recursive dependency detection as part of those undeclared identifier errors, it was fixed. It wasted an entire day. Exports just create massive public surfaces; don't do it.
Finally, this commit impacts a very wide swath of the compiler, lots of code had to be updated, along with many stylistic fixes. What follows is a list of more detailed changes, in no particular order:
`ast_types`/`nkError` highlights:
- defines all its own diagnostic data types for `nkError`
- design-wise, `nkError` nodes are now much more likely to contain the immediate AST they're taking the place of, like a true wrapper. This should allow for easy recovery by simply getting `n.diag.wrongNode`.
- call mismatch related types (`SemCallMismatch` and `SemDiagnostics`) moved to `ast_types`
- moved `NodeId` into `ast_types` from `packed_ast`, so it can be used more broadly, such as in `PAstDiag` as mentioned above.
- removed `ReportID`; instead `PAstDiag` uses the `NodeId` of the first error node it's embedded in as its `diagId`. This is an easy way to have a monotonic sequence, while also keeping some correlation between diag and error node. Due to copies, the `diagId` and the `PNode` id may not always match.
- `ast` procs like `newNode` no longer have to care about `sons` on `nkError` nodes; this also stops accidental traversals.
`ast_query` literal node kind consts as sets
Previously, various const set ranges relating to literals were defined independently; now only the smallest sets are, while bigger sets are defined based on the smaller sets. Going forward, this should ensure adding new node kinds updates all broader ranges.
This means: `nkLiterals* = nkIntLiterals + nkFloatLiterals + nkStrLiterals`
Extracting a style guide remark out of the commit message would be a good idea.
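The composition idea can be sketched with JS Sets (member names are illustrative; the real consts live in `ast_query`):

```javascript
// Smallest sets are defined directly...
const nkIntLiterals = new Set(["nkIntLit", "nkInt8Lit", "nkInt16Lit"]);
const nkFloatLiterals = new Set(["nkFloatLit", "nkFloat32Lit"]);
const nkStrLiterals = new Set(["nkStrLit", "nkRStrLit"]);
// ...and broader sets are compositions, so a new kind added to a small set
// automatically shows up in every superset.
const nkLiterals = new Set([...nkIntLiterals, ...nkFloatLiterals, ...nkStrLiterals]);
```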
`msgs` now bridges translating `PAstDiag` to legacy `Report` for routing and rendering, although routing should probably not be part of `Report` stuff.
Moved ill-formed AST message creation routines earlier in `msgs` to reuse them when generating legacy report message strings. String generation should move to a rendering layer in the future.
More consistent VM/Gen event to AST diag mapping, mostly within `compilerbridge`, which now has simplified/fewer pathways.
Removed `traceReason` from VM stacktrace events
This includes legacy reports; the primary motivation is that it was being captured and not used. Additionally, not all stacktraces can be related back to a meaningful vm event.
The fixing-up of data types also resulted in some code simplification in vmgen and sem.
Clean-up identifier errors and fix vm err location
- cleaned up various expected identifier messages
- vm errors now capture instantiation info correctly
- errors don't set location data on diagnostics if already present, to honour overrides
Diag mapping in `vm` and `vmgen` was updated. Mapping `vm` and `vmgen` events to `PAstDiag` is presently done in their respective modules, as they both directly depend upon `ast_types`. Not an ideal situation, but a lot more refactoring is required until `vm` is free of AST knowledge/dependencies.
The `options` module now manages `ReportSet` as a simple collection of the `NodeId`s of diagnostics that have been reported.
Removed the `ConfigRef` param from `types.typeMismatch`; it wasn't being used.
Better error for `is` with wrong number of args:
- previously: `wrong number of arguments`
- now: `'is' operator takes 2 arguments`
- fixed `tests/errmsgs/tmisc` test to match wording
Undeclared identifier errors now output any recursive modules imports that were detected as those can be a cause of such errors.
Recursive dependency tracking from the importer is cheaper, as it now only stores FileIndex pairs. Unfortunately, we don't have a clearing mechanism, so new minor space leak. :/
Removed some eager diag message data querying.
Fixed doc gen error report rendering, which wasn't outputting the full text, making it impossible to find where errors/hints/warnings were occurring. Without this fix, an unclosed backtick in a doc block was breaking CI... cool.
Removed some bounds tracking in reports
A number of `Report`s were tracking, but never using, a pair of `Int128` in order to know about out-of-bound issues for arrays, ranges, ordinals, etc. The data is rarely if ever output in messages, and that's a lot of bytes in most cases. Disabled wherever this was inconvenient; it can be restored for error messages we wish to improve as future work.
Created `rsemBigOrdsEnergy` to avoid overloading the `mismatchCount` tuple with `Int128` bloat, and moved the following reports there: `rsemSetTooBig`, `rsemArrayExpectsPositiveRange`, `rsemInvalidOrderInEnum` (partial list).
No longer capture `countMismatch` data for these as it's unreported: `rsemExpectedLow0Discriminant`, `rsemExpectedHighCappedDiscriminant` (partial list).
The `errorKind` property now returns a `TAstDiagKind`.
Errors related to positions in calls where identifiers are expected have been updated to provide a bit more context:
- dot calls, callable field access, etc: error wording updated, indicating the problematic identifier and then the call expression within which it was found. See `rsemExpectedIdentifierWithExprContext` handling in `cli_reporter`
- updated `astrepr` to handle the new `diag` field of the `nkError` variant
Resulting in the following test changes: nimsuggest test `tchk1` updated for messages with "found" wording instead of "got".
`countMismatch` in reports are now `int`s instead of `Int128`; seriously, how many were we expecting?
The string value of `mSizeOf` as `"sizeOf"` was taken from VM tests, but doesn't jive with other tests that serialize the same magic. Need to figure out which convention to go by.
`concept` and `{.explain.}` error msg regression
The compiler presently doesn't emit useful diagnostics for these, simply a count of the diagnostics that failed. The implementation is tied up with `structuredReportHook`, which in turn uses `Report`, and there isn't a reasonable way to turn this into `PAstDiag` for consumption. The following tests are disabled as `knownIssue`:
- tests/lang_experimental/concepts/t3330
- tests/lang_experimental/concepts/texplain
- tests/lang_experimental/concepts/twrapconcept
Noted issues with the `reports` module and friends; hopefully it wards off any further proliferation and people can help with incremental rework.
`types.semReportTypeMismatch` no longer takes a `ConfigRef` parameter. This turned out to be unnecessary/unused after all the diag changes.
`sigmatch` has fewer dependencies on `reports_sem`.
`getReport` moved to `msgs`, dropped the `conf` param.
Creating a new use qualifier diagnostic via `semstmts.newSymChoiceUseQualifierDiag` will assert that there are at least two choices, to avoid spurious errors.
Removed a number of compile warnings by removing unused imports.
Updated `astrepr` to use AST diagnostics from `nkError` nodes.
Reduced broad exports by `Report` related modules:
- `reports` was leaking `ast_types` everywhere
- `reports_base` had overlapping exports
- formatting/style clean-ups in these modules
Random Report changes:
- removed `rsemUserRaw`, it's never used
- renamed `rrsemCompilesError` to `rsemComplesHasEffects`
Special thanks to zerbina for all the code reviews and suggestions!
Co-authored-by: zerbina [email protected]
Who tf wrote this, and why? | Biggest stability update ever
Bro, what the fuck is this bullshit? I swear imma go cry if I see more shit like that...
Remove histogram color profile
Aside from being coded with the ass (entirely copy-pasted), the histogram color profile is useless since the histogram is grabbed in display RGB. If we want to display the histogram in pipe RGB, then we can convert display to pipe RGB, but if RGB got clipped in display, then we're converting a clipped signal.
The whole thing is misleading and was used in the overexposed module, which completely voids its purpose. Say you display the histogram in Rec 2020 (super large), but your display is sRGB (super small); then you clip in sRGB at 100%, but converting back to Rec 2020, your 100% becomes 90% or something, and the scope shows no overexposure at all.
That's bullshit coded by idiots. Not sure why it took me 4 years to spot it.
[MIRROR] Basic Mob Carp: Retaliate Element [MDB IGNORE] (#18030)
- Basic Mob Carp: Retaliate Element (#71593)
Adds an Element and AI behaviour intended to replicate the "retaliate" behaviour which made up an entire widely-populated subtype of simple mobs. The behaviour is pretty simply "If you fuck with me I fuck with you". Mobs with the component will "remember" being attacked and will try to attack people who attacked them, until they lose sight of those people. They don't have very long memories so breaking line of sight is enough to remove you from their grudge list. The implementation unfortunately requires registering to 600 different "I have been attacked by X" signals but c'est la vie.
It will still be cleaner than `/mob/living/simple_animal/hostile/retaliate/clown/clownhulk/honcmunculus` and `mob/living/simple_animal/hostile/retaliate/bat/sgt_araneus`.
I attached it to the pig for testing and left it there because out of all the farm animals we have right now, a pig would probably get pissed off if you tried to kill it. Unfortunately it's got a sausage's chance in hell of ever killing anyone.
It doesn't have much purpose yet but as we make more basic mobs this is going to see a lot of use.
🆑 add: Basic mobs have the capability of being upset that you kicked and punched them. add: Pigs destined for slaughter will now ineffectually attempt to resist their fate, at least until they lose sight of you. balance: Bar bots are better at noticing that you're trying to kill them. /🆑
- Basic Mob Carp: Retaliate Element
Co-authored-by: Jacquerel [email protected] Co-authored-by: tastyfish [email protected]
sched/core: Fix ttwu() race
Paul reported rcutorture occasionally hitting a NULL deref:
```
sched_ttwu_pending()
  ttwu_do_wakeup()
    check_preempt_curr() := check_preempt_wakeup()
      find_matching_se()
        is_same_group()
          if (se->cfs_rq == pse->cfs_rq) <-- BOOM
```
Debugging showed that this only appears to happen when we take the new code-path from commit:
2ebb17717550 ("sched/core: Offload wakee task activation if it the wakee is descheduling")
and only when @cpu == smp_processor_id(). Something which should not be possible, because p->on_cpu can only be true for remote tasks. Similarly, without the new code-path from commit:
c6e7bd7afaeb ("sched/core: Optimize ttwu() spinning on p->on_cpu")
this would've unconditionally hit:
smp_cond_load_acquire(&p->on_cpu, !VAL);
and if: 'cpu == smp_processor_id() && p->on_cpu' is possible, this would result in an instant live-lock (with IRQs disabled), something that hasn't been reported.
The NULL deref can be explained however if the task_cpu(p) load at the beginning of try_to_wake_up() returns an old value, and this old value happens to be smp_processor_id(). Further assume that the p->on_cpu load accurately returns 1, it really is still running, just not here.
Then, when we enqueue the task locally, we can crash in exactly the observed manner because p->se.cfs_rq != rq->cfs_rq: p's cfs_rq is from the wrong CPU, therefore we'll iterate into the non-existent parents and NULL deref.
The closest semi-plausible scenario I've managed to contrive is somewhat elaborate (then again, actual reproduction takes many CPU hours of rcutorture, so it can't be anything obvious):
```
  X->cpu = 1
  rq(1)->curr = X

  CPU0                          CPU1                            CPU2

                                // switch away from X
                                LOCK rq(1)->lock
                                smp_mb__after_spinlock
                                dequeue_task(X)
                                X->on_rq = 9
                                switch_to(Z)
                                X->on_cpu = 0
                                UNLOCK rq(1)->lock

                                                                // migrate X to cpu 0
                                                                LOCK rq(1)->lock
                                                                dequeue_task(X)
                                                                set_task_cpu(X, 0)
                                                                X->cpu = 0
                                                                UNLOCK rq(1)->lock
                                                                LOCK rq(0)->lock
                                                                enqueue_task(X)
                                                                X->on_rq = 1
                                                                UNLOCK rq(0)->lock

  // switch to X
  LOCK rq(0)->lock
  smp_mb__after_spinlock
  switch_to(X)
  X->on_cpu = 1
  UNLOCK rq(0)->lock

  // X goes sleep
  X->state = TASK_UNINTERRUPTIBLE
  smp_mb();
                                // wake X
                                ttwu()
                                LOCK X->pi_lock
                                smp_mb__after_spinlock
                                if (p->state)
                                  cpu = X->cpu; // =? 1
                                smp_rmb()
  // X calls schedule()
  LOCK rq(0)->lock
  smp_mb__after_spinlock
  dequeue_task(X)
  X->on_rq = 0
                                if (p->on_rq)
                                smp_rmb();
                                if (p->on_cpu && ttwu_queue_wakelist(..)) [*]
                                smp_cond_load_acquire(&p->on_cpu, !VAL)
                                cpu = select_task_rq(X, X->wake_cpu, ...)
                                if (X->cpu != cpu)
  switch_to(Y)
  X->on_cpu = 0
  UNLOCK rq(0)->lock
```
However I'm having trouble convincing myself that's actually possible on x86_64 -- after all, every LOCK implies an smp_mb() there, so if ttwu observes ->state != RUNNING, it must also observe ->cpu != 1.
(Most of the previous ttwu() races were found on very large PowerPC)
Nevertheless, this fully explains the observed failure case.
Fix it by ordering the task_cpu(p) load after the p->on_cpu load, which is easy since nothing actually uses @cpu before this.
Fixes: c6e7bd7afaeb ("sched/core: Optimize ttwu() spinning on p->on_cpu") Reported-by: Paul E. McKenney [email protected] Tested-by: Paul E. McKenney [email protected] Signed-off-by: Peter Zijlstra (Intel) [email protected] Signed-off-by: Ingo Molnar [email protected] Link: https://lkml.kernel.org/r/[email protected]
The post function is a doozy it takes you back harkening to the days of web development. The post function remains unsolved, too. Todo: Fix the Render Deployment. Although we've already got the deployment https://the-mangrove.onrender.com/ Vercel App https://themangrove.vercel.app/..integrate the backend! And what is the difference between deployments? With the data for South Carolina, :5000/posts:1 let's try to get the :5000/posts route working. It is obvious that are we reaching the database? No. "This new file is a pretty huge pain to format but thank God the SC Sec State Office had all the info. Sorry, SC Association of Counties."
"And then in the same breath" we ⌘+F search for Curtis Loftis however it is obvious that we can see where we are with this API thing....it's not just the Curtis L appearing from a file other than the public/data/states/South Carolina/County/SouthCarolinaCountyEO.json; in that folder, well the front-end data is coming from the public/data/treasurers_11.30.22.json folder-file. We have got the posts.js file!
What is the URL provided in the global context? It is null. "posts.js?aad3:21" where we cannot destructure the property 'data' of '(intermediate value)' as it is undefined. What is the difference between .next/server/pages and ./pages? On `this.state`, the `postData: null` is set and is never found to be anything but null. We can set the post data onChange.
Todo: fix the

```jsx
<TextField
  name="message"
  variant="outlined"
  label="Message"
  fullWidth
  // value={postData.message}
  onChange={(e) => this.setPostData({ ...this.state.postData, message: e.target.value })}
/>
```

on pages/users.js, where we set the post data to state. We are able to link up with MongoDB! We are able to fetch from the database. It's the de-structuring of `(e) => this.setPostData({ ...this.state.postData, message: e.target.value })` and that `this.setState({ postData: {data: Whatisit } })` that allows us to build a POST body with the nested "call-back hell".
"When I did it on my end, there were no issues!" Yeah, don't copy the code and structure of the stuff we are doing a merge, of front-end & back-end servers, into one server. That is going to be something! :) How do you get to the end of collections in MongoDB? By right clicking? You know it's a front-end "Circuit". So the issue comes from the front-end and not the back-end. It's "bizarre" and "comes from the front-end". There's just a back-end that "needs to" receive a neatly formatted object, what exactly is causing the issue, it'll be faster to try building a post function from the ground up. I was not able to resolve the issue, don't forget that this is the source: https://www.youtube.com/watch?v=VsUzmlZfYNg&t=7633s. It's long and involves somebody's post function. If you Cannot destructure property 'data' of '(intermediate value)' as it is undefined.
have you ever seen that error before?
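For what it's worth, that message is exactly what destructuring from `undefined` produces; a minimal reproduction (hypothetical function, not from the repo):

```javascript
// Destructuring `data` from undefined throws:
// TypeError: Cannot destructure property 'data' of ... as it is undefined.
function getData(response) {
  const { data } = response;
  return data;
}
```

`getData(undefined)` throws the TypeError above, while `getData({ data: 7 })` returns 7; a common culprit is an awaited call that resolved to nothing.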
If you can figure out how this version works, you might be able to figure out how to get it to work with the other page. Since the only thing that should be different is the front-end, which is unfortunately what's causing the issues, it's possible that we are posting to the database and that the "destructuring of undefined" is just a "side effect". My IP is white-listed in MongoDB, the data is being stored in the database, and we were able to get a "third" test case on the database, so as long as we run the memories_project/server alongside Da-Repo with `npm start` and `npm run dev`, we are able to POST to the database! Now we just need to fetch the messages, images and all, from the database. Thank you Stack Overflow. In the original code there was something in place to fetch the data.
So POST is working but GET isn't it seems. It seemed insurmountable and so congratulations. Within one thread, we started out with nothing & made a massive stride.
. then we can work on a proper login/signup
. that's some more lovely data stuff hahaha
. gonna need some good security and identity verification esp for politicians signing up
. Remember how to Git LFS track files: `git lfs install`, `git lfs track "*.pdf"`. Failing to push large files · Issue #1933 · git-lfs/git-lfs git-lfs/git-lfs#1933; Git lfs - "this exceeds GitHub's file size limit of 100.00 MB" - Stack Overflow
https://stackoverflow.com/questions/33330771/git-lfs-this-exceeds-githubs-file-size-limit-of-100-00-mb (versus filter-repo?)
Git Large File Storage | Git Large File Storage (LFS) replaces large files such as audio samples, videos, datasets, and graphics with text pointers inside Git, while storing the file contents on a remote server like GitHub.com or GitHub Enterprise.
https://git-lfs.com/
You can also `git lfs untrack "*.pdf"` and then "drag the file out of and back into the folder" to tell Git that it "was never" tracking the file in the first place!😛
"Bring all the efforts into the GET"
for the love of fucking god github WHY DID YOU ADD THAT
[SQUASHED] core: Blacklist pixel system feature from Google Photos
We want to include the P21 experience flag to enable new features,
however it seems like Google Photos uses it to decide whether to use the
TPU tflite delegate. There doesn't seem to be any fallback so we need to
make sure the feature is not exposed to the app so that a normal
NNAPI/GPU delegate can be used instead.
Test: Google Photos editor with PIXEL_2021_EXPERIENCE feature in product
Signed-off-by: Kuba Wojciechowski <[email protected]>
Change-Id: I51a02f8347324c7a85f3136b802dce4cc4556ac5
commit 67eb31b3bb43d06fcc7f6fdb2f92eb486451cae6 Author: kondors1995 [email protected] Date: Thu Jun 9 17:39:25 2022 +0530
Core: Extend Pixel experience Blacklist For Google Photos
Turns out having these breaks Original Quality backups, since these indicate that the device is a Pixel 4, which in turn breaks device spoofing as the OG Pixel.
Change-Id: I336facff7b55552f094997ade337656461a0ea1d
commit 508a99cde60b73dc3f1e843d569bca31def35988 Author: ReallySnow [email protected] Date: Fri Dec 31 16:40:23 2021 +0800
base: core: Blacklist Pixel 2017 and 2018 exclusive for Google Photos
* In this way can use PixelPropsUtils to simulate the Pixel XL prop
method to use the unlimited storage space of Google Photos
* Thanks nullbytepl for the idea
Change-Id: I92d472d319373d648365c8c63e301f1a915f8de9
commit aaf07f6ccc89c2747b97bc6dc2ee4cb7bd2c6727 Author: Akash Srivastava [email protected] Date: Sat Aug 20 19:04:32 2022 +0700
core: Pixel experience Blacklist For Google Photos for Android 13
* See, in Android 13 pixel_experience_2022_midyear was added, which needs to be blacklisted as well
Change-Id: Id36d12afeda3cf6b39d01a0dbe7e3e9058659b8e
commit 9d6e5749a988c9051b1d47c11bb02daa7b1b36fd Author: spezi77 [email protected] Date: Mon Jan 31 19:17:34 2022 +0100
core: Rework the ph0t0s features blacklist
* Moving the flags to an array feels more like a blacklist :P
* Converted the flags into fully qualified package names, while at it
Signed-off-by: spezi77 <[email protected]>
Change-Id: I4b9e925fc0b8c01204564e18b9e9ee4c7d31c123
commit d7201c0cff326a6374e29aa79c6ce18828f96dc6 Author: Joey Huab [email protected] Date: Tue Feb 15 17:32:11 2022 +0900
core: Refactor Pixel features
* Magic Eraser is wonky and hard to
enable and all this mess isn't really worth
the trouble so just stick to the older setup.
* Default Pixel 5 spoof for Photos and only switch
to Pixel XL when spoof is toggled.
* We will try to bypass 2021 features and Raven
props for non-Pixel 2021 devices as apps usage
requires TPU.
* Remove P21 experience system feature check
Change-Id: Iffae2ac87ce5428daaf6711414b86212814db7f2
TES Queries for TN Discharge Opportunity (Recidiviz/recidiviz-data#16781)
This PR writes the criteria queries for the TN Upcoming Discharge Workflows opportunity. The changes are broadly:
- Some updates to `sentences_preprocessed` and `us_tn_sentences_preprocessed` to access relevant fields, like whether someone is on `lifetime_supervision`
- Turning `supervision_past_full_term_completion_date` into a critical query fragment so it can be called with different # of days in different criteria queries (e.g. 0 or 30 days)
- Creating additional criteria queries for this opportunity specifically: `not_on_life_sentence_or_lifetime_supervision.py`, `no_zero_tolerance_codes_spans.py`, `supervision_past_full_term_completion_date_or_upcoming_30_day.py`, and creating a TN-specific `us_tn_supervision_projected_completion_date_spans.py` to support this.
A few notes on validating the sandbox output in `recidiviz-123.dsharm_16285_task_eligibility_spans_us_tn.complete_full_term_discharge_from_supervision_materialized` against the snapshot query in `us_tn_compliant_reporting_logic`, where we originally calculated who was upcoming/overdue for discharge.
- The small number of people in `tes_eligible_missing_in_snapshot` are people whose supervision level is `UNSUPERVISED`, so they don't show up in the snapshot query (based on the Standards sheet) but should still be considered eligible
- Of the 185 people who are `tes_elig_inelig_in_snapshot`, ~62 are cases where the TES expiration date is sooner than the snapshot expiration date. See Recidiviz/recidiviz-data#17033, which fixes most of these 62 cases. For the remaining 123 cases, the discrepancy is caused by the fact that the snapshot query considers someone ineligible if they have zero tolerance codes after their most recent sentence effective date, while TES considers someone ineligible if they have zero tolerance codes after their most recent sentence imposed date. I don't have a strong opinion on which of these dates is more correct to use, but given that zero tolerance codes are a proxy for missing sentencing data, it feels reasonable to use `date_imposed`
- The final 2 categories (snapshot eligible and missing/ineligible in TES) are largely composed of 2 groups. A few of these are people who have been crudely excluded temporarily because they have data in the ISC and Diversion sentences tables; once Recidiviz/recidiviz-data#16709 is done, the crude hacks to remove these folks can be removed, improving this discrepancy. The rest are largely because the snapshot query is flagging several people whose latest supervision session started after their latest sentence expiration date. In all likelihood, this means these people aren't actually overdue/upcoming discharge, but are missing sentencing info. Because of how TES queries are structured, we don't include these people.
All pull requests must have at least one of the following labels applied (otherwise the PR will fail):
| Label | Description |
|---|---|
| Type: Bug | non-breaking change that fixes an issue |
| Type: Feature | non-breaking change that adds functionality |
| Type: Breaking Change | fix or feature that would cause existing functionality to not work as expected |
| Type: Non-breaking refactor | change addresses some tech debt item or prepares for a later change, but does not change functionality |
| Type: Configuration Change | adjusts configuration to achieve some end related to functionality, development, performance, or security |
| Type: Dependency Upgrade | upgrades a project dependency - these changes are not included in release notes |
Closes Recidiviz/recidiviz-data#16285
This box MUST be checked by the submitter prior to merging:
- Double- and triple-checked that there is no Personally Identifiable Information (PII) being mistakenly added in this pull request
These boxes should be checked by the submitter prior to merging:
- Tests have been written to cover the code changed/added as part of this pull request
These boxes should be checked by reviewers prior to merging:
- This pull request has a descriptive title and information useful to a reviewer
- This pull request has been moved out of a Draft state, has no "Work In Progress" label, and has assigned reviewers
- Potential security implications or infrastructural changes have been considered, if relevant
GitOrigin-RevId: 25a2f89b8f6b32882c45aee69b890448c8af6715
Adds the DNA Infuser, a genetics machine you feed corpses to infuse their DNA with yours! What could go wrong?! (#71351)
Adds the "DNA Infuser" to genetics. One person enters, a corpse is added to the machine, and you can activate the machine to "infuse" the subject with the DNA. This converts one random organ from a set into the mob-related organ.
Rats can be fed in to turn you into a rat-creature-thing!
+See better in the dark
+Can pretty much eat anything! Toxic foods, gross foods, whatever works!
+Smaller, and can climb tables
?Randomly squeaks occasionally?
-Take twice as much damage
-Vulnerable to flashes
-Gets hungry MUCH quicker.
-Yes, eat anything, but only ENJOY dairy.
Having every rat organ at once allows you to ventcrawl nude!
Carp work for a mutation as well!
+Strong jaws, that drop teeth over time!
+Space immunity! Breathe in space, unbothered by pressure or cold!
+Smaller, and can climb tables
-Can't block your jaws with a mask
-Can't take the heat, overheats easily
-Can only breathe in environments that have minimal or no oxygen
-Nomadic. If you don't enter a new zlevel for awhile, you'll start feeling anxious.
Having every carp organ at once allows you to swim through space!
Any corpses without organs to turn into yield fly organs! Fly organs now have a bonus for collecting them all, transforming you into a fly when you pass the threshold. But even without those, fly organs are technically... organs. They work like normal ones most of the time.
- Finish the infuser code
- Create a little booklet that shows what kind of shit you can turn into, hopefully I can autogenerate this based off of the organ set subtypes list
- sprite/slap a color on rat mutant organs
- Maybe make a few more organ sets
Oops, I forgot to fill this out! My hackmd is here.
https://hackmd.io/@bazelart/ByFkhuUIi
🆑 Tralezab code, Azlan + Azarak (Az gaaang) for the organs add: Added the DNA infuser to genetics! Person goes in, corpse goes in, and they combine! add: Try not to turn yourself into a fly, OK? /🆑
Co-authored-by: Fikou [email protected] Co-authored-by: MrMelbert [email protected]
omggggg prototype done! Love u babe doriaaaa
QAQQQQQ It workssss, the chat workinggggg. holy shit!!!!!
sched/fair: Fix low cpu usage with high throttling by removing expiration of cpu-local slices
commit de53fd7aedb100f03e5d2231cfce0e4993282425 upstream.
It has been observed, that highly-threaded, non-cpu-bound applications running under cpu.cfs_quota_us constraints can hit a high percentage of periods throttled while simultaneously not consuming the allocated amount of quota. This use case is typical of user-interactive non-cpu bound applications, such as those running in kubernetes or mesos when run on multiple cpu cores.
This has been root caused to cpu-local run queue being allocated per cpu bandwidth slices, and then not fully using that slice within the period. At which point the slice and quota expires. This expiration of unused slice results in applications not being able to utilize the quota for which they are allocated.
The non-expiration of per-cpu slices was recently fixed by 'commit 512ac999d275 ("sched/fair: Fix bandwidth timer clock drift condition")'. Prior to that it appears that this had been broken since at least 'commit 51f2176d74ac ("sched/fair: Fix unlocked reads of some cfs_b->quota/period")' which was introduced in v3.16-rc1 in 2014. That added the following conditional which resulted in slices never being expired.
if (cfs_rq->runtime_expires != cfs_b->runtime_expires) {
	/* extend local deadline, drift is bounded above by 2 ticks */
	cfs_rq->runtime_expires += TICK_NSEC;
Because this was broken for nearly 5 years, and has recently been fixed and is now being noticed by many users running kubernetes (kubernetes/kubernetes#67577) it is my opinion that the mechanisms around expiring runtime should be removed altogether.
This allows quota already allocated to per-cpu run-queues to live longer than the period boundary. This allows threads on runqueues that do not use much CPU to continue to use their remaining slice over a longer period of time than cpu.cfs_period_us. However, this helps prevent the above condition of hitting throttling while also not fully utilizing your cpu quota.
This theoretically allows a machine to use slightly more than its allotted quota in some periods. This overflow would be bounded by the remaining quota left on each per-cpu runqueue. This is typically no more than min_cfs_rq_runtime=1ms per cpu. For CPU-bound tasks this will change nothing, as they should theoretically fully utilize all of their quota in each period. For user-interactive tasks as described above this provides a much better user/application experience as their cpu utilization will more closely match the amount they requested when they hit throttling. This means that cpu limits no longer strictly apply per period for non-cpu bound applications, but that they are still accurate over longer timeframes.
This greatly improves performance of high-thread-count, non-cpu bound applications with low cfs_quota_us allocation on high-core-count machines. In the case of an artificial testcase (10ms/100ms of quota on 80 CPU machine), this commit resulted in almost 30x performance improvement, while still maintaining correct cpu quota restrictions. That testcase is available at https://github.com/indeedeng/fibtest.
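The throttling symptom described above can be observed directly with the cgroup v1 CPU bandwidth knobs. A minimal sketch, assuming the cpu controller is mounted at the usual /sys/fs/cgroup/cpu path and run as root; the group name fibtest is illustrative:

```shell
# Create a group with 10ms of quota per 100ms period (the testcase's ratio).
mkdir /sys/fs/cgroup/cpu/fibtest
echo 100000 > /sys/fs/cgroup/cpu/fibtest/cpu.cfs_period_us  # period: 100ms
echo 10000  > /sys/fs/cgroup/cpu/fibtest/cpu.cfs_quota_us   # quota: 10ms

# Move the current shell (and its children) into the group, then run the load.
echo $$ > /sys/fs/cgroup/cpu/fibtest/tasks

# cpu.stat reports nr_periods, nr_throttled and throttled_time. A high
# nr_throttled/nr_periods ratio combined with low actual CPU usage is the
# symptom this commit addresses.
cat /sys/fs/cgroup/cpu/fibtest/cpu.stat
```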
Fixes: 512ac999d275 ("sched/fair: Fix bandwidth timer clock drift condition") Signed-off-by: Dave Chiluk [email protected] Signed-off-by: Peter Zijlstra (Intel) [email protected] Reviewed-by: Phil Auld [email protected] Reviewed-by: Ben Segall [email protected] Cc: Ingo Molnar [email protected] Cc: John Hammond [email protected] Cc: Jonathan Corbet [email protected] Cc: Kyle Anderson [email protected] Cc: Gabriel Munos [email protected] Cc: Peter Oskolkov [email protected] Cc: Cong Wang [email protected] Cc: Brendan Gregg [email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman [email protected] Signed-off-by: Ratoriku [email protected] Signed-off-by: Peppe289 [email protected] Signed-off-by: RyuujiX [email protected]
feature #5066 Allow Translatable objects in addition to string in translated context (Lustmored)
This PR was squashed before being merged into the 4.x branch.
Allow Translatable objects in addition to string in translated context
This PR is pretty massive, yet almost all of its code changes are just enablers for features that are already in Symfony Forms (5.4+) and Symfony Translation (also 5.4+). It allows passing Translatable objects as labels and other parts.
Currently my main problem with EasyAdmin is translation extraction. I maintain a pretty large project where translation extraction is built into the workflow very tightly, and using manual extraction is unmaintainable. Fortunately most translations in the admin context have no parameters, so I can work around that by doing:
yield TextField::new('name', (string) t('Client name'));
But that's just a dirty hack and works only when the label needs no parameters to translate properly. This is why I would benefit greatly if EasyAdmin simply allowed those objects internally, and I think other users would welcome it too 😃
I have tested those changes on real life projects and they worked like a charm 😄
As stated before, most of the changes are just enablers. By just changing some signatures and adding very simple logic wherever EasyAdmin translates content, I was able to pass Translatable objects to templates and Symfony Forms, where they are handled without any additional work.
Functional backwards compatibility is kept. By that I mean - if a project uses strings in those contexts (or leaves them empty for EasyAdmin to fill with default values), no incompatibility arises. Setters accept strings as before and getters will return those strings. Also - everything will be translated, as before.
Unfortunately, the same cannot be said about class signatures. The signature changes are summarized as follows:
Final classes with signature changes:
- Config\Action (new, setLabel); only docblocks and deprecation logic
- Config\Menu*MenuItem (constructors)
- Config\MenuItem (linkTo*, section, subMenu)
- Dto\ActionDto (getLabel, setLabel and private field)
- Dto\CrudDto (getEntityLabelInSingular, setEntityLabelInSingular, getEntityLabelInPlural, setEntityLabelInPlural, setCustomPageTitle, getHelpMessage, setHelpMessage)
- Dto\FieldDto (getLabel, setLabel, getHelp, setHelp)
- Dto\FilterDto (getLabel, setLabel); only docblocks
- Dto\MenuItemDto (getLabel, setLabel)
- Field*Field (new); only docblocks
- Field\FormField (addPanel, addTab)
Non-final classes with signature changes:
- Config\Crud (setHelp)
- Field\FieldTrait (setLabel, setHelp); setLabel only in docblock
I wouldn't consider signature changes to setters in final classes a BC break, but the getters are - end-user code might expect a getter to return ?string, while this PR changes it to TranslatableInterface|string|null. Again - in the simple use case, where the user is not using Translatable objects, this assumption will still hold. But libraries, bundles and other code have no such guarantee.
Also, one non-final class and a commonly used trait have signature changes in parameter types that will raise errors when inherited.
I don't see any way we can achieve the same without breaking BC, therefore I think this change can only target 5.0. But I'd love to hear from the others :)
- get feedback
- write tests for functional changes (probably just the translating part; there is no point in testing getters and setters IMO)
- Add UPGRADE/CHANGELOG entry documenting changes
7596f24f Allow Translatable objects in addition to string in translated context
multiple updates and changes
8 files changed, 869 insertions(+), 542 deletions(-). Diary file added; not sure I read about it in the info doc, so I added a rundown of the changes to config.org
@@ -1,26 +1,30 @@ => added a preamble type drawer for aesthetics,
@@ -29,20 +33,24 @@ => toggle debugger in menu, so removed for non-use
- moved add-load-path up into package manager section, this just makes more sense
@@ -59,7 +67,7 @@ package management => bold looks better
@@ -78,6 +86,1 @@ package management => have not declared this anywhere, so thought it couldn't hurt to have it declared.
@@ -95,103 +104,135 @@ => *note: realized putting the link to the github page of the package that the headline refers to only makes sense. Will start filling in more as needed.
- last update uncentered some things; thought moving the command up the chain would help, but no
- next replaced opening browser to opening agenda on the "nav buttons"
- As well as one to add journal entry
- needed to reorder things since emacs stopped resetting the cursor after a doom sync; needed "restart" at the end for one extra tab to get there
- removed some old settings not being used
- added elfeed to the +doom/dashboard or fallback dashboard, more useful than what is already there.
- while looking into how to add to the menu, came across a deprecated command, so changed that as well
- ;;;; org-settings ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
-
git has decided to clump this all together so I'll put the divider
-
first added link under header to org-manual just makes sense
-
changed diary-file should not have been an .org file put in vc folder
-
the next several sections are the tweaking and upgrading of the journal file, everything from capture templates to agenda interaction, as well as using the customize-ui to add minor modes like auto-fill-mode. Needed to change the carry-over so it can be used to schedule things with a to-do and timestamp, all from the comfort of an org-capture-template. So it is upgraded to be more useful
-
after that is a clean up of some org keybindings, less clutter
@@ -209,20 +250,20 @@ => here we are learning, I think. Getting more familiar with the customize-ui and did more reading. I was told not to abuse the add-to-list command, and realized that they were being stored in the custom-file, to add to everything. The org-capture-templates section is getting huge, and I remember trying to put the whole thing in config; Emacs complained hard and was super slow in starting. So before adding more, removed some older ones. They're still there, just not where I see them all the time.
- so added a new one to schedule app. in the journal and be included in agenda
- next one I came across in my research, for keeping bookmarks of web sites
@@ -234,54 +275,54 @@ => this mostly clean up and aesthetics
@@ -320,8 +361,9 @@ => adding links to relevant info @@ -333,6 +375,7 @@ @@ -352,6 +395,7 @@
@@ -363,14 +407,16 @@ => update keychords add new ones remove others
@@ -433,13 +480,13 @@ => corfu revisited, updated, straightened out
@@ -485,15 +532,20 @@ => added corfu-history to the mix, to make completions more efficient. In theory, will see.
@@ -516,10 +568,13 @@ => clean up
@@ -557,6 +612,8 @@ add links @@ -580,6 +637,7 @@
@@ -784,22 +842,22 @@ change to emacs to manage this package, fixed paren.
@@ -829,10 +887,20 @@ => add function for commenting and uncommenting
@@ -916,27 +983,53 @@ => add language-tool prose-linter or grammar checker, add keybindings; could not get it integrated into flycheck.
@@ -1049,93 +1142,112 @@ => MPV work-in-progress; have been monkeying around with this for a long time, was not getting what I wanted and I thought it should be easy.... multiple revisions, still not finalized
@@ -1188,14 +1300,14 @@ Elfeed => got rid of goodies (deprecated); noticed it was the thing giving the "cl deprecated error" at startup
@@ -1229,10 +1342,27 @@ Elfeed => added two functions
- one is declutter
- this was to try and use in elfeed-show-mode ?? not sure
@@ -1589,9 +1723,26 @@ => added function to grab youtube-subtitles
Move to AGP 3.0.0 stable 😁
Finally, after many months of pain, suffering and tears we get to move to the stable version of AGP, even if this is only going to last us a short while until we need the next bleeding-edge AGP feature in the support library to make our lives gloomy yet again.
For now - rejoice!
Test: ./gradlew buildOnServer Change-Id: Ia6e1f7a8ecc4ecc4e79283d970d6defbd70828c6
[MIRROR] Changes our map_format to SIDE_MAP [MDB IGNORE] (#18070)
- Changes our map_format to SIDE_MAP (#70162)
This does nothing currently, but will allow me to test for layering issues on LIVE, rather than just in wallening. Oh, also I'm packaging in a fix to one of my macros that I wrote wrong, as a joke
removes SEE_BLACKNESS usage, because we actually cannot use it effectively
Sidemap removes the ability to control it on a plane, so it basically just means there's an uncontrollable black slate even if you have other toggles set.
This just like, removes that, since it's silly
Offsetting the vis_contents'd objects down physically, and then up visually, resolves the conflict that was going on between the text and its display.
This resolves the existing reported flickering issues
fixes plated food not appearing in world
pixel_y'd vis_contents strikes again. It's a tad hacky but we'll just use pixel_z for this
Adds wall and upper wall plane masters
We use these + the floor and space planes to build a mask of all the visible turfs. Then we take that, stick it in a plane master, and mask the emissive plane with it.
This solves the lighting fulldark screen object getting cut by emissives Shifts some planes around to match this new layering. Also ensures we only shift fullscreen objects if they don't object to it.
compresses plane master controllers
we don't use them for much right now, but we might in the future, so I'm keeping it as a convenience thing
🆑 refactor: The logic of how we well, render things has changed. Make an issue report if anything looks funky, particularly layers. PLEASE USE YOUR EYES /🆑
Co-authored-by: Mothblocks <35135081+Mothblocks@users.noreply.github.com>
- Changes our map_format to SIDE_MAP
- Modular!
Co-authored-by: LemonInTheDark [email protected] Co-authored-by: Mothblocks <35135081+Mothblocks@users.noreply.github.com> Co-authored-by: Funce [email protected]
Run uStreamer via a launcher (#89)
This changes the uStreamer installation so that the uStreamer systemd service runs a uStreamer launcher script rather than running uStreamer directly.
The launcher script, at runtime, reads a set of configuration files, translates those configuration files to uStreamer's command-line flags, and launches uStreamer with those flags.
In the previous implementation of this role, clients needed to re-run Ansible in order to change any of uStreamer's settings (e.g., frame rate, quality) because Ansible was responsible for generating the command-line string to launch uStreamer.
In this implementation, clients can change uStreamer's settings without using Ansible. Clients can simply change settings in the uStreamer launcher's config files and restart the uStreamer service. When the service restarts, the uStreamer launcher will pick up the new configuration.
This reduces the time to change video settings by 91%.
See a rough proof of concept in TinyPilot Pro: https://github.com/tiny-pilot/tinypilot-pro/pull/701
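The read-merge-launch flow described above can be sketched in Python. This is illustrative only: the real launcher is a shell script with YAML configs, while this sketch uses a simple KEY=VALUE format, and the function names are mine:

```python
from pathlib import Path


def load_configs(config_dir):
    """Merge KEY=VALUE config files in lexical order, so that later files
    (e.g. 100-tinypilot) override earlier ones (e.g. 000-defaults)."""
    merged = {}
    for path in sorted(Path(config_dir).glob("*.conf")):
        for line in path.read_text().splitlines():
            line = line.strip()
            if line and not line.startswith("#"):
                key, _, value = line.partition("=")
                merged[key.strip()] = value.strip()
    return merged


def build_flags(settings):
    """Translate merged settings into uStreamer-style command-line flags."""
    return [f"--{key.replace('_', '-')}={value}"
            for key, value in sorted(settings.items())]
```

Because the merge happens at service start rather than at Ansible run time, restarting the service is enough to pick up new settings.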
- files/launch deliberately doesn't use Ansible role variables because I want to make it easy to move this script to Debian packages in the future.
- The design of having the config files as ordered files in /opt/ustreamer-launcher/configs.d/ is to make it easier to translate the logic to Debian packages in the future.
- The uStreamer Debian package will be responsible for placing /opt/ustreamer-launcher/configs.d/000-defaults.yml.
- The TinyPilot Debian package will be responsible for placing /opt/ustreamer-launcher/configs.d/100-tinypilot.yml.
- This is similar to how nginx works with sites-enabled, so that different Debian packages can own files that affect uStreamer's behavior without colliding on trying to own the same files.
- I'm leaving in a debug play, which we normally don't do, but I think we should take advantage of it more. It makes it a lot easier to debug the role, so it's like logging that's helpful even when we're not actively debugging.
- To avoid complicated coordination of merge order between this PR and the TinyPilot PR, we're leaving in the systemd-config tags and tracking their removal in tiny-pilot/ansible-role-ustreamer#90
If we switch TinyPilot over to using this role (which should be pretty straightforward), we get a slew of benefits:
- Increases performance by 91%
  - In my tests, changing uStreamer settings through TinyPilot's video settings dialog went from 17.9s to 1.6s
- Reduces our dependency on Ansible
  - We are adding some more Ansible code here, but I wrote it so that it would be easy to translate to a uStreamer Debian package in the future.
  - We're getting rid of a big dependency in that we no longer need Ansible to change uStreamer settings, which was an annoying obstacle before.
- Decreases fragility
  - It used to be that the /opt/tinypilot-privileged/scripts/update-video-settings script pointed to a file in /opt/tinypilot-updater, which was confusing and brittle.
  - We eventually want to get to a place where the installer files don't need to hang around after installation/update is finished, and this gets us closer to that.
- Backwards-compatible
  - Code written for the previous version of the role should still work on this one, though it's easy for clients to take advantage of the new design.
- Values from /home/tinypilot/settings.yml will appear in both 000-defaults.yml and 100-tinypilot.yml
  - This isn't a problem with this role but more in how we've implemented the TinyPilot Ansible role on top of it.
  - The TinyPilot role reads from /home/tinypilot/settings.yml and uses those values in the Ansible execution of the TinyPilot and uStreamer roles.
  - That means if there are uStreamer variables in /home/tinypilot/settings.yml, they get copied to /opt/ustreamer-launcher/configs.d/000-defaults.yml whenever the uStreamer role runs.
  - The TinyPilot Debian installer also symlinks /opt/ustreamer-launcher/configs.d/100-tinypilot.yml to /home/tinypilot/settings.yml because legacy installations wrote their settings to settings.yml, and it's hard to change at this point.
  - Our design accounts for this, as 100-tinypilot.yml overrides settings in 000-defaults.yml, but it's still a bit untidy.
"10am. Yesterday night, I checked my work mail and see that I got a bite on the message I sent yesterday. So it is likely I'll get the docs for the Tenstorrent chips. Let me check out now if there was any follow through. Nope. Ah, whatever. Not being completely ignored is a win here. Maybe I'll get something later in the day. The message said he'll get me in touch with the sales team.
10:05am. Anyway, after yesterday's onslaught I deserve a little RnR today. Let me read manga, watch anime, take a bath. And then I'll work a tad on that article I mentioned. I'll send a link to the Juan Gomez Luna of that PIM course. Though I doubt it, maybe he'll find the UPMEM backend useful.
Maybe I'll rewrite the entire last section so it is more sane. Let me chill here.
10:10am. These AI companies being what they are, I won't place any stock in this contact until I get the docs from them. If I get that and the option to buy the chips, or even lease them on the dev cloud, and the programming model checks out, I will have a concrete reason to get a job. There is a plan to just save up, but who cares about that. It is not my ambition to keep doing favors for other people forever. I want to progress on my path by mastering better hardware as well as algorithms.
If Tenstorrent can act as a catalyst, then so much the better. If not, I'll spend a while writing Heaven's Key before going on the job hunt. The thought of it is stressing me out, so I do not want to think about it. I mean, I know it sounds like the same thing, but with a concrete goal to spend my money on, I'd be a lot more motivated to get a job.
Let me catch up with this, and then I will watch Oshimai, Spy Girls and whatever else I've accumulated.
11:35am. Bath done. Yeah, I think I get it now.
Remember how for years I was bad at roguelikes, but suddenly became good when I found fun in using my brain to predict the outcome of battles instead of playing them like they are supposed to be Diablo?
The same goes for job applications. I am incredibly tense and nervous during them. At some level I am aware that my mentality is fucked up, but I was in no state to mend myself. Getting back into programming after 1.5 years hiatus reminded me of how fun it is. At its root although I want money and success I am not motivated by it. I am a Singularity pursuer. As long as I have my path I can withstand any hardship. No amount of success would have helped me when I'd lost it.
The fact that I am now trying to get the Tenstorrent docs shows that my mentality is trying to mend itself and that I am regaining my pride as a programmer. Back in late 2021 I was sick of programming, so even if I had the docs I would have thrown them away. Right now, I am again on the path of making small goals and accomplishing them.
A wizard should live to learn magic. A programmer should live to learn programming.
I should live for the sake of improving my technique and grabbing new hardware. Investing? Money? That is for the uninitiated.
I did try to get a job a few times, but of course I would not get it when I was mindbroken at the time.
If you take a step back and think about it, it does not matter how this Tenstorrent thing plays out. Somebody will come out with the kind of hardware that I can use. In a year or two at most, if not today. I need to make that my motivation as I put myself out there.
Once I get the hardware, I will make a Heaven's Key game. Just something to allow me to put the trained RL agents to use in a poker game.
11:45am. A few months. Let me take a few months to wrap Heaven's Key while on the side I bolster my will.
Get better hardware! That is the only motivation that I need right now. That is the only motivation a true programmer needs.
Come to think of it, wasn't it like that when I was a kid? But back then I wanted better hardware to play games. Today I want it so I can program it. So maybe I haven't changed that much.
And that makes me glad.
Let me get breakfast.
12pm. Having breakfast and watching Oshimai, while my thoughts are on Spiral. Right now the ref counting backend should be rock solid. 2.3.7 should be the complete version of Spiral, the end of an era. That long journey that I started in late 2016 to make a language will reach its second apex here.
Living like this is not so bad.
It is as I said in the story. Getting a better PL or a tool is easier than getting a better brain. So if my goal was to gain skill, I took the right path.
The next part that I need to focus on is better hardware and better algorithms. Since I have my priorities straight, assuming I can buy the Tenstorrent cards and the programming model checks out, I should just take the straightest route to getting them. That means getting back to Z. His 2.5k offer was shit, but if it means not having to waste time job hunting I can do it.
12:10pm. It is such a slight thing. Getting a job for the sake of programming. But back in 2021 even though it was my responsibility I was sick of it. I am going to get the feelings back and just go for it.
12:15pm. Done with breakfast.
There is nothing more pointless than making money when you don't intend to spend it on anything. Work is only meaningful when it contributes to your own power.
Let me just adjust one of the tests.
// Is the heap object ref counted correctly?
inl main () =
inl ~a = true
inl b = heap true
inl q =
if a then
join
inl _ = ()
join
inl _ = ()
join b, b
else
b, b
0i32
I added an extra b here. Let me put it through that mem checking C.
https://www.cee.studio/ DTS_REPORT_UNRELEASED_MEMORY=1 DTS_REPORT_ALL_MEMORY_SPACES=1 ./a.out
Yeah, it works. Figuring out the correct ref counting scheme yesterday was quite confusing for a bit, but my thoughts aligned and it became clear. Good work! Good work, me! It is wonderful when the session ends successfully.
12:25pm. Let me watch Oshimai and then I will do the chores. Then another anime ep, and I will touch up the article I had written. Maybe I'll just wipe the last section and do it briefly.
https://buttondown.email/hillelwayne/archive/microfeatures-id-like-to-see-in-more-languages/
///
Most languages have multiline literals, but what makes the Lua version great is that the beginning and ending marks are different characters. This solves the infuriating “unnestable quotes” problem string literals have, and you don’t have to escape all your literal \s. My neovim has the string [[\]] to literally mean the string \. With escaping that’d be "\\" or something, ick.
///
Oh, this is an interesting idea. Maybe I should have done my own strings like this.
1:05pm. Done with Oshimai. Next are chores. Then I will redo the article and mail in to the PIM guy.
1:35pm. Done with chores. Let me work on the article a little. I just want rest. I do not want strenuous mental exercise like the one yesterday.
What I should be doing is building up my will.
# Why This Is So Great
It might not seem impressive at first glance, but you have to consider what Spiral's competition is...
Let me add a bit more here.
2pm. I edited it a bit. Got rid of the schizo, and put in some corpo. Let me proof it.
2:05pm. Let me mail it to Juan.
2:20pm.
///
https://github.com/mrakgr/PIM-Programming-In-Spiral-UPMEM-Demo
I wrote this in hopes of making the UPMEM device programmer's life easier. I want to show this to you because your group is the only one I know that could be interested in such a project. These backends are easy to make for me, so if you are interested in programming in a high level functional programming language instead of C, and can get me access to at least a simulator for these kinds of chips, I'd be willing to do backends for them as well.
As an academic researcher, do you think this kind of work has any value from a research perspective?
///
Also I can't forget to update the prereqs to v2.3.7.
2:25pm. Okay, I am done with the article for good. v2.3.7 generates different looking code in while loops, so instead I'll just note what version of Spiral the article was compiled in.
2:30pm. Now that I have the just the right amount of corpo in the article, I am safe. Nobody is going to go back to read that UPMEM review. Potential sponsors won't be scared by it.
2:40pm. This is good enough for now. If I hadn't gotten a reply yesterday on the Tenstorrent doc request I'd be pushing this into their Discord right now, but since things are like this I will wait.
...No contact with the sales team yet.
I'll give this thread some time to play out. Maybe I'll get something by tomorrow. If I get mysteriously ghosted by an AI company again I'll stuff the article into the Tenstorrent discord and move on. Maybe I'll post it on HN to gauge the interest there. Afterwards it will serve as ammo on my resume.
I'll wait till the reply on Tuesday. If I do not get contact by then I'll move on.
Until then, I should take it easy and write Heaven's Key.
2:50pm. What do I feel like doing now? I want to watch the first ep of Spy Classroom, but I also want to listen to the Alicesoft albums.
https://www.youtube.com/watch?v=hLUfSXR3Txc&list=PLQHmK8j6zHB7enzcqfxCdt4z5eI5PzXwp&index=2 Alice Sound Album Vol. 29 (2016)
Meh, maybe some good tracks will be on this.
Let me compromise. I'll write HK for a bit so I can listen to music. Maybe I can get some pages down.
5:55pm. 40.25k. I wrote about 1.9k. This is quite good. Nothing to complain about.
6pm. Let me close here. Time for games and anime. I'll take it easy over the weekend. There is no forcing this.
Maybe I'll get a reply by tomorrow, but I hadn't gotten anything yet from TT support.
For the next few days, let me just focus on writing."
[MIRROR] Drinking singulo ignores supermatter hallucinations and pulls nearby objects [MDB IGNORE] (#18157)
- Drinking singulo ignores supermatter hallucinations and pulls nearby objects (#71927)
Drinking a singulo will now:
- Give immunity to supermatter hallucinations
- Pulls objects to you based on the total volume in your system (20u = 1x1, 45u = 2x2, 80u = 3x3)
- Makes a burp and supermatter rays/sound when objects are pulled
The new ingredient is:
- Vodka 5u
- Wine 5u
- Liquid Dark Matter 1u (replaces Radium)
More cool effects for drinks. Singularity is all about gravity and the drink should have a theme around that.
🆑 add: Drinking singulo will now ignore supermatter hallucinations and pull objects to you balance: Change singulo drink recipe to require liquid dark matter instead of radium. /🆑
- Drinking singulo ignores supermatter hallucinations and pulls nearby objects
Co-authored-by: Tim [email protected]
[MIRROR] Fishing-themed Escape Shuttle [MDB IGNORE] (#18113)
- Fishing-themed Escape Shuttle (#71805)
I can't do much coding until you review my other PRs so I'm making a mapping PR instead. I actually made this a while ago while I was trying out strongDMM. It turns out: it's a good tool and easy to use.
This mid-tier shuttle isn't enormous and is shaped like a fish. It dedicates much of its internal space to an artificial fishing environment, plus fishing equipment storage. Plus look at that lovely wood panelling! There's not a lot of seating or a large medbay, but there's five fishing rods for people to wrestle each other over plus some aquariums to store your catches in.
It contains a variety of fishing biomes (ocean, moisture trap, hole, portal) but I couldn't fit "lava" in there even though I wanted to because it's hardcoded to only have fish in it on the mining z-level. If you're very lucky and nobody shoves you, the time between the shuttle docking at the station and arriving at Centcomm might be enough time for you to catch maybe four entire fish. Wow!
There are plenty of novelty shuttle options but I think this one is good for a personal touch of "the Captain would rather be fishing than hearing you complain about the nuclear operatives".
🆑 add: Tell your crew how much you care by ordering a shuttle where half of the seats have been removed so that you can get some angling done before you clock out. /🆑
- Fishing-themed Escape Shuttle
Co-authored-by: Jacquerel [email protected]
Knight Zombie Boss (#115)
- I have added a new boss called KnightZombie. It has special properties to block the player's attacks and can push them two blocks higher. I have also added custom drops to this mob: iron nuggets, experience bottles, and rotten flesh, each with a chance to drop at random.
- Update BloodNight-core/src/main/java/de/eldoria/bloodnight/specialmobs/mobs/zombie/KnightZombie.java
Co-authored-by: Lilly Tempest [email protected]
- Update BloodNight-core/src/main/java/de/eldoria/bloodnight/specialmobs/mobs/zombie/KnightZombie.java
Co-authored-by: Lilly Tempest [email protected]
- Optimized and changed events.
- Moved actions around to fit the corresponding events.
- Fixed player velocity so the player actually gets launched. player.getVelocity does not work how I wanted it to; I changed it to setVelocity, which functions how it should. Fixed timings for the events. Everything works as expected.
Co-authored-by: Lilly Tempest [email protected]
Hotkey tweaks (#7956)
- yeah
- changes the hotkey list
- I forgot to push this
- Revert "I forgot to push this"
This reverts commit 845878d1bda9f8be1cee214acd7329b0355a507b.
- Revert "changes the hotkey list"
This reverts commit a1174c47bdc49245e4b31ddb06f85e7fec21e51c.
-
re-adds reversions
-
Revert "yeah"
This reverts commit e61f425a1231c6049c123724bfe88a7e51b9c199.
- manually adds hotkeys instead of using .dmf
I love the quirky dream maker language
power: Introduce OnePlus 3 fingerprintd thaw hack
Taken from Oneplus 3, this hack will make fingerprintd recover from suspend quickly.
Small fixes for newer kernels since we're coming from 3.10.108..
Change-Id: I0166e82d51a07439d15b41dbc03d7e751bfa783b
Co-authored-by: Cyber Knight [email protected]
[cyberknight777: forwardport and adapt to 4.14]
Signed-off-by: Shreyansh Lodha [email protected]
Signed-off-by: Pierre2324 [email protected]
Signed-off-by: PainKiller3 [email protected]
Signed-off-by: Dhruv [email protected]
Signed-off-by: Cyber Knight [email protected]
Signed-off-by: Richard Raya [email protected]
AI actions won't unassign each other's movement targets & Mice stop being scared of people if fed cheese (#72130)
Fixes #72116
I've had a persistent issue with basic mob actions reporting this error and think I finally cracked it.
When replanning with AI_BEHAVIOR_CAN_PLAN_DURING_EXECUTION, it can run Setup on one action, leading to the plan changing, which means it runs finishCommand to cancel all other existing commands.
If you triggered a replan by setting up a movement action in the middle of another movement action, cancelling the existing action would remove the target already set by the current one.
We want actions to be able to remove their own movement target, but not if it has been changed by something else in the intervening time.
I fixed this by passing a source every time you set a movement target and adding a proc which only clears it if you are the source, but this feels kind of ugly. I couldn't think of anything better, so if you have a better idea let me know.
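The source-tagged clearing described above can be sketched outside of DM. A minimal Python sketch of the idea, with illustrative names rather than the actual proc names from the codebase:

```python
class MobAI:
    """Toy model of the fix: a movement target may only be cleared
    by the same action (source) that originally set it."""

    def __init__(self):
        self.movement_target = None
        self._target_source = None

    def set_movement_target(self, target, source):
        # Remember which action set the target so later clears can be vetted.
        self.movement_target = target
        self._target_source = source

    def clear_movement_target(self, source):
        # Only the action that set the target may clear it; a stale action
        # being cancelled mid-replan leaves the newer target alone.
        if source is not self._target_source:
            return False
        self.movement_target = None
        self._target_source = None
        return True
```

An old action being cancelled after a replan calls clear_movement_target with itself as the source and is refused, because a newer action has since become the source.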
Also while I was doing this I turned it into a feature because I'm crazy. If you feed a mouse cheese by hand it will stop being scared of humans and so will any other mice it attracts from eating more cheese. This is mostly because I think industrial mouse farming to pass cargo bounties is funny. Mice controlled by a Regal Rat lose this behaviour and forget any past loyalties they may have had.
(Demo video attached: Mouse Friend, made with Clipchamp.)
Oh, also: I removed a block from the "hunt" behaviour which cancelled it if you had another target. Everything using this already achieves the same thing simply by ordering its actions in expected priority order, and it was messing with how I expected mice to work. Now, if they happen to stop by some cheese, they will correctly stop fleeing in order to eat it before continuing to run away.
Fixes a bug I kept running into. Makes it possible to set up a mouse farm without them screaming constantly. Lets people more easily domesticate mice to support Ratatouille gameplay.
🆑 add: Mice who are fed cheese by hand will accept humans as friends, at least until reminded otherwise by their rightful lord. fix: Fixed a runtime preventing mice from acting correctly when trying to flee and also eat cheese at the same time. /🆑
[GH-681] Correct hotkey override behaviors
Summary: Makes hotkeys work again while inside the Monaco editor.
There's a long comment in shortcuts.tsx explaining why this was complicated, but to reiterate:
- <HotKeys> is normally the better component to use, so that elements mounting and unmounting can all happily coexist and override each other's shortcuts depending on where the user's focus is.
- Monaco exists outside of React's lifecycle, and therefore outside of where <HotKeys> is watching.
- <GlobalHotKeys> listens outside of React's lifecycle, and can therefore notice events that happen there. We need this for Cmd+Enter to work while the editor is open.
- <GlobalHotKeys> also has frustrating behaviors if you try to override a shortcut like you would with <HotKeys>.
- This set of changes lets us do both things (shortcuts work when Monaco is open, and shortcuts can be overridden) using an ugly hack.
- This hack is less ugly than any other solution I tried.
Fixes #681
Test Plan:
- Try shortcuts like Cmd+Enter, Cmd+K, and Cmd+E when the UI first loads. All should behave.
- Try Cmd+Enter while the Command Palette is open (and its Run button is active). It should do whatever the palette says.
- Try Cmd+Enter, Cmd+E, and others while the editor is open and you're actively editing the PxL script. These shortcuts should all continue to perform their usual actions.
- Try Cmd+Enter while both the editor and the command palette are open. The command palette should "win" since it's the one in the foreground, even if the run button is disabled (in that case, it should do nothing).
- After closing the editor with Cmd+E, shortcuts should continue to work without having to click anywhere to get them working again.
Reviewers: michelle, vihang, philkuz
Reviewed By: philkuz
Signed-off-by: Nick Lanam [email protected]
Differential Revision: https://phab.corp.pixielabs.ai/D12727
GitOrigin-RevId: 6b528498884299437f3c40aa8c62e330bdcf3b33
[MIRROR] Basic Mob Carp Bonus Part: Wall smashing [MDB IGNORE] (#17791)
- Basic Mob Carp Bonus Part: Wall smashing (#71524)
Atomisation of #71421. This moves the attack function of "environment smash" flags, which allow simple mobs to attack walls, into an element, so that we can put it on other things later. For some reason, while working on carp, I convinced myself that they had environment_smash flags, which they do not, so this actually is not relevant to carp in any way.
While implementing this I learned that the way wall smashing works is stupid: walls don't have health, so as a result, if a mob can attack walls it deletes them in a single click. If we ever decide to change this, it should be easier to do in an element than in three different attack_animal reactions.
This is especially silly with the "wumborian fugu" item, which allows any mob it is used on to instantly delete reinforced walls, and also to destroy tables if they click them like seven or eight times (because it does not increase their object damage in any way).
Eventually someone will port a basic mob which does use this behaviour (most of the mining ones, for instance) and then this will be useful. If we ever rebalance wall smashing to not instantly delete walls, then this will also be useful. Admins can apply this to a mob to allow it to delete walls, if they wanted to do that for some reason. They probably shouldn't, to be honest, at least until after we've done point two, unless they trust the player not to just use it to deconstruct the space station.
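The element approach described above (one reusable behaviour attached to many mob types instead of per-type flag checks) can be sketched in Python. Class and method names here are hypothetical illustrations, not the actual DM API:

```python
class WallSmashElement:
    """Reusable behaviour: attach it to any mob and wall attacks are
    handled in one place instead of three separate attack reactions."""

    def attach(self, mob):
        mob.wall_smasher = self

    def smash(self, wall):
        # Walls currently have no health, so one hit destroys them;
        # centralising this makes a later rebalance a one-line change.
        wall.destroyed = True


class Mob:
    def __init__(self):
        self.wall_smasher = None  # no element attached by default

    def attack(self, wall):
        # Mobs without the element simply cannot damage walls.
        if self.wall_smasher is not None:
            self.wall_smasher.smash(wall)
            return True
        return False


class Wall:
    def __init__(self):
        self.destroyed = False
```

The point of the pattern is that the smash logic lives in one object that can be attached (by content code, or by an admin) rather than being duplicated per mob type.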
🆑 refactor: Moves wall smashing out of simple mob code and into an element we can reuse for basic mobs later /🆑
-
Basic Mob Carp Bonus Part: Wall smashing
-
SR mobs
Co-authored-by: Jacquerel [email protected] Co-authored-by: tastyfish [email protected]
In the example for pyproj, it seems like you cannot initialize the projections up front, but need to do it like this. Alternatively, we could initialize the transformers, but then we'd need a pair per provider, because we need to be able to transform both to and from the given projection.
Some quick testing makes me think this is not that expensive, though I haven't profiled it yet to really check how costly it is to create a new Transformer every time.
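If profiling did show the construction to be expensive, one way to avoid paying it repeatedly is to memoize one transformer per (source, destination) pair. A self-contained sketch of that idea, using a stand-in class instead of the real pyproj.Transformer so it runs on its own:

```python
from functools import lru_cache


class Transformer:
    """Stand-in for pyproj.Transformer; constructing it is the costly step."""
    built = 0  # counts how many times construction actually ran

    def __init__(self, src_crs, dst_crs):
        type(self).built += 1
        self.src_crs, self.dst_crs = src_crs, dst_crs


@lru_cache(maxsize=None)
def get_transformer(src_crs, dst_crs):
    # One Transformer per (src, dst) pair; repeat calls reuse it, and the
    # two directions of a provider's projection each get their own entry.
    return Transformer(src_crs, dst_crs)
```

With the real library, the body would call pyproj's Transformer.from_crs instead; the cache sidesteps the "pairs per provider" bookkeeping because each direction is just a distinct cache key.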
This also means the Slovenia hack won't work for now, but that provider wasn't working anyway.
fix(error): Try to polish/clarify messages
In text communication you need to balance
- Scannability, putting the most important information upfront
- Brevity so people don't get lost in the message
- Softness to help ease people through a frustrating experience
I feel we weren't doing great on the first two points, so I tried to iterate on the messages to improve them. I hope we aren't suffering too much on the third point as a side effect.