1,703,948 events, 994,766 push events, 1,342,952 commit messages, 75,277,352 characters
Third Problem ...
This one was also pretty simple. I saw one solution that, although cool and efficient, was so unreadable it made no sense. It took me a while to figure out what the hell was happening, and in the end it felt kinda wrong, in the sense that I didn't like how it just used the idea of returning boolean "numbers", where 1 or greater is true and 0 or lower is false. Is it correct and efficient? Absolutely! But it feels kinda hacky to me. Either way, the logic was straightforward: find the maximum in the List, and if there exists an element in the List such that the sum of that element + extraCandies is still less than the max, append(False); otherwise append(True) to the Array. The main issue here was syntax, plain and simple!
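For reference, here is a minimal Python sketch of the logic described above (the function name and signature are illustrative, assuming the usual input of a candies list plus an extraCandies integer):

from typing import List

def kids_with_candies(candies: List[int], extra_candies: int) -> List[bool]:
    # Find the maximum in the list once, then check each kid against it.
    max_candies = max(candies)
    result = []
    for c in candies:
        # False only when even c + extra_candies stays below the current max.
        result.append(c + extra_candies >= max_candies)
    return result

# Example: the max is 5, so only kids who can reach 5 get True.
print(kids_with_candies([2, 3, 5, 1, 3], 3))  # [True, True, True, False, True]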
Update thesis.tex
Hi,
I hope you and your family are safe/healthy during these crazy times.
I'm sorry if I am communicating in the wrong place/area; I'm having trouble understanding GitHub (I'm sorry!). I think I emailed you a week ago at your UCI email address, but I figure that's not an email address you check regularly, and this is kind of a timely issue.
I'm having an issue that I can't figure out and I'm hoping perhaps you may help:
I need to have an Introduction before Chapter 1, but it's not supposed to be a numbered chapter, yet its pages should still be counted (and it should still appear in the Table of Contents). I tried making a new file (calling it chapter0), putting \chapter*{Introduction} at the top of that file, and then in thesis.tex I put
\include{chapter0} \addcontentsline{toc}{chapter}{Introduction} \markboth{INTRODUCTION}{}
because I saw online that this worked for some, but it led to bizarre results.
I also want to remove List of Figures and List of Tables from the Table of Contents (and from the entire dissertation) since I won't have any in my final dissertation, but I don't see where I can remove them.
Do you think you can please help me with my 2 issues? If it is relevant, I use overleaf.com for generating LaTeX.
Thank you for your time!
Best Regards, Kam
please don't yell at me, I already know. Screw C++ lol, I'm going back to C. Please hear me out.
I've never been happy with my C++ code somehow. I always thought it was my fault. I never had this problem back when I used C# as my primary language, but I assumed that was just due to me changing a lot since then and having higher standards now.
Turns out it was just C++.
I've been using C almost exclusively for several months now and I've felt something I haven't felt while programming for a while now: happiness lol.
In C I'm in control of everything; there's no hidden nightmare behaviour that happens behind the scenes, as is constantly the case in C++. Plus a lot of crap is just easier. Originally I ended up typing an entire rant here trying to explain what exactly I meant by that, which was literally like 6 or 7 paragraphs long LOL, and uhh, on second thought, nah, I don't think anyone wants to read all of that.
I guess to summarize: I could rant about C++ for literal hours if I wanted to. I don't think it's a bad language, actually. In fact, despite everything I just said, I think it can be great if used correctly. But I've found that getting to that point where C++ becomes great requires a lot of dealing with the language's crap, and this generally tends to be a very time-consuming and stressful task in and of itself. I've found that I'm actually much happier not having to do any of that at all and being able to just write code and have it work well.
Believe it or not, C lets me do just that.
C has data. It has functions. That's about it lol. And that's great!! That's all I need. C doesn't make me constantly worry about a bunch of implicit complicated rules; I just write code and it does what it looks like it does.
Not to mention, it's ridiculously portable - even more so than C++ (I wasn't able to compile HedgeLib on macOS for a while, for example, due to some weird obscure difference with how the macOS version of clang handles template specialization that I'm not sure is even standard C++) - and it compiles much faster due to just being a much simpler language.
The only "good practices" anyone expects in C are as follows:
- Make it work.
- Make it safe.
- Make it clean.
- Make it fast.
If you do those no one really cares about the rest. This is in stark contrast to C++ where every little detail of everything you do has multiple ways it could be done, one of which is always considered to be better (even if it isn't), and if you don't do it exactly that way all the time, unless you have a very good reason (read: you literally have to do it differently), you're writing code that follows "bad practices". "Why are you using new?!! You have make_unique!! What's wrong with you??!??" "BRUH why'd you use a char* here?! USE std::string !! ARE YOU CRAZY?!" Stuff like this.
Basically out of all the projects I was working on, I found that it was only HedgeLib that was bringing me distress still. I didn't want to touch it because every time I did I felt like I was wasting my time trying to fix the unfixable mess of C++. I obviously didn't want to fricking rewrite stuff again, but given all of this, the only way it was going to really get better was if I did. I experimented with a pure C89 version of HedgeLib for a month or two. And dang, unlike with all the previous "rewrites", I actually felt happy again while writing it. I don't feel like I'm writing "meh code" all the time while working on this; I feel like I'm actually writing something solid that can last. This commit is what I have so far of that C89 version.
I'm still going to keep the branch name as "HedgeLib++" for now just because I don't know what else to call it lol and intend to finally get my act together and merge this into the master branch soon anyway.
I also haven't completely lost my mind: I'm still going to use C++ where it makes sense.
HedgeEdit, for example, will still be written in C++ because it's using Qt which I believe only supports C++. It's also doing Direct3D stuff which can actually be done from C... but it's a nightmare lol and not at all worth it. So yeah I'll be sticking with C++ for HedgeEdit. It's been removed temporarily from the repo though cuz it currently doesn't compile.
I'll be keeping all of the HedgeEdit code that I feel was good lol, like how I was handling various formats/games. The rendering backend, though, really did need some serious reworking. I wasn't even doing any sorting or anything and the architecture I was using didn't really allow for it. I've been working on a rendering engine as a side-project for a month or so now to teach me various things. I plan on using this engine in HedgeEdit once it's ready, which will be soon. Most of the architecture and such is already done, and it's much much better than what I had. It's also been designed in a forward-thinking way so I shouldn't ever have to rewrite it lol. For example: it's actually entirely designed around modern APIs such as d3d12 and vulkan, since I firmly believe this is where graphics APIs will be heading in the future. In spite of this, it translates shockingly well to older APIs such as d3d11 and GL, so once this is done, I'll be able to simply write multiple backends and use whichever backend I feel is most appropriate for the target platform, all with relative ease.
I'm also finally going to be committing much more frequently from here on out. I need to yeet this mindset I have that committing to github == releasing a stable build. Obviously that's not fricking true and I know that, but it's hard to get it out of my head. Imma attempt to force myself to git push from now on so hopefully that'll work out lol.
To anyone who read all of this: Thanks a ton for dealing with my crap. I'm sorry for constantly doing this; I suppose you could say I've simply been in pursuit of happiness for a while. I feel as though I've finally actually found it, though. I plan to never rewrite this fricking library again from here on out and actually finish stuff. Hopefully I'll actually be successful this time.
[CODE] Step by step
Sick of this shit breaking. Every single step is going to be pushed. Fuck you unreal, but you're better than unity so fuck it.
Created Text For URL [www.vanguardngr.com/2020/08/moment-of-inspiration-forgiving-those-who-hurt-you-is-the-best-way-to-lead-a-fulfilling-life-video/]
did a bunch of shit and also fixed two FUCKING stupid bugs
I'm not working on kawa today
It's just not happening. My GPU decided to shit itself yesterday evening and I'm a bit tired, so I figured I won't make any progress even if I tried. So today I'm just packaging a bit, preparing for school tomorrow, and playing video games; work will resume the next time I have spare time.
"9:40am. I am up. I feel very determined right now, even though it is not necessarily focused at programming.
https://blog.polybdenum.com/2020/07/04/subtype-inference-by-example-part-1-introducing-cubiml.html
Yesterday I found this on the PL sub.
Let me go through the series this morning. After that I will start.
I am reading this for the second time. Hmmm, I have no idea what to think about this?
11am. Forget this. I can't make a decision whether going deeper into this is worthwhile based on the info given in these posts. Also I find the Rust code hard to digest. I'd have to try rewriting this in F#, in order to understand what it does.
But I did look into this in the past and did come to a conclusion that the type system based on this would need a dynamic runtime. So I'll give this a pass.
Let me start programming.
and [<ReferenceEquality>] Var = {
scope : int
constraints : Constraint Set // Must be stated up front and needs to be static in forall vars
kind : TT // Is not supposed to have metavars.
name : string // Is what gets printed.
}
and [<ReferenceEquality>] MVar = {
mutable scope : int
mutable constraints : Constraint Set // Must be stated up front and needs to be static in forall vars
kind : TT // Has metavars, and so is mutable.
}
I'll master a unification-based system and move on from that.
| RawForall(r,(_,(name,k)),b) ->
let rec typevar = function
| RawKindWildcard -> fresh_kind()
| RawKindStar -> KindType
| RawKindFun(a,b) -> KindFun(typevar a, typevar b)
let k = typevar k
let v = {scope= !scope; constraints=Set.empty; kind=k; name=name}
let body = fresh_var()
unify r s (TyForall(v,body))
term {env with ty = Map.add name (fresh_var'' v) env.ty} body b
Ah, I knew I was loose somewhere. Note the (fresh_var'' v) on that last line. Here I am passing in a metavar where a regular var should do.
let rec typevar = function
| RawKindWildcard -> fresh_kind()
| RawKindStar -> KindType
| RawKindFun(a,b) -> KindFun(typevar a, typevar b)
This is a bad idea.
let rec typevar = function
| RawKindWildcard | RawKindStar -> KindType
| RawKindFun(a,b) -> KindFun(typevar a, typevar b)
The original one was right.
unify r s (TyForall(v,body))
This unify makes no sense.
Also I really should extend foralls so they have constraints attached to them, but let me leave that for later.
| RawForall(r,(_,(name,k)),b) -> failwith "TODO"
//let k = typevar k
//let v = {scope= !scope; constraints=Set.empty; kind=k; name=name}
//let body = fresh_var()
//unify r s (TyForall(v,body)) // TODO: This is wrong.
//term {env with ty = Map.add name (TyVar v) env.ty} body b
Let me just leave it like this.
| RawTForall(r,(_,(a,k)),b) ->
let k = typevar k
let v = {scope= !scope; constraints=Set.empty; kind=k; name=a}
let x = fresh_var'' v
unify r s (TyForall(v, x))
ty {env with ty=Map.add a (TyVar v) env.ty} x b
Ah, here on the last line I am somehow doing the right thing.
let x = fresh_var'' v
But this thing is complete nonsense.
| RawTForall(r,(_,(a,k)),b) ->
let k = typevar k
let v = {scope= !scope; constraints=Set.empty; kind=k; name=a}
let x = fresh_var ()
unify r s (TyForall(v, x))
ty {env with ty=Map.add a (TyVar v) env.ty} x b
I am doing things incorrectly here. Foralls need annotations.
| TyForall(a,b), TyForall(a',b') | TyInl(a,b), TyInl(a',b') -> loop (forall_subst_single (a,b),b')
This only makes sense when foralls are fully inferred.
| RawTForall(r,(_,(a,k)),b) ->
let k = typevar k
let v = {scope= !scope; constraints=Set.empty; kind=k; name=a}
let x = fresh_var ()
ty {env with ty=Map.add a (TyVar v) env.ty} x b
unify r s (TyForall(v, x))
This makes me uncomfortable. Let me comment it out.
| RawTForall(r,(_,(a,k)),b) -> failwith "Compiler error: Needs special handling"
//let k = typevar k
//let v = {scope= !scope; constraints=Set.empty; kind=k; name=a}
//let x = fresh_var ()
//ty {env with ty=Map.add a (TyVar v) env.ty} x b
//unify r s (TyForall(v, x))
Let me do it like this for now.
// Note: Unifying these two only makes sense if they are fully inferred already.
| TyForall(a,b), TyForall(a',b') | TyInl(a,b), TyInl(a',b') -> loop (forall_subst_single (a,b),b')
I am also substituting the wrong side. It matters which one I am doing it to.
// Note: Unifying these two only makes sense if they are fully inferred already.
| TyForall(a,b), TyForall(a',b') | TyInl(a,b), TyInl(a',b') -> loop (b, forall_subst_single (a',b'))
This will be a tad more robust, like when I am just using a failwith in a prototype instance.
11:40am.
let ho_make (i : int) (l : Var list) =
let h = TyHigherOrder(i,List.foldBack (fun (x : Var) s -> KindFun(x.kind,s)) l KindType)
let l' = List.map (fun (x : Var) -> x, fresh_subst_var x.constraints x.kind) l
List.fold (fun s (_,x) -> match tt s with KindFun(_,k) -> TyApply(s,x,k) | _ -> failwith "impossible") h l', l'
No, wait wait...
// Note: Unifying these two only makes sense if they are fully inferred already.
| TyForall(a,b), TyForall(a',b') | TyInl(a,b), TyInl(a',b') -> loop (b, forall_subst_single (a',b'))
11:45am.
| TyForall(a,b), TyForall(a',b') | TyInl(a,b), TyInl(a',b') ->
let x = forall_subst_single (a',b')
loop (b, x)
Agh, damn.
I am going to have to make sure that the names are the same before I start. It won't work otherwise.
11:50am.
| TyForall(a,b), TyForall(a',b') | TyInl(a,b), TyInl(a',b') ->
if a = a' then loop (b, forall_subst_single (a',b'))
else raise (TypeErrorException [r,ForallMetavarScopeError])
This makes no sense. I am not comparing by name here anymore.
...Ok, I have it. I will assume that expected is fully inferred in this case. Then...
// Note: Unifying these two only makes sense if the expected is fully inferred already.
| TyForall(a,b), TyForall(a',b') | TyInl(a,b), TyInl(a',b') -> loop (b, forall_subst_single (a,b'))
Yeah, this is the right way to do it. Previously, the unification would unify to unbound vars. Now things will work properly. The left side can afford to have metavars and have them inferred correctly. Ok.
12pm. Ok, good.
let on_fail () = errors.Add(r,ConstraintError con); fresh_var' (fresh_kind())
It bothers me how I am making a fresh kind here.
| (TyMetavar(a',link), b | b, TyMetavar(a',link)) ->
validate_unification a' b
unify_kind a'.kind (tt b)
Set.iter (fun con ->
let on_succ () = ()
let on_fail () = raise (TypeErrorException [r,ConstraintError(con, b)])
constraint_process b con on_succ on_fail
) a'.constraints
link := Some b
Let me just do this. No need to give all the erroneous constraints.
| (TyMetavar(a',link), b | b, TyMetavar(a',link)) ->
validate_unification a' b
unify_kind a'.kind (tt b)
let constraint_errors = ResizeArray()
Set.iter (fun con ->
let on_succ () = ()
let on_fail () = constraint_errors.Add(r,ConstraintError(con, b))
constraint_process b con on_succ on_fail
) a'.constraints
if 0 < constraint_errors.Count then raise (TypeErrorException (Seq.toList constraint_errors))
else link := Some b
...Actually, I want to. Then let me do this. I like this much more than creating a variable with an arbitrary kind.
12:20pm.
...Actually, rather than using a resizable array which will guarantee a heap object being created every time, let me use a list instead.
| (TyMetavar(a',link), b | b, TyMetavar(a',link)) ->
validate_unification a' b
unify_kind a'.kind (tt b)
Set.fold (fun ers con ->
let on_succ () = ers
let on_fail () = (r,ConstraintError(con, b)) :: ers
constraint_process b con on_succ on_fail
) [] a'.constraints
|> function
| [] -> link := Some b
| constraint_errors -> raise (TypeErrorException constraint_errors)
I like this a lot. Most of the time that empty list will be a null.
12:30pm. I've been taking a breather since the last entry.
let forall_subst_single (a,b) =
subst [a, fresh_var'' {scope= !scope; constraints=a.constraints; kind=a.kind}] b
Ah, no wait, not this. I need the regular subst.
// Note: Unifying these two only makes sense if the expected is fully inferred already.
| TyForall(a,b), TyForall(a',b') | TyInl(a,b), TyInl(a',b') -> loop (b, subst [a,TyVar a'] b')
How many mistakes have I made in this single line so far? Way too many.
// Note: Unifying these two only makes sense if the expected is fully inferred already.
| TyForall(a,b), TyForall(a',b') | TyInl(a,b), TyInl(a',b') -> loop (b, subst [a',TyVar a] b')
Er, this way.
This is why you want to take it slowly and let ideas come to you.
12:40pm. At any rate, all is fine at the moment. The feedback I am getting right now is good.
12:45pm. Yeah, I think that at this point I am getting ready to finally tackle those let statements and finish the typechecker.
Let me have breakfast, and I will start work on the match statement case."
mm/page_alloc: use ac->high_zoneidx for classzone_idx
Patch series "integrate classzone_idx and high_zoneidx", v5.
This patchset is a follow-up to the problem reported and discussed two years ago [1, 2]. The problem this patchset solves is related to the classzone_idx on NUMA systems. It causes a problem when the lowmem reserve protection exists for some zones on a node that do not exist on other nodes.
This problem was reported two years ago, and, at that time, the solution got general agreement [2]. But it was not upstreamed.
This patch (of 2):
Currently, we use classzone_idx to calculate lowmem reserve protection for an allocation request. This classzone_idx causes a problem on NUMA systems when the lowmem reserve protection exists for some zones on a node that do not exist on other nodes.
Before further explanation, I should first clarify how to compute the classzone_idx and the high_zoneidx.
- ac->high_zoneidx is computed via the arcane gfp_zone(gfp_mask) and represents the index of the highest zone the allocation can use
- classzone_idx was supposed to be the index of the highest zone on the local node that the allocation can use, that is actually available in the system
Think about the following example. Node 0 has 4 populated zones: DMA/DMA32/NORMAL/MOVABLE. Node 1 has 1 populated zone: NORMAL. Some zones, such as MOVABLE, don't exist on node 1, and this makes the following difference.
Assume that there is an allocation request whose gfp_zone(gfp_mask) is the MOVABLE zone. Then its high_zoneidx is 3. If this allocation is initiated on node 0, its classzone_idx is 3, since the actually available/usable zone on the local node (node 0) is MOVABLE. If this allocation is initiated on node 1, its classzone_idx is 2, since the actually available/usable zone on the local node (node 1) is NORMAL.
You can see that the classzone_idx of these allocation requests differs according to their starting node, even though their high_zoneidx is the same.
Think more about these two allocation requests. If they are processed locally, there is no problem. However, if the allocation initiated on node 1 is processed remotely, in this example at the NORMAL zone on node 0 due to memory shortage, a problem occurs. Their different classzone_idx values lead to different lowmem reserves and thus different min watermarks. See the following example.
root@ubuntu:/sys/devices/system/memory# cat /proc/zoneinfo
Node 0, zone      DMA
  per-node stats
  ...
  pages free     3965
        min      5
        low      8
        high     11
        spanned  4095
        present  3998
        managed  3977
        protection: (0, 2961, 4928, 5440)
  ...
Node 0, zone    DMA32
  pages free     757955
        min      1129
        low      1887
        high     2645
        spanned  1044480
        present  782303
        managed  758116
        protection: (0, 0, 1967, 2479)
  ...
Node 0, zone   Normal
  pages free     459806
        min      750
        low      1253
        high     1756
        spanned  524288
        present  524288
        managed  503620
        protection: (0, 0, 0, 4096)
  ...
Node 0, zone  Movable
  pages free     130759
        min      195
        low      326
        high     457
        spanned  1966079
        present  131072
        managed  131072
        protection: (0, 0, 0, 0)
  ...
Node 1, zone      DMA
  pages free     0
        min      0
        low      0
        high     0
        spanned  0
        present  0
        managed  0
        protection: (0, 0, 1006, 1006)
Node 1, zone    DMA32
  pages free     0
        min      0
        low      0
        high     0
        spanned  0
        present  0
        managed  0
        protection: (0, 0, 1006, 1006)
Node 1, zone   Normal
  per-node stats
  ...
  pages free     233277
        min      383
        low      640
        high     897
        spanned  262144
        present  262144
        managed  257744
        protection: (0, 0, 0, 0)
  ...
Node 1, zone  Movable
  pages free     0
        min      0
        low      0
        high     0
        spanned  262144
        present  0
        managed  0
        protection: (0, 0, 0, 0)
- static min watermark for the NORMAL zone on node 0 is 750.
- lowmem reserve for the request with classzone_idx 3 at the NORMAL on node 0 is 4096.
- lowmem reserve for the request with classzone_idx 2 at the NORMAL on node 0 is 0.
So, the overall min watermark is:
allocation initiated on node 0 (classzone_idx 3): 750 + 4096 = 4846
allocation initiated on node 1 (classzone_idx 2): 750 + 0 = 750
An allocation initiated on node 1 will take some precedence over an allocation initiated on node 0 because the min watermark of the former allocation is lower. So, an allocation initiated on node 1 could succeed on node 0 when an allocation initiated on node 0 could not, and this could cause too many numa_miss allocations. Performance could then be degraded.
Recently, there was a regression report about this problem against the CMA patches, since CMA memory is placed in ZONE_MOVABLE by those patches. I checked that the problem disappears with this fix, which uses high_zoneidx for classzone_idx.
http://lkml.kernel.org/r/20180102063528.GG30397@yexl-desktop
Using high_zoneidx for classzone_idx is a more consistent approach than the previous one because the system's memory layout doesn't affect it at all. With this patch, both classzone_idx values in the above example will be 3, so both requests will have the same min watermark.
allocation initiated on node 0: 750 + 4096 = 4846
allocation initiated on node 1: 750 + 4096 = 4846
One could wonder whether there is a side effect that an allocation initiated on node 1 will use a higher bar when the allocation is handled locally, since classzone_idx could be higher than before. It will not happen, because a zone without managed pages doesn't contribute to lowmem_reserve at all.
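For illustration, here is a small Python sketch of the watermark arithmetic described above. The constants and helper are made up for this example and only mirror the numbers from the zoneinfo dump; this is not kernel code.

# Node 0's NORMAL zone from the zoneinfo dump: min watermark 750 and
# protection (lowmem_reserve) of (0, 0, 0, 4096), indexed by classzone_idx.
NORMAL_MIN = 750
NORMAL_PROTECTION = (0, 0, 0, 4096)

def effective_min_watermark(classzone_idx: int) -> int:
    # The bar a request must clear = zone min + lowmem reserve for the
    # requester's classzone_idx.
    return NORMAL_MIN + NORMAL_PROTECTION[classzone_idx]

# Before the patch, classzone_idx depends on the starting node:
print(effective_min_watermark(3))  # request from node 0: 750 + 4096 = 4846
print(effective_min_watermark(2))  # request from node 1: 750 + 0 = 750

# After the patch, both requests use high_zoneidx (3) and get 4846.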
Reported-by: Ye Xiaolong [email protected]
Signed-off-by: Joonsoo Kim [email protected]
Signed-off-by: Andrew Morton [email protected]
Tested-by: Ye Xiaolong [email protected]
Reviewed-by: Baoquan He [email protected]
Acked-by: Vlastimil Babka [email protected]
Acked-by: David Rientjes [email protected]
Cc: Johannes Weiner [email protected]
Cc: Michal Hocko [email protected]
Cc: Minchan Kim [email protected]
Cc: Mel Gorman [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds [email protected]
Signed-off-by: celtare21 [email protected]
projectloadtask: move the project to the main thread, not the task
How did this get overlooked for so damn long holy shit dude
AssimpImporter: port internals to the new MaterialData.
At first I thought I would add PBR support as well, but GOD DAMN there's so many bugs that I'm just chucking all that effort into a write-only branch with no intention of ever looking at it again. FFS.
This thus exposes only the attributes that were available before, which means we at least get to keep the same bug zoo. Tests not updated yet to verify we're indeed the same level of shitty. Well... for some reason we now correctly get texture coordinate sets imported. Not sure what was up before. Also since we now unconditionally import all textures by iterating the material instead of a bunch of whitelisted keys, the order is different.
Allow teleportation into unteleportable spots in wizard mode
This was something that really annoyed me. The wizard-mode character should be able to do things such as teleporting directly into the Wizard's Tower without having to go through the magic portal, or have some way of entering solid rock without turning into a xorn.
There is now a prompt to confirm that you want to teleport somewhere you couldn't normally. If you refuse you'll still get "Sorry..." like normal; otherwise there's no restriction and you'll teleport there.
Note that you can teleport on top of monsters in wizard mode with this now, because this doesn't cause any bugs, it just displaces the monster to the closest available square.
Holy fucking shit I hate python duck typing so much
"2:15pm. One more chapter and I will start.
2:20pm. Let me start. Today it is especially difficult for me to resume because the cute ancestor novel has been boring me to death for a long time now, and I am stopping just as it is getting interesting.
Guh.
2:25pm. Focus me, focus. Let statements should not be this big of a deal, but for some reason that is how things turned out.
//| RawMatch of Range * body: RawExpr * (Pattern * RawExpr) list
//| RawRecBlock of Range * ((Range * VarString) * RawExpr) list * on_succ: RawExpr
2:45pm. Had to take a short break.
| RawMatch(r,body,l) ->
let rec foralls_get = function
| RawForall(_,a,b) -> let a', b = foralls_get b in a :: a', b
| b -> [], b
I managed to get this far.
let vars, body = foralls_get body
Now I have this.
What about the patterns? Ah, unlike the other two, it returns an Env. Ok.
2:50pm.
| RawMatch(r,body,l) ->
let rec foralls_get = function
| RawForall(_,a,b) -> let a', b = foralls_get b in a :: a', b
| b -> [], b
let vars, body = foralls_get body
let body_var = fresh_var()
let x = List.foldBack (fun x s -> TyForall() ) vars body_var
()
I am doing it. I may be getting confused, but I am doing it.
2:55pm. Ok, I need to be reasonable here. I need to start by TCing what relies on nothing else - the patterns.
3:05pm.
| RawMatch(_,body,l) ->
let pat = fresh_var()
let l = List.map (fun (a,b) -> pattern env pat a, b) l
This is right.
let rec foralls_get = function
| RawForall(_,a,b) -> let a', b = foralls_get b in a :: a', b
| b -> [], b
let vars, body = foralls_get body
let body_var = fresh_var()
let x = List.foldBack (fun x s -> TyForall() ) vars body_var
Now what do I do about this?
Let me get rid of this stuff. I think the code that I had for RawForall is the right fit here instead. foralls_get I will need for the recursive case, but not here.
3:15pm.
| RawMatch(_,body,l) ->
let pat = fresh_var()
let l = List.map (fun (a,b) -> pattern env pat a, b) l
let raw_forall s (r, a, k, b) =
let k = typevar k
let v = {scope= !scope; constraints=Set.empty; kind=k; name=a}
let x = fresh_var ()
unify r s (TyForall(v, x))
ty {env with ty=Map.add a (TyVar v) env.ty} x b
()
Am I dying here? Yes, I am. Forget this.
Let me match the case that needs to be let generalized and I'll separate it from the case that does not.
3:20pm.
| RawMatch(_,body,l) ->
match l, body with
| [PatVar(_,name), on_succ], (RawForall _ | RawFun _) ->
First, since this might be let generalized, let me increment the scope.
let l = List.map (fun (a,b) -> pattern env pat a, b) l
let raw_forall s (r, a, k, b) =
let k = typevar k
let v = {scope= !scope; constraints=Set.empty; kind=k; name=a}
let x = fresh_var ()
unify r s (TyForall(v, x))
ty {env with ty=Map.add a (TyVar v) env.ty} x b
Let me back this up here.
3:30pm. Ok, I need to step away from the screen for some time. I am not focusing on this at all. I need to figure out how exactly I want to deal with foralls.
3:45pm. I am back just for a bit. I realized I missed something important.
| TyForall(a,b), TyForall(a',b') | TyInl(a,b), TyInl(a',b') -> loop (b, subst [a',TyVar a] b')
It is fucking here again. I need to unify the kinds of a' and a now. I forgot that since I am substituting on different sides, this is necessary.
// Note: Unifying these two only makes sense if the expected is fully inferred already.
| TyForall(a,b), TyForall(a',b') | TyInl(a,b), TyInl(a',b') when a.kind = a'.kind -> loop (b, subst [a',TyVar a] b')
I do not need to do an actual unification, since Vars do not have metavars in them. Just an equality check will suffice.
I actually had this check at the start, but since I was just substituting vars for metavars, I convinced myself that it was not necessary. And once I changed to this, I forgot to think about it.
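To restate the final rule in isolation, here is a hedged Python sketch (not the F# from this log; the type representation and names are invented for illustration): check that the two binders have equal kinds, alpha-rename the expected side's binder to the actual side's binder via substitution, and then unify the bodies.

from dataclasses import dataclass

@dataclass(frozen=True)
class Var:            # a rigid type variable with a kind
    name: str
    kind: str

@dataclass(frozen=True)
class TyVar:          # a type that is just a variable
    var: Var

@dataclass(frozen=True)
class TyFun:          # a -> b; stands in for all the structural cases
    a: object
    b: object

@dataclass(frozen=True)
class TyForall:       # forall var. body
    var: Var
    body: object

def subst(t, old, new):
    # Replace occurrences of TyVar(old) in t with the type `new`.
    if isinstance(t, TyVar):
        return new if t.var == old else t
    if isinstance(t, TyFun):
        return TyFun(subst(t.a, old, new), subst(t.b, old, new))
    if isinstance(t, TyForall):
        # binders are assumed unique in this sketch, so no capture handling
        return TyForall(t.var, subst(t.body, old, new))
    return t

def unify(actual, expected):
    # `expected` is assumed fully inferred; `actual` may still need solving.
    if isinstance(actual, TyForall) and isinstance(expected, TyForall):
        if actual.var.kind != expected.var.kind:   # plain equality check suffices
            raise TypeError("forall binders have different kinds")
        # alpha-rename the expected binder to the actual one, then unify bodies
        return unify(actual.body,
                     subst(expected.body, expected.var, TyVar(actual.var)))
    if actual != expected:                         # placeholder for the other cases
        raise TypeError(f"cannot unify {actual!r} and {expected!r}")

# e.g. forall a. a -> a unifies against forall b. b -> b:
a, b = Var("a", "*"), Var("b", "*")
unify(TyForall(a, TyFun(TyVar(a), TyVar(a))),
      TyForall(b, TyFun(TyVar(b), TyVar(b))))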
3:50pm. Let me get back to elaborating how I should take care of the foralls. I just need to warm up to this first in order to build the motivation.
Yes, I had forever to think about this, but during that time I only generated a lot of ideas. I did not actually settle on an approach. That is what I have to do now before I start.
4:45pm. Had some time to meditate on this. Let me do it.
let rec foralls_get = function
| RawForall(_,a,b) -> let a', b = foralls_get b in a :: a', b
| b -> [], b
Let me factor this out to the top level. Then.
name : string // Is what gets printed.
Even though this is a forall var, let me make it mutable. Since this field does not affect compilation, let me just do it.
mutable name : string // Is what gets printed.
This will come into play later.
Now...
4:55pm.
| RawMatch(r,body,l) ->
match l, body with
| [PatVar(_,name), on_succ], (RawForall _ | RawFun _) ->
incr scope
let vars,body = foralls_get body
let typevar_to_var (x : TypeVar) : Var = failwith ""
let generalize vars body_var = failwith ""
let vars = List.map typevar_to_var vars
let body_var = fresh_var()
term {env with ty = List.fold (fun s x -> Map.add x.name (TyVar x) s) env.ty vars} body_var body
unify r s (generalize vars body_var)
decr scope
Yeah, this is basically it. But I also need to fill in those two empties.
let typevar_to_var ((r,(name,kind)) : TypeVar) : Var = {scope= !scope; constraints=Set.empty; kind=typevar kind; name=name}
The first one is easy. Later on I will have to deal with constraints, but for now all that is empty.
5:05pm.
let generalize (forall_vars : Var list) (body : T) =
let scope = !scope
Ok, how do I do this.
...Let me put all the metavars into a resizeable array.
let rec replace_metavars x =
let f = replace_metavars
match x with
| TyB | TyPrim _ | TySymbol _ | TyHigherOrder _ | TyConstraint _ -> x
| TyPair(a,b) -> TyPair(f a, f b)
| TyRecord l -> TyRecord(Map.map (fun _ -> f) l)
| TyFun(a,b) -> TyFun(f a, f b)
| TyArray a -> TyArray(f a)
| TyApply(a,b,c) -> TyApply(f a, f b, c)
| TyInl
| TyForall of Var * T
| TyMetavar of MVar * T option ref
| TyVar of Var
No I am doing it wrong.
5:25pm.
let generalize (forall_vars : Var list) (body : T) =
let scope = !scope
let new_foralls = ResizeArray()
let rec replace_metavars x =
let f = replace_metavars
match x with
| TyMetavar(_,{contents=Some x} & link) -> go x link f
| TyMetavar(x, link) when scope = x.scope ->
let x = {scope=x.scope; constraints=x.constraints; kind=kind_force x.kind; name=null}
new_foralls.Add(x)
let v = TyVar x
link := Some v
v
| TyMetavar _ | TyConstraint _ | TyVar _ | TyHigherOrder _ | TyB | TyPrim _ | TySymbol _ as x -> x
| TyPair(a,b) -> TyPair(f a, f b)
| TyRecord l -> TyRecord(Map.map (fun _ -> f) l)
| TyFun(a,b) -> TyFun(f a, f b)
| TyForall(a,b) -> TyForall(a,f b)
| TyArray a -> TyArray(f a)
| TyApply(a,b,c) -> TyApply(f a, f b, c)
| TyInl(a,b) -> TyInl(a,f b)
Actually, let me just go with this. Though what I am doing is redundant in some ways, as I am duplicating the functionality of term_subst, I should embrace the duplication in the name of efficiency.
5:30pm.
let generalize (forall_vars : Var list) (body : T) =
let scope = !scope
let generalized_metavars = ResizeArray()
let rec replace_metavars x =
let f = replace_metavars
match x with
| TyMetavar(_,{contents=Some x} & link) -> go x link f
| TyMetavar(x, link) when scope = x.scope ->
let x = {scope=x.scope; constraints=x.constraints; kind=kind_force x.kind; name=null}
generalized_metavars.Add(x)
let v = TyVar x
link := Some v
v
| TyMetavar _ | TyConstraint _ | TyVar _ | TyHigherOrder _ | TyB | TyPrim _ | TySymbol _ as x -> x
| TyPair(a,b) -> TyPair(f a, f b)
| TyRecord l -> TyRecord(Map.map (fun _ -> f) l)
| TyFun(a,b) -> TyFun(f a, f b)
| TyForall(a,b) -> TyForall(a,f b)
| TyArray a -> TyArray(f a)
| TyApply(a,b,c) -> TyApply(f a, f b, c)
| TyInl(a,b) -> TyInl(a,f b)
let f x s = TyForall(x,s)
Seq.foldBack f generalized_metavars body
|> List.foldBack f forall_vars
Beautiful. Now let me implement kind_force.
5:30pm.
let rec kind_force = function
| KindMetavar ({contents'=Some x} & link) -> go' x link kind_subst
| KindMetavar link -> let x = KindType in link.contents' <- Some x; x
| KindConstraint | KindType as x -> x
| KindFun(a,b) -> KindFun(kind_subst a,kind_subst b)
Let me go with this.
...Actually, it does not seem like I really have to rebuild the kinds here. I can just relink them normally, but then I would not be able to eliminate the leftover metavars. Forget that. This is fine. Most of the time, the kinds will be very small anyway.
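As a standalone illustration of the generalization step itself, here is a hedged Python sketch (again, not the F# above; the representation is invented): walk the inferred type, turn every unsolved metavariable created at the current scope into a rigid variable, and wrap the result in foralls over those variables.

from dataclasses import dataclass
from itertools import count
from typing import Optional

@dataclass(eq=False)
class Metavar:                  # identity-based, like a mutable reference cell
    scope: int
    link: Optional[object] = None   # set once the metavar has been solved

@dataclass(frozen=True)
class Var:
    id: int

@dataclass(frozen=True)
class TyVar:
    var: Var

@dataclass(frozen=True)
class TyFun:                    # stands in for all the structural cases
    a: object
    b: object

@dataclass(frozen=True)
class TyForall:
    var: Var
    body: object

@dataclass(eq=False)
class TyMetavar:
    mv: Metavar

_fresh = count()

def generalize(current_scope: int, body):
    generalized = []            # rigid vars created for generalized metavars
    seen = {}                   # metavar -> TyVar, so repeated occurrences agree

    def walk(t):
        if isinstance(t, TyMetavar):
            if t.mv.link is not None:           # already solved: follow the link
                return walk(t.mv.link)
            if t.mv.scope == current_scope:     # created while checking this let
                if t.mv not in seen:
                    v = Var(next(_fresh))
                    generalized.append(v)
                    seen[t.mv] = TyVar(v)
                return seen[t.mv]
            return t                            # belongs to an outer scope: leave it
        if isinstance(t, TyFun):
            return TyFun(walk(t.a), walk(t.b))
        if isinstance(t, TyForall):
            return TyForall(t.var, walk(t.body))
        return t

    out = walk(body)
    for v in reversed(generalized):             # fold the quantifiers back on
        out = TyForall(v, out)
    return out

# e.g. a metavar created at scope 1 for something like `let id = fun x -> x`:
m = Metavar(scope=1)
print(generalize(1, TyFun(TyMetavar(m), TyMetavar(m))))
# TyForall(var=Var(id=0), body=TyFun(a=TyVar(var=Var(id=0)), b=TyVar(var=Var(id=0))))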
| RawMatch(r,body,l) ->
match l, body with
| [PatVar(_,name), on_succ], (RawForall _ | RawFun _) ->
incr scope
let vars,body = foralls_get body
let vars = List.map typevar_to_var vars
let body_var = fresh_var()
term {env with ty = List.fold (fun s x -> Map.add x.name (TyVar x) s) env.ty vars} body_var body
decr scope
term {env with term = Map.add name (generalize vars body_var) env.term } s on_succ
Hmmmm...Yeah, this is great.
Now, let me take care of the regular case.
| RawMatch(r,(RawForall _ | RawFun _) & body,[PatVar(_,name), on_succ]) ->
incr scope
let vars,body = foralls_get body
let vars = List.map typevar_to_var vars
let body_var = fresh_var()
term {env with ty = List.fold (fun s x -> Map.add x.name (TyVar x) s) env.ty vars} body_var body
decr scope
term {env with term = Map.add name (generalize vars body_var) env.term } s on_succ
| RawMatch(r,body,l) ->
Let me organize it like so.
6pm.
| RawMatch(_,body,l) ->
let body_var = fresh_var()
let l = List.map (fun (a,on_succ) -> pattern env body_var a, on_succ) l
f body_var body
List.iter (fun (env,on_succ) -> term env s on_succ) l
Here is the code for the general match case.
Now what is next?
//| RawRecBlock of Range * ((Range * VarString) * RawExpr) list * on_succ: RawExpr
First off, let me take care of the case where none of the functions in the block have foralls.
6:20pm.
term {env with term = List.fold (fun term ((a,v), _) -> Map.add a (generalize [] v) term) env.term l} s on_succ
Ah, crap. I did not realize I have a serious problem with generalize in recursive blocks.
| TyMetavar(x, link) when scope = x.scope ->
let x = {scope=x.scope; constraints=x.constraints; kind=kind_force x.kind; name=null}
generalized_metavars.Add(x)
let v = TyVar x
link := Some v
v
I think that doing it like this here was a mistake after all.
| TyMetavar(x, link) when scope = x.scope ->
let x = {scope=x.scope; constraints=x.constraints; kind=kind_force x.kind; name=null}
generalized_metavars.Add(x)
TyVar x
Let me do it like this.
6:25pm.
| RawRecBlock(_,l,on_succ) ->
incr scope
let l, has_forall =
List.mapFold (fun has_forall ((_,a),b) ->
let vars, body = foralls_get b
(a, vars, body), has_forall || (List.isEmpty vars = false)
) false l
if has_forall then failwith "TODO"
else
let l, m = List.mapFold (fun term (a,_,b) ->
let v = fresh_var()
((a, v), b), Map.add a v term) env.term l
let _ =
let env = {env with term=m}
List.iter (fun ((_,v),b) -> term env v b) l
term {env with term = List.fold (fun term ((a,v), _) -> Map.add a (generalize [] v) term) env.term l} s on_succ
decr scope
Now I only need to take care of the has_forall case.
What makes this one troublesome is that I need to grab the annotations and typecheck them separately.
6:30pm. I am thinking about this. Maybe I will leave this for tomorrow.
...Actually, let me sketch it out. I'll bring in annotations_get, but I won't implement it just yet.
6:45pm. Let me stop here. It is lunch time."
Creates Dynamic Meal Plan URL
I wanted to make it easier to add new meal plans and view past meal plans without having to change a hardcoded value to pick which plan to show.
This commit fixes the issue by using Next.js's dynamic routing to create a page that loads up the desired year plan using the year, month, and date params from the route.
A simple index page is also added that reads through the Meal Plans directory to automatically create links for the existing meal plans.
Updates old meal plans that still used meal as a key instead of the meal name (i.e. breakfast, lunch, dinner).
A whole bunch of updates:
- Added Taiping Kingdom, a formable empire for Xinjiao
- Added The God Worshipping Society, religious head title for Xinjiao
- Added new names for Xigaoshan culture
- Added custom CoA for Liu dynasty in Lhasa
- Added some Xigaoshan localizations to Tibet, still work in progress
- Added Red Turbans holy order for Xinjiao
- Added Fifth Banner, a Xinjiao merc
- Changed the CoAs of Xinjiao to the Chinese style, ALSO UPDATED IT FOR CHRISTIANS SO KEEP THAT
- And some other stuff you'll see
Redo the Rothenburg House with better UV
After texturing the Rothenburg House, I found a litany of errors and issues. Skewed textures, reversed textures, upside down textures. It just looked terrible!
So I went back to the drawing board on the Rothenburg House. I raised the foundation, removed the windows, and re-did the UV Mapping so that it doesn't look like an absolute mess. To replace the windows, I added some flat window sprites - those ended up looking pretty good.
As part of the overhaul, I switched out our color palette. We were previously using the 16-color Dawnbringer palette, but I thought that it was too 'cool' and blue-hued for my tests. There wasn't an expressive enough range of the colors I needed (brown, in this instance), and the colors I did have I straight-up didn't like. Browsing the palettes on LOSPEC.com, I found a much warmer palette called Fantasy 24 by Gabriel C. Turns out, the 8 extra colors were just what I needed! The palette looks and feels great; plus it comes with a helpful guide for the various levels of shading.
I'll admit - modeling this house has been really frustrating. When I used to make maps for Left 4 Dead 2, things were so much easier. All you needed to know was how to replicate the shape of something, and the UV-mapping nonsense was handled for you. In the Hammer Editor, anything could be made out of blocks if you just broke it down appropriately. Finely detailed models were for physics items and fine details, not broad structural elements. But Godot's pipeline sort of FORCES you to use Blender, which is so finicky and artisanal and has 5000 options to cover the two things that I need.
I do make these shapes using OpenSCAD - as much as I don't want to, it might be worth it to sit down and program these custom meshes in by hand. Maybe that would make our asset pipeline less of a nightmare...
Updating: 8/16/2020 11:00:00 PM
- Added: labuladong/fucking-algorithm (https://github.com/labuladong/fucking-algorithm/tree/english)
- Added: » The ergonomic mouse that saved my wrist by James Eftegarie (https://eftegarie.com/the-ergonomic-mouse-that-saved-my-wrist/)
- Added: Focus on what you can control (https://www.preetamnath.com/blog/focus-on-what-you-can-control)
- Added: Code Smell: Concrete Abstraction (https://matklad.github.io/2020/08/15/concrete-abstraction.html)
- Added: The “Easiest” Paths to Product Management (https://reeve.blog/blog/paths-to-product-management/)
- Added: Delete Your Social Media 📱 — Brendan Cahill (https://brendancahill.io/resources/delete)
- Added: Factorio and Software Engineering · Krishna's words (https://blog.nindalf.com/posts/factorio-and-software-engineering/)
- Added: Digital Sight Management, and the Mystery of the Missing Amazon Receipts (https://mssv.net/2020/08/16/digital-sight-management-and-the-mystery-of-the-missing-amazon-receipts/)
- Added: Using Kibana to Debug Production Issues | Preslav Mihaylov (https://pmihaylov.com/kibana-debugging-tutorial/)
Generation took: 00:06:31.8981944