< 2020-08-04 >

2,477,503 events, 1,239,823 push events, 1,970,408 commit messages, 152,615,581 characters

Tuesday 2020-08-04 00:20:27 by Brandon Wong

Fuck pygame for the songbuilder. we will use Qt and pygame because fuck you


Tuesday 2020-08-04 02:59:17 by Nathan VanBenschoten

[WIP] rfc: Consistent Read Replicas

Consistent Read Replicas provide a mechanism through which follower replicas in a Range can be used to serve reads for non-stale read-only and read-write transactions.

The ability to serve reads from follower replicas is beneficial both because it can reduce wide-area network jumps in geo-distributed deployments and because it can serve as a form of load-balancing for concentrated read traffic. It may also provide an avenue to reduce tail latencies in read-heavy workloads, although such a benefit is not a focus of this RFC.

_The purpose of this RFC is to introduce an approach to consistent read replicas that I think could be implemented in CockroachDB in the medium-term future. It takes inspiration from Andy Kimball and the AcidLib project. I'm hoping for this to spur a discussion about potential solutions to the problem and also generally about our ambitions in this area going forward.

The most salient design decision from this proposal is that it remains completely separate from the Raft consensus protocol. It does not interact with Raft, instead operating in the lease index domain. This is similar in spirit to our approach to implementing closed timestamps.

The RFC includes three alternative approaches that each address part of these issues, but come with their own challenges and costs. Unlike with most RFCs, I think it's very possible that we end up preferring one of these alternatives over the main proposal in the RFC._

Release note: None


Tuesday 2020-08-04 03:24:37 by Sharmayu Mirica

time in announce fixed

i can't edit this in vscode (fuck vscodium I'm already putting code in the github botnet what's the difference like fuck all those "muh freedom" software what the fuck are you talking about privacy when you're using a windows OS, also fuck linuxtards I swear you retards think your OS is relevant when the truth is only self-important basement dwellers use rolling shitdates like fuck that's so retarded you archtards think that makes you so smart no that just makes you irrelevant and without a real job don't get me started on ubuntu it's just amazing how you can be ABSOLUTELY RETARDED by using that like fuck if you're using ubuntu might as well just stick to windows you cum-guzzling shittoid or at least use a linux distro that isn't braindead retarded, so you can be just normally retarded like the rest of the linuxbabbies, fuck you /g/ fuck you so much) so I edited this directly on github, sorry for the multiple autism commits


Tuesday 2020-08-04 06:00:00 by quietly-turning

clarify inline comments

What am I doing with my life? Why does everything hurt so much? Why can't I act on any single idea I have? Am I lost to rumination forever? I could help people; I could switch jobs; I could move to a different city; I could find new hobbies; I could go on dates; I could find new friends; why is my life this empty, numb loop? Why do I keep going? When will the sea be still? Is it even possible? Can I do anything without fucking it up? Am I even capable of social interaction any longer? Excuse me, sir, will you walk with me to the bus stop?

The bus stop?

Yes, the bus stop by Starbucks.

The bus stop by Starbucks?

I've been trying to rely less on garbage cans, but I still have trouble stepping off the curb.

I. Yes. That's no problem. Let's go to the bus stop that's this way.

Thank you. Are you coming from work?

Not really. I'm walking to Rite Aid to buy cookies.

That's good. We're going the same way. You can go there after we get to Starbucks. Starbucks is just there, a little bit further, then you can get your cookie.

Mhm. Where are you off to today?

Squirrel Hill.

That's a nice town. I like Squirrel Hill. There are lots of good restaurants where you can get food with friends.

I like it, too. It sure is good we were going the same way today. Most people walk right by me. They can't hear because they have earphones.

Oh, I do that a lot too.

It's good you didn't today.


Tuesday 2020-08-04 06:48:39 by Roman Kuksin

adjust configuration based on subjective experience

It seems that testing datasets differ dramatically from real usage. The generated configuration is visibly awful. It is way too spacious, so you have to repeat and repeat the same correction dozens of times before the algorithm adjusts. Besides, the new configuration performs only a little bit worse on tests than the 'optimal' one. And it is much better subjectively. Science sucks!


Tuesday 2020-08-04 07:25:58 by Casey Pak

Added Functions Panel and restructured program

With custom made buttons :D Also made program unresizable so I can abuse null layout because all the other Swing layout is pure fucking cancer holy shit who the fuck designed those


Tuesday 2020-08-04 10:21:19 by Marko Grdinić

"10:45am. Both yesterday and this morning there were bouts of bad weather that I had to deal with by staying in bed.

Anyway, that back and forth in the PL monthly thread with /u/--comedian-- seems to be done. The sticking points for him were Julia and spiking NNs. Well, as to the latter, a reasonable assumption is that they will still be computers. As long as that holds, I will be able to control them. Like, I do not think they will have restrictions just for the sake of being restricted, and internally they should be von Neumann, using spikes for inter-core communication.

Though, I haven't looked into what communication with 'spikes' really entails. Are they strictly binary, or something else? Well, even if they are binary, you can still group them and use that to transmit floats. In the worst case, I might have to implement a messaging library like ZeroMQ specifically for spiking NNs.
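To make the grouping idea concrete, here is a minimal sketch (purely illustrative, not tied to any real neurochip API) that treats a group of 32 binary spikes as the bit pattern of a 32-bit float:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Encode a float as 32 binary spikes and decode it again. The spike
 * representation here is just an array of 0/1 bytes; a real device
 * would presumably batch many such groups per message. */
static void float_to_spikes(float x, uint8_t spikes[32])
{
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);   /* reinterpret the float's bits */
    for (int i = 0; i < 32; i++)
        spikes[i] = (bits >> i) & 1u; /* one spike per bit */
}

static float spikes_to_float(const uint8_t spikes[32])
{
    uint32_t bits = 0;
    for (int i = 0; i < 32; i++)
        bits |= (uint32_t)(spikes[i] & 1u) << i;
    float x;
    memcpy(&x, &bits, sizeof x);
    return x;
}

int main(void)
{
    uint8_t spikes[32];
    float_to_spikes(3.14159f, spikes);
    printf("%f\n", spikes_to_float(spikes)); /* round-trips the value */
    return 0;
}
```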

Plan B, if spiking NNs turn out to be too hard to handle, is to point out that many of the devices will not be spiking NNs, and will just be accelerators with on-core memory.

11:10am. Anyway, I have plenty of potential roads to go down.

11:15am. One thing that really bothers me right now is potential compile times. Again with the prototype instances. Will I really be ok?

If Spiral becomes popular, this will be an issue. I am afraid of this. I resolved GADTs, but the current way of doing prototypes very much necessitates recompiling everything from scratch every time.

11:25am. Wait, I figured it out!

Weeks ago I remember wracking my brain over the same problem, but I ended up concluding that for every type I'd also have to pass the instances explicitly. This is a prerequisite for doing that kind of implicit analysis, and it would be way too slow. This is not an acceptable solution.

Right now, nominals are just ids in the typechecker.

But suppose I put ids and instances together and then hash consed them to get back a single id once again?

That would work and would be fast. That would allow me to keep join points persistent between compilation runs.

This is an elegant way of implementing a very simple idea of changing the type id when some of its nominal instances change as well.

Memoization, is there anything it can't do?

11:30am. This is the holy grail of making the partial evaluator performant even in the presence of things like typeclasses, and I just found it.
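As a rough sketch of what this could look like (illustrative only, not Spiral's actual representation), the nominal's public identity can be an interned pair of its raw id and a fingerprint of its instances, so a changed instance automatically yields a fresh id and stale memoized join points are simply never looked up again:

```c
#include <stdint.h>

/* Minimal hash-consing sketch: intern (nominal_id, instances_hash) pairs
 * into a table so the same pair always maps to the same slot id, while a
 * changed pair maps to a new one. */
typedef struct {
    uint32_t nominal_id;     /* id assigned by the typechecker */
    uint64_t instances_hash; /* fingerprint of the prototype instances */
} nominal_key;

#define TABLE_SIZE 1024
static nominal_key table[TABLE_SIZE];
static int table_used[TABLE_SIZE];

/* Return the interned id for the pair, or -1 if the table is full. */
static int hashcons(uint32_t nominal_id, uint64_t instances_hash)
{
    uint64_t h = (nominal_id * 0x9E3779B97F4A7C15ull) ^ instances_hash;
    for (unsigned i = 0; i < TABLE_SIZE; i++) {
        unsigned slot = (unsigned)((h + i) % TABLE_SIZE); /* linear probing */
        if (!table_used[slot]) {
            table[slot].nominal_id = nominal_id;
            table[slot].instances_hash = instances_hash;
            table_used[slot] = 1;
            return (int)slot;
        }
        if (table[slot].nominal_id == nominal_id &&
            table[slot].instances_hash == instances_hash)
            return (int)slot; /* already interned */
    }
    return -1;
}
```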

11:35am. Somewhat ironically, I could not have tapped into this improvement if Spiral were still a command line tool like in the v0.09 era. It is only because I am doing editor support that the language compiler is a server rather than a one-off function. It is only thanks to it being a server that it can afford to have memory between compilation runs.

11:50am. Where did the last 15m go?

At any rate, yesterday I finalized the type system design, and just now I have everything I need to make the overall design complete from a performance standpoint.

It is fear that spurred this on. Yesterday on HN I saw a Julia thread and people were complaining that individual packages sometimes take a dozen seconds or more to compile. This is the one thing I want to avoid most of all in Spiral, but I could not figure out how to do that until now.

Now, I do not think Spiral will be anywhere as slow as Julia in overall compilation, but redoing everything from scratch every time would have been a large overhead regardless of the situation. I do want to do the most I can to make the join point information between runs persistent.

12pm. Let me take a break here. There wasn't time to do any programming in the morning anyway, and I made the ideal use of this time to resolve another design issue.

12:05pm. What I will try to do from here is hone in on my desire to do type inference for terms. Maybe I'll have to spend another day in bed to do this.

I know that I did a lot of this yesterday, but rather than motivating myself to start the term level, I struggled with the issues surrounding GADTs. And it is true that I spent a lot of time brainstorming how to do unification. I was really motivated to figure out how to do all aspects of it, and now I have that, but I do not yet have the strong urge to do it.

As this morning showed, rather than being motivated to start coding, I am still thinking about other things. I want to take some time to think about whether I really have everything to start dealing with inl statements. If I do, there will be nothing stopping me from finishing the language.

It is much easier to make progress when you have a clear road in front of you instead of curves at every turn. Have I straightened it out enough in my mind?"


Tuesday 2020-08-04 11:41:31 by Petter Knudsen

Merge pull request #2 from Styreportalen-AS/feature/unfuckifyThisGarbageMaybe

idnu help god fuck shit stack


Tuesday 2020-08-04 13:39:38 by Tanner

pdPSD: add support for reading (and writing!) layer groups

Relates to #305. Thank you to @Fivavoa for his continued feedback on this feature.

PhotoDemon doesn't support layer groups (yet). This makes it difficult to use layer group data stored in PSD files.

As a workaround, I've updated PhotoDemon's PSD engine so that it now creates blank "dummy" layers for PSD layer groups. These 1x1 px transparent layers require minimal overhead (unlike, say, Paint.NET where layers must always be the full size of the image) and they give the user a way to preserve layer groups when loading/saving PSD data - and even better, users can actually create/rearrange/delete those dummy layer groups, and their changes can be saved back out to PSD using Photoshop's original group encoding strategy!

This means that in a way, PD does actually support "layer groups".... albeit with an extremely crappy UI. :D In truth, though, this is a large improvement over other free editors. For example, Paint.NET's (sorry to pick on them again) PSD plugin supports importing layer groups as "dummy" layers, but when saving an image back out to PSD, layer groups get saved as full-sized transparent dummy layers, without any group data attached! It's not a good arrangement, as the resulting PSD is now much larger, doesn't contain groups, and is totally different from its original version.

Anyway, here's the catch to all this - I have tested these new encoder/decoder features extensively, but only on 3rd-party PSD engines. (GIMP in particular has been extremely helpful for checking layer group behavior.) I don't own a copy of Photoshop, so I need outside help verifying that my current approach works okay with PS.

If you encounter any bugs or surprises, as always, please let me know.

(Also, this unfortunately breaks my strict rule of "no new translation text during a beta", but I hope this feature's usefulness makes up for it. I am so sorry to PD's translators for modifying language files this close to release!)


Tuesday 2020-08-04 14:06:43 by George Spelvin

lib/sort: make swap functions more generic

Patch series "lib/sort & lib/list_sort: faster and smaller", v2.

Because CONFIG_RETPOLINE has made indirect calls much more expensive, I thought I'd try to reduce the number made by the library sort functions.

The first three patches apply to lib/sort.c.

Patch #1 is a simple optimization. The built-in swap has special cases for aligned 4- and 8-byte objects. But those are almost never used; most calls to sort() work on larger structures, which fall back to the byte-at-a-time loop. This generalizes them to aligned multiples of 4 and 8 bytes. (If nothing else, it saves an awful lot of energy by not thrashing the store buffers as much.)

Patch #2 grabs a juicy piece of low-hanging fruit. I agree that nice simple solid heapsort is preferable to more complex algorithms (sorry, Andrey), but it's possible to implement heapsort with far fewer comparisons (50% asymptotically, 25-40% reduction for realistic sizes) than the way it's been done up to now. And with some care, the code ends up smaller, as well. This is the "big win" patch.

Patch #3 adds the same sort of indirect call bypass that has been added to the net code of late. The great majority of the callers use the builtin swap functions, so replace the indirect call to sort_func with a (highly predictable) series of if() statements. Rather surprisingly, this decreased code size, as the swap functions were inlined and their prologue & epilogue code eliminated.

lib/list_sort.c is a bit trickier, as merge sort is already close to optimal, and we don't want to introduce triumphs of theory over practicality like the Ford-Johnson merge-insertion sort.

Patch #4, without changing the algorithm, chops 32% off the code size and removes the part[MAX_LIST_LENGTH+1] pointer array (and the corresponding upper limit on efficiently sortable input size).

Patch #5 improves the algorithm. The previous code is already optimal for power-of-two (or slightly smaller) size inputs, but when the input size is just over a power of 2, there's a very unbalanced final merge.

There are, in the literature, several algorithms which solve this, but they all depend on the "breadth-first" merge order which was replaced by commit 835cc0c8477f with a more cache-friendly "depth-first" order. Some hard thinking came up with a depth-first algorithm which defers merges as little as possible while avoiding bad merges. This saves 0.2*n compares, averaged over all sizes.

The code size increase is minimal (64 bytes on x86-64, reducing the net savings to 26%), but the comments expanded significantly to document the clever algorithm.

TESTING NOTES: I have some ugly user-space benchmarking code which I used for testing before moving this code into the kernel. Shout if you want a copy.

I'm running this code right now, with CONFIG_TEST_SORT and CONFIG_TEST_LIST_SORT, but I confess I haven't rebooted since the last round of minor edits to quell checkpatch. I figure there will be at least one round of comments and final testing.

This patch (of 5):

Rather than having special-case swap functions for 4- and 8-byte objects, special-case aligned multiples of 4 or 8 bytes. This speeds up most users of sort() by avoiding fallback to the byte copy loop.

Despite what ca96ab859ab4 ("lib/sort: Add 64 bit swap function") claims, very few users of sort() sort pointers (or pointer-sized objects); most sort structures containing at least two words. (E.g. drivers/acpi/fan.c:acpi_fan_get_fps() sorts an array of 40-byte struct acpi_fan_fps.)
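A rough userspace sketch of the idea (not the kernel's actual code; names are illustrative) looks like this: check alignment once, then swap whole 64-bit words instead of single bytes whenever the pointers and element size allow it:

```c
#include <stddef.h>
#include <stdint.h>

/* Swap two 8-byte-aligned objects word by word. */
static void swap_words_64(void *a, void *b, size_t n)
{
    uint64_t *pa = a, *pb = b;
    for (size_t i = 0; i < n / 8; i++) {
        uint64_t t = pa[i];
        pa[i] = pb[i];
        pb[i] = t;
    }
}

/* Fallback: byte-at-a-time swap for everything else. */
static void swap_bytes(void *a, void *b, size_t n)
{
    unsigned char *pa = a, *pb = b;
    for (size_t i = 0; i < n; i++) {
        unsigned char t = pa[i];
        pa[i] = pb[i];
        pb[i] = t;
    }
}

/* Dispatch: any aligned multiple of 8 gets the fast path, not just
 * exactly-8-byte objects. A 4-byte variant works the same way for
 * (align & 3) == 0. */
static void generic_swap(void *a, void *b, size_t n)
{
    uintptr_t align = (uintptr_t)a | (uintptr_t)b | n;
    if ((align & 7) == 0)
        swap_words_64(a, b, n);
    else
        swap_bytes(a, b, n);
}
```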

The functions also got renamed to reflect the fact that they support multiple words. In the great tradition of bikeshedding, the names were by far the most contentious issue during review of this patch series.

x86-64 code size 872 -> 886 bytes (+14)

With feedback from Andy Shevchenko, Rasmus Villemoes and Geert Uytterhoeven.

Link: http://lkml.kernel.org/r/f24f932df3a7fa1973c1084154f1cea596bcf341.1552704200.git.lkml@sdf.org
Signed-off-by: George Spelvin [email protected]
Acked-by: Andrey Abramov [email protected]
Acked-by: Rasmus Villemoes [email protected]
Reviewed-by: Andy Shevchenko [email protected]
Cc: Rasmus Villemoes [email protected]
Cc: Geert Uytterhoeven [email protected]
Cc: Daniel Wagner [email protected]
Cc: Don Mullis [email protected]
Cc: Dave Chinner [email protected]
Signed-off-by: Andrew Morton [email protected]
Signed-off-by: Linus Torvalds [email protected]
Signed-off-by: Dede Dindin Qudsy [email protected]
Signed-off-by: Vaisakh Murali [email protected]


Tuesday 2020-08-04 15:55:09 by David Laban

merge doesn't work on shallow clones

This is annoying, because you'd think that git would be able to work out that I want to pull, and therefore it should deepen its clones of both branches until they either have a common ancestor or are proven to really not share a root. Maybe it's not that simple.


Tuesday 2020-08-04 19:12:56 by Venarir

holy fuck ing shit

m60 for murky dozer!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! im suffering!!!!!!!!!!!!!!!!!!!


Tuesday 2020-08-04 19:47:48 by Marko Grdinić

"1:40pm. Let me just slack for a bit more and then I'll go spend some time in bed to think about what I want to program next. It feels like the more experience I get as a programmer the more time I keep taking away from the screen to just think about programming.

2:10pm. Enough slacking, time for bed. I am going to let my thoughts wander and try to hone in on the path of getting the terms type inferred. I'll try to gather the motivation to do it. Then I will just do it. Without hesitation and without turning back.

I will finish Spiral v0.2. While v0.09 was amateurish, I have enough knowledge and experience and skill to make v0.2 truly professional quality.

7:20pm. It is late and my soul is burning.

I do not feel like starting just yet, but I want to get a few things off my chest.

I feel like I do not want to lose again. Through life you have to go through all sorts of disrespect, and such scenarios can only be avoided by having power. Since I feel this kind of silent fury, let me talk about one of my regrets.

I guess that back and forth on the PL thread had something to do with it.

Going back to 2018, there is one thing I really regret during the making of the ML library - not being able to resolve GPU memory management. I just could not help it. .NET GC only manages regular memory, and unlike Python it does not use reference counting which would free as soon as it went out of scope. So I had to do it manually.

There are plenty of things I regret during that time that were the fault of Spiral itself, but this one was the fault of the platform itself.

For doing a ML library, .NET was the wrong choice.

Ironically, I think for neurochips, it will be the right choice as those chips won't use the same model.

Maybe I should have explained to /u/--comedian-- the basic properties of pretty much all the new wave of hardware.

They are going to have internal memory. That means no pointer management in host code like GPUs. Instead all communication is going to be done via message passing. This applies both to CPU-neurochip, and inter-neurochip communication. This is actually going to make the ML library design easier as I will only have to think about serialization, and Spiral is the best possible tool imaginable for that kind of task.

The way I imagine it, a neurochip is like a personal supercomputing cluster. Instead of a single CPU that accesses the main memory, it is more like communication over the Internet. One of the reasons why I put so much effort into the ZeroMQ guide is preparation for doing programming in this new regime. Rx, and Hopac as well. The more concurrency I have under my thumb before I start, the better off I will be when I get my hands on one of those chips.
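For a flavor of that message-passing style, here is a hedged sketch (the endpoint and function name are made up for the example) that serializes a buffer of floats and pushes it over a ZeroMQ PUSH socket, the way one might feed an accelerator that owns its own memory:

```c
#include <zmq.h>
#include <stddef.h>

/* Push a raw block of floats to a device endpoint; returns 0 on success.
 * In a real system the payload would carry framing/serialization metadata. */
int send_weights(const float *weights, size_t count)
{
    void *ctx = zmq_ctx_new();
    void *sock = zmq_socket(ctx, ZMQ_PUSH);
    int rc = zmq_connect(sock, "tcp://127.0.0.1:5555"); /* illustrative endpoint */
    if (rc == 0)
        rc = zmq_send(sock, weights, count * sizeof *weights, 0);
    zmq_close(sock);
    zmq_ctx_destroy(ctx);
    return rc < 0 ? -1 : 0;
}
```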

7:30pm. Them being called 'neuro' and whatever is just a buzzword. An accurate description for the new wave of hardware is personal supercomputing.

I have absolutely no insider information on any of those chips, but I do not need to because their form is blindingly obvious. The guys at Intel are not going to come up with something completely detached from the development of technology up to now, nor are they going to spearhead a completely new and unknown programming model. Instead they are just going to bolt memory closer to the CPU and try to make that work for ML.

That is the way things are going to go. Eventually memristors or something else will finally have the breakthrough they need to be mass produced at scale and that will herald the arrival of hardware good enough to start the Singularity.

7:40pm. I need to get better.

The reason why I keep going out just to lose is because I am not good enough.

If I had the knowledge and the skill to make v0.2 back then, 2018 would have gone so much differently. It is true - a language really does determine your limits as a programmer. I might be just spinning my wheels here, but at least unlike many other people in this profession I have a clear path towards getting better. While those guys are making money and accolades, I'll be at the bottom of hell forging my weapons.

It is easy to forget just how much I exceeded the 2016 me in 2018. I will aim to repeat that - in 2020/2021 I will create as big of a gap as during that time. I am going to completely rebirth my programming.

I am never again going to allow myself to be a bitch to the platform.

If programming neurochips turns out to be elaborate, I will in fact write my own GC for those things. It is interesting - there is no reason why I need to limit myself to C-style programming on those things. Maybe implementing a simple VM that does reference counting GC or something simple like that would be good.

This is always a principle I am going to follow. There is no need to go the Rust way.

I am a compiler writer. Unlike other programmers, I always have the option of doing the right thing. Spiral is unique amongst high-level languages in how precisely it lets you control memory allocation. It is never going to be a situation like in C# or Java where too many objects are being passed around.

For reference counting, even weaknesses like not allowing cyclical data structures are something that can easily be avoided in Spiral. You can create cycles using indirect rather than direct pointers easily enough. And you can't do them accidentally. This is not like a dynamic language where you can have an array accidentally point to itself or something like that. RC would have the important advantage of making it easy to manage file handles and GPU memory.
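A bare-bones sketch of the reference-counting idea (illustrative only; the destroy callback stands in for whatever frees a GPU buffer or closes a file handle):

```c
#include <stdlib.h>

/* A counted handle that frees its payload as soon as the last reference
 * is released, eagerly, unlike a tracing GC. */
typedef struct {
    int refs;
    void *payload;
    void (*destroy)(void *payload); /* e.g. a cudaFree or fclose wrapper */
} rc_handle;

static rc_handle *rc_new(void *payload, void (*destroy)(void *))
{
    rc_handle *h = malloc(sizeof *h);
    if (h) { h->refs = 1; h->payload = payload; h->destroy = destroy; }
    return h;
}

static void rc_retain(rc_handle *h)  { h->refs++; }

static void rc_release(rc_handle *h)
{
    if (--h->refs == 0) {       /* last owner */
        h->destroy(h->payload); /* free the resource immediately */
        free(h);
    }
}
```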

Spiral's efficiency fundamentally changes the tradeoff between GC methods. Not to mention there are huge opportunities to make better GC since it produces strongly typed, monomorphic code. This is not the case with .NET.

I need to keep going forward. I need to level up once more. Without a doubt, the day where I have good and strong agents under my command will arrive. Until then I need to keep gathering my power and increasing my programming skill in whatever way is possible. I cannot forget my desperation, but if I have to lose, it will be in a different way than before.

7:55pm. Now the typechecker - let me just do it. Haskell might be using unification, but it definitely is not using that to typecheck GADTs. Whatever it is doing for that, it is not unification. And whatever it is doing is too much for me so I am not going to concern myself with it at all.

Like imagine if I had to program on some platform that did not have GC. I've already decided that I would just do reference counting if not a proper GC. This makes a lot more sense than implementing Rust-style memory management using types.

The same goes for GADTs. Having them in the language would play havoc with the bottom-up parts and just make the language more brittle.

8pm. I hear thunder again so let me run.

8:55pm. Why did I even get up?

I want to finish the rant and then close for the day. The weather has been crap all day, but even without that I'd have spent it in bed just thinking.

And the conclusion is - I need to accept both my wins and losses. In 2018 I won in the sense that I made the ML library. I also established Spiral's staged functional programming style. I also established partial evaluation as a foundation of computation and one of the core ideas in language design. I might not have been the one to discover it, but I was the one to illuminate the worth of join points and compile time inl functions.

The loss is that I could not get far with ML and could not make any money out of that effort. Those are real world concerns.

Also, I need to accept that despite Spiral's strengths, the design of those times has serious issues and that many features and techniques I discovered are just better left in the ground.

v0.2 will be much better and will allow me to level up as a programmer once again.

I need to believe that it is the absolute peak design. Even if it does not have GADTs like Haskell, even if it does not allow type-driven memory management like Rust. Even with the relatively weak type system compared to Haskell, it still crushes it in terms of expressive power and performance. GADTs do not matter; what really matters is how well the type system interfaces with the partial evaluator side. In that regard, core ML + higher kinded types leads to the highest degree of integration. And that is what really matters.

v0.2 will be my way of proving to the world that the low style is the strongest.

If just expressiveness and power were enough, then I would have met all my goals with Spiral v0.09. Just having a strong foundation is not enough. Types are not enough. Partial evaluation is not enough.

To achieve excellence you need everything - the language has to be geared towards productivity and scaling, the runtime must support the style it enables, and the editor support needs to be there.

Every single mistake I made in 2018 I will fix, and every single compromise I will abandon.

For Spiral v0.2 I will pick every single low hanging fruit there is. It will be like a basket of positive habits and good ideas.

9:15pm. It won't be a pileup of research like some other languages. People from the future will marvel as they look at its source and it will be the kind of thing you would prefer to teach in a university. Spiral will be a joy to program in and its libraries won't evoke berserk rage in its users like Scala collections do.

Always and always, you can always put more things in a language. You can have higher ranked types, or GADTs or affine types or...

I am going to place a line exactly where it should be. I am going to pick the point of highest productivity so the language is not about playing games with the type system. I've learned to stop pushing at the end of 2018.

In 2017 and 2018 I was an innovator, but here I will be a thief.

9:20pm. What is the point of winning unless it gives you what you want?

I am going to give up on the other languages apart from Spiral. Spiral is the hope of the future.

The simplicity of the language matters. I want Spiral to be both simple to learn and to use and simple under the hood. Not having GADTs saves me a lot of work. Without GADTs, the language will need almost no type annotations (apart from records). I won't need to put the return type anywhere. I will have true global type inference.

Core ML + HKTs are just so great in the way they interact with the partial evaluator. Them filling in the foralls on their own means it will be a lot easier to assign them to data structures that need them, such as union types and arrays. Those were a huge pain.

Restrictions in the right manner give you as much as they lose you.

For example, splitting a union type (and by that I mean an actual type) in v0.09 would have to be in a tuple, but in v0.2, since they will all have unique keys...


Agh...now that I think about it, I forgot to validate that, let me add a TODO for tomorrow.

| TyPair(TySymbol x, b) -> Map.add x b cases

Actually, let me just fix it now.

                        | TyPair(TySymbol x, b) ->
                            if Map.containsKey x cases then errors.Add(range_of_texpr expr, DuplicateKeyInUnion); cases
                            else Map.add x b cases

Great. I did something for the day. Actually, I realized this while I was in bed, but then forgot it until now.


Let me resume the rant. Since the unions will have unique keys, I can just return their split type as a record. And this will resolve one of the things which was a big headache for me back in v0.09, the thing that caused me to put in that crappy case fold. It will allow fast indexing when it comes to union types. It will really streamline the serialization code. I won't need to do a linear find for every single item of a tuple. And I won't need to do that bizarre fold over the cases.

9:35pm. Adding more structure, putting restrictions into the language helps. I actually did not see this until now, but it will work.

9:40pm. In 2017/2018 I was really very much in love, trying to find the depth of Spiral. But v0.2 will be mature and professional as a language.

9:45pm. Let me go to bed here, this time to sleep rather than to think. Tomorrow I am not sure what I will do, but my motivation is gathering.

I want to work when I work with my full will on the task at hand. But on the other hand I also do not want to be beset by doubts. If the design is really complete then I should not hesitate regarding what I want to do.

My programming is intense, but I'd like it and my life to be more restful. If I have drawn the lines correctly, I should be able to attain that.

I feel like brainstorming a bit more tomorrow and then drawing a line and doing it more normally."


Tuesday 2020-08-04 20:03:22 by Misa

Ax OverlaySurfaceKeyed(), set proper foregroundBuffer blend mode

So, earlier in the development of 2.0, Simon Roth (I presume) encountered a problem: Oh no, all my backgrounds aren't appearing! And this is because my foregroundBuffer, which contains all the drawn tiles, is drawing complete black over it!

So he had a solution that seems ingenious, but is actually really really hacky and super 100% NOT the proper solution. Just, take the foregroundBuffer, iterate over each pixel, and DON'T draw any pixel that's 0xDEADBEEF. 0xDEADBEEF is a special signal meaning "don't draw this pixel". It is called a 'key'.

Unfortunately, this causes a bug where translucent pixels on tiles (pixels between 0% and 100% opacity) didn't get drawn correctly. They would be drawn against this weird blue color.

Now, in #103, I came across this weird constant and decided "hey, this looks awfully like that weird blue color I came across, maybe if I set it to 0x00000000, i.e. completely transparent black, the issue will be fixed". And it DID appear to be fixed. However, I didn't look too closely, nor did I test it that much, and all that ended up doing was drawing the pixels against black, which more subtly disguised the problem with translucent pixels.

So, after some investigation, I noticed that BlitSurfaceColoured() was drawing translucent pixels just fine. And I thought at the time that there was something wrong with BlitSurfaceStandard(), or something. Further along later I realized that all drawn tiles were passing through this weird OverlaySurfaceKeyed() function. And removing it in favor of a straight SDL_BlitSurface() produced the bug I mentioned above: Oh no, all the backgrounds don't show up, because my foregroundBuffer is drawing pure black over them!

Well... just... set the proper blend mode for foregroundBuffer. It should be SDL_BLENDMODE_BLEND instead of SDL_BLENDMODE_NONE.

Then you don't have to worry about your transparency at all. If you did it right, you won't have to resort to this hacky color-keying business.
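In SDL2 terms, the fix boils down to something like this sketch (the variable and function names are illustrative, not VVVVVV's actual ones):

```c
#include <SDL.h>

/* Give the tile buffer an alpha-blending blend mode so fully transparent
 * pixels let the background show through, instead of color-keying
 * 0xDEADBEEF pixels by hand. */
void blit_foreground(SDL_Surface *foregroundBuffer, SDL_Surface *backBuffer)
{
    /* SDL_BLENDMODE_BLEND: dst = src*srcA + dst*(1 - srcA) */
    SDL_SetSurfaceBlendMode(foregroundBuffer, SDL_BLENDMODE_BLEND);
    SDL_BlitSurface(foregroundBuffer, NULL, backBuffer, NULL);
}
```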

sigh


Tuesday 2020-08-04 20:22:05 by Ibraheem Ali

Update 10-findable.tex

I removed the brackets on line 200. I put that in because I was worried that not all readers would know what you mean about textual structures and it feels sort-of implied in the way it is currently written that the reader already knows. This document is the first time I'm seeing the term 'textual structure' to describe headings and text-organization based navigation mechanisms, but I know less about this topic - so if you feel like it is sufficiently defined, I am satisfied.

With regard to the changes on lines 499 & 500: I rephrased my point, but I am willing to revert it to the original. I'll admit that the phrase "newcomers feel more welcome" was a bit triggering for me. One issue that I have with big, powerful institutions (like UCLA, but many others) is that when they think of diversity and inclusion of new people, they focus on making people feel welcome as if that is the most important issue with regard to individual retention. The community focuses on being extra polite and courteous and 'welcoming' to new people. This is fine, but at the same time institutions tend towards skepticism of new ideas. People in power won't always go out of their way to understand where the newcomers are coming from; they often expect it to be explained to them. For BIPOC entering technical fields trying to make transformative change: when they bring up ideas rooted in social justice or inclusion or anything that goes against the status quo, those ideas are often viewed with knee-jerk hostility. So the end result is a situation where a newcomer feels welcome in person, but their opinions and ideas are not valued in practice, something they consistently experience throughout their life. Anyway - this long-winded comment is based on context that might be only peripherally relevant to what we are writing here. That said, I always try to insert a point of caution when using the term "welcoming" in the context of newcomers in a field. I tried to reflect this in my revision. Let me know what you think - I am also happy to revert it back to the original language if what I wrote is still unclear or out of context.


Tuesday 2020-08-04 21:54:29 by Daniel Hazelton

TL;DR - This old man done fucked up and made shit that be really stupid, yo. Much thanks to @DarkGuardsman for pointing out the idiocy and beating me over the head with it.

When I tried to add the Fluid Discharge we started having the old OOM's and NPE's that I thought I'd squashed. After bouncing some ideas off @DarkGuardsman it was suggested that I take a look at "Assembly Line" and "Resonant Induction". In doing so I discovered we could have some helpers in a better place and that the Fluid Discharge was almost acting like "magic" in that it'd discharge fluids into a 32x32x32 area instead of just under it. At the same time it became clear that the code I had for the drain was nonsensical as it looked for anything within a 32 block distance above it, ignoring things like boundary walls and other separators.

This is the start of a rewrite of both to make more sense prior to moving on to adding the other machines


Tuesday 2020-08-04 23:35:00 by Dani Stem

the final project

so what do you think? isnt she lovely? just a little program who wants to talk about breakfast


< 2020-08-04 >